How Can Schools Manage AI in Admissions?

Many questions remain around the role of artificial intelligence in admissions as schools navigate the balance between innovation and integrity.  

Artificial intelligence has officially taken root in higher education. Students are turning to ChatGPT and other generative AI tools for assignments and exams. Professors are assigning coursework that allows or requires AI tools, or even deploying AI tutors to give students formative feedback, all while institutions scramble to adopt policies and standards governing the use of AI.

With AI increasingly prevalent across higher education, schools have had to quickly adapt and are gaining greater clarity and comfort around how students, faculty, and staff can leverage AI tools responsibly and ethically.

However, one area where institutions still face many open-ended questions is the role AI plays in admissions. How do schools know whether an applicant used AI tools in their application? What can they do to enforce new policies and rules before a prospective student sets foot on campus? Does AI give applicants an unfair advantage? Or can these tools improve access for historically underserved populations?

As admissions leaders and administrators contemplate these difficult questions, they will need to find the right balance between innovation and integrity. By establishing clear, practical guidelines for AI tools from the start — and reexamining traditional admissions practices — schools can set expectations for prospective students and ensure AI tools are used ethically and effectively throughout their academic journey.

How AI Is Transforming the Admissions Landscape

There's a lot we still don't know about how students use AI when they apply to schools. But one thing is certain: Applicants are tempted to use AI tools to help write personal essays and other application materials, and many are already doing so.

In fact, nearly half of current undergraduate and graduate students in a recent survey said they would have used ChatGPT and other AI tools to help complete college admissions essays had these tools been available — even though 56% said using AI tools provides an unfair advantage on college applications.

How can schools maintain a fair and robust admissions process in an academic environment where AI tools are increasingly prevalent and, many times, acceptable? So far, admissions teams have remained generally skeptical of AI tools. In fact, many institutions have adopted AI detection tools that can identify and reject AI-generated content in personal essays and statements. However, these detection tools are imperfect and have been known to falsely accuse students of cheating.

While schools are rightfully concerned about ethical standards and academic integrity, a zero-tolerance approach to AI is becoming increasingly difficult to enforce in a fair, consistent manner.

Moreover, these rigid admissions policies are out of step with the ways students are allowed, or even encouraged, to use AI tools in their academic and professional careers. For instance, a candidate who relies on ChatGPT to write an entire essay, start to finish, is very different from one who uses it to help with research or to organize their thoughts. It makes little sense to reject an otherwise qualified student for using AI in exactly the way they would for a class or in a future job.

When used responsibly, AI can actually level the playing field for applicants from historically marginalized backgrounds or under-resourced communities, who may lack access to resources to help with applications or be unfamiliar with expectations. First-generation college candidates, for example, could use ChatGPT to review their application essays before submission.

AI isn't going anywhere — and the ways applicants leverage these tools will only continue to expand. As the landscape evolves, institutions need a strategy that provides transparent, practical, and consistent guidelines in the admissions process and beyond.

Revamping Admissions Policies for the AI Era

We are only at the beginning of understanding and governing AI tools. You may not yet have all the answers to important questions and concerns, but working through these challenges is an opportunity for learning.

Admissions leaders can rethink and revamp policies to better reflect AI technology's challenges and capabilities. While each institution needs to consider its own strategy, the following considerations can help keep admissions processes in line with today's digital landscape.

1) Set clear, consistent expectations from admissions to graduation. Admissions is the first touchpoint students have with your institution. By providing clear, transparent expectations and guidelines regarding the use of AI from the start, you can set students up for success — ensuring applicants understand what is acceptable and what constitutes a violation of academic integrity.

Policies should also remain generally consistent across the entire institution. Admissions leaders can learn from faculty, department heads, deans, and other academic leaders who have already implemented rules and policies regarding AI use in classrooms, syllabuses, and elsewhere.

At many institutions, there may be inconsistencies among policies that define what constitutes plagiarism, cheating, and misuse of AI tools in essays and written materials. But institutions are also seeing growing consensus and best practices develop around academic honesty and AI — and consistent expectations will avoid confusion and misunderstandings for students, faculty, and administrators alike.

2) Develop expertise among your own admissions teams. By gaining firsthand experience with these tools, admissions teams can better govern and manage how applicants use AI (both ethically and not). With a better understanding of generative AI capabilities and limitations, they can make more informed decisions about how applicants should take advantage of these tools.

A better understanding of AI also helps increase efficiency and expand the capacity of overstretched admissions teams. With AI, teams can quickly sort through large volumes of applications, categorizing them based on predefined criteria such as GPA, test scores, and extracurricular involvement. They might also leverage predictive analytics to help gauge which admitted students are most likely to accept an offer of admission to improve enrollment management and planning.
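To make the sorting idea concrete, here is a minimal, purely illustrative sketch of rule-based application triage. The field names (`gpa`, `test_score`, `extracurriculars`) and thresholds are hypothetical, not any institution's actual criteria, and a real system would pair rules like these with human review:

```python
# Illustrative only: rule-based triage of applications by predefined criteria.
# Thresholds and field names are hypothetical examples, not policy.

def categorize(applicant: dict) -> str:
    """Bucket an application so reviewers can prioritize their queue."""
    gpa = applicant.get("gpa", 0.0)
    test = applicant.get("test_score", 0)
    activities = applicant.get("extracurriculars", 0)
    if gpa >= 3.7 and test >= 1400:
        return "priority review"
    if gpa >= 3.0 or (test >= 1200 and activities >= 2):
        return "standard review"
    return "committee review"

applicants = [
    {"name": "A", "gpa": 3.9, "test_score": 1480, "extracurriculars": 3},
    {"name": "B", "gpa": 3.2, "test_score": 1250, "extracurriculars": 1},
    {"name": "C", "gpa": 2.6, "test_score": 1100, "extracurriculars": 0},
]
buckets = {a["name"]: categorize(a) for a in applicants}
# Every application still reaches a human reviewer; the rules only order the queue.
```

Even in this toy form, the design choice matters: transparent, auditable criteria make it possible to explain (and correct) how applications are prioritized, which is harder with opaque models.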

Most institutions are already starting this process. Half of admissions offices in a 2023 survey reported using AI to review applications. In 2024, 80% said they would integrate AI into review processes.

3) Integrate more holistic admissions practices. Traditional application metrics are in desperate need of an update. Personal statements, essays, and recommendation letters provided low reliability even before AI. Students have gained a significant advantage by relying on family or friends to help them draft and refine personal statements — and that was before ChatGPT, Bard, or other AI tools entered the mix. Other traditional inputs like standardized testing also introduce substantial bias and disadvantages for students. 

The challenges and considerations of AI further underscore the need for holistic admissions practices that assess the full range of an applicant's life experiences, capabilities, and potential. Rather than relying solely on test scores and essays, admissions leaders can also take into account soft skills like communication, teamwork, and creative thinking to make better decisions about students who bring diverse skills, experiences, and talents to campus.

Admissions Serves as an AI Learning Moment

Admissions is a critical starting point in a student's academic journey. Guiding and supporting the responsible and ethical use of AI tools helps prepare students for the rest of their academic and professional careers — it's among the first of many educational moments they will experience at your institution.

Clear, consistent admissions policies take into account the nuances and complexities of AI and the application process. Updating admissions policies empowers you to uphold academic integrity while setting your students — and your institution — up for success in the era of AI.
