How Can Schools Manage AI in Admissions?

Many questions remain around the role of artificial intelligence in admissions as schools navigate the balance between innovation and integrity.  

Artificial intelligence has officially taken root in higher education. Students are turning to ChatGPT and other generative AI tools for assignments and exams. Professors are assigning coursework that allows or requires AI tools, or even deploying AI tutors to give students formative feedback, all while institutions scramble to adopt policies and standards governing the use of AI.

With AI increasingly prevalent across higher education, schools have had to quickly adapt and are gaining greater clarity and comfort around how students, faculty, and staff can leverage AI tools responsibly and ethically.

However, one area where institutions still face many open-ended questions is the role AI plays in admissions. How do schools know whether an applicant has used AI tools? What can they do to enforce new policies and rules before a prospective student sets foot on campus? Does AI give an unfair advantage to applicants? Or can these tools be used to improve access for historically underserved populations?

As admissions leaders and administrators contemplate these difficult questions, they will need to find the right balance between innovation and integrity. By establishing clear, practical guidelines for AI tools from the start — and reexamining traditional admissions practices — schools can set expectations for prospective students and ensure AI tools are used ethically and effectively throughout their academic journey.

How AI Is Transforming the Admissions Landscape

There's a lot we still don't know about how students use AI when they apply to schools. But one thing is certain: Applicants are tempted to use AI tools to help write personal essays and other application materials, and many are already doing so.

In fact, nearly half of current undergraduate and graduate students in a recent survey said they would have used ChatGPT and other AI tools to help complete college admissions essays had those tools been available. That's despite 56% of respondents saying that AI tools provide an unfair advantage on college applications.

How can schools maintain a fair and robust admissions process in an academic environment where AI tools are increasingly prevalent and, in many cases, acceptable? So far, admissions teams have remained generally skeptical of AI. Many institutions have adopted AI detection tools intended to identify and reject AI-generated content in personal essays and statements. But these detection tools are imperfect and have been known to falsely accuse students of cheating.
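To put the risk in concrete terms, here is a back-of-the-envelope sketch in Python. Every figure in it is a hypothetical assumption, chosen only to show how even a small false-positive rate translates into hundreds of wrongly flagged applicants in a single cycle.

```python
# Hypothetical illustration: why "accurate" AI detectors still misfire at scale.
# All figures are assumptions for the sake of the arithmetic, not real data.

applicants = 50_000          # essays screened in one admissions cycle
ai_assisted_share = 0.30     # assumed fraction of essays written with AI help
false_positive_rate = 0.02   # detector flags 2% of fully human-written essays
true_positive_rate = 0.90    # detector catches 90% of AI-assisted essays

human_written = applicants * (1 - ai_assisted_share)
ai_assisted = applicants * ai_assisted_share

false_accusations = human_written * false_positive_rate  # innocent students flagged
correct_flags = ai_assisted * true_positive_rate

print(f"Applicants wrongly flagged: {false_accusations:,.0f}")
print(f"Share of all flags that are wrong: "
      f"{false_accusations / (false_accusations + correct_flags):.1%}")
```

Under these assumptions, 700 applicants would be falsely accused each cycle, even though the detector is right 98% of the time on human-written essays.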

While schools are rightfully concerned about ethical standards and academic integrity, a zero-tolerance approach to AI is becoming increasingly difficult to enforce in a fair, consistent manner.

Moreover, these rigid admissions policies are out of step with the ways students are allowed, or even encouraged, to use AI tools in their academic and professional careers. A candidate who relies on ChatGPT to write an entire essay, start to finish, is very different from one who uses it to help with research or to organize their thoughts. It makes little sense to reject an otherwise qualified student for using AI exactly as they would in a class or a future job.

When used responsibly, AI can actually level the playing field for applicants from historically marginalized backgrounds or under-resourced communities, who may lack access to application help or be unfamiliar with admissions expectations. First-generation college candidates, for example, could use ChatGPT to review their essays and other written materials before submission.

AI isn't going anywhere — and the ways applicants leverage these tools will only continue to expand. As the landscape evolves, institutions need a strategy that provides transparent, practical, and consistent guidelines in the admissions process and beyond.

Revamping Admissions Policies for the AI Era

We are only at the beginning of understanding and governing AI tools. You may not yet have all the answers to important questions and concerns, but working through these challenges is an opportunity for learning.

Admissions leaders can rethink and revamp policies to better reflect AI technology's challenges and capabilities. While each institution needs to consider its own strategy, the following considerations can help keep admissions processes in line with today's digital landscape.

1) Set clear, consistent expectations from admissions to graduation. Admissions is the first touchpoint students have with your institution. By providing clear, transparent expectations and guidelines regarding AI use from the start, you can set students up for success, ensuring applicants understand what is acceptable and what constitutes a violation of academic integrity.

Policies should also remain generally consistent across the entire institution. Admissions leaders can learn from faculty, department heads, deans, and other academic leaders who have already implemented rules and policies regarding AI use in classrooms, syllabuses, and elsewhere.

At many institutions, there may be inconsistencies among policies that define what constitutes plagiarism, cheating, and misuse of AI tools in essays and written materials. But institutions are also seeing growing consensus and best practices develop around academic honesty and AI — and consistent expectations will avoid confusion and misunderstandings for students, faculty, and administrators alike.

2) Develop expertise among your own admissions teams. By gaining firsthand experience with these tools, admissions teams can better govern and manage how applicants use AI (both ethically and not). With a better understanding of generative AI capabilities and limitations, they can make more informed decisions about how applicants should take advantage of these tools.

A better understanding of AI also helps increase efficiency and expand the capacity of overstretched admissions teams. With AI, teams can quickly sort through large volumes of applications, categorizing them based on predefined criteria such as GPA, test scores, and extracurricular involvement. They might also leverage predictive analytics to help gauge which admitted students are most likely to accept an offer of admission to improve enrollment management and planning.
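As a concrete illustration, the minimal sketch below shows what rules-based triage might look like. The record fields, thresholds, and queue names are all hypothetical; a real system would be tuned to an institution's own criteria, with humans making final decisions.

```python
# Minimal sketch of rules-based application triage. The fields, thresholds,
# and queue names are hypothetical and purely illustrative.
from dataclasses import dataclass

@dataclass
class Application:
    name: str
    gpa: float        # on a 4.0 scale
    test_score: int   # e.g., SAT composite
    activities: int   # count of sustained extracurricular commitments

def triage(app: Application) -> str:
    """Route an application to a review queue using predefined criteria."""
    if app.gpa >= 3.7 and app.test_score >= 1400:
        return "priority_review"
    if app.gpa >= 3.0 or app.activities >= 3:
        return "standard_review"
    return "committee_review"  # human reviewers still make the final call

applications = [
    Application("A. Rivera", 3.9, 1480, 4),
    Application("B. Chen", 3.2, 1180, 5),
    Application("C. Okafor", 2.8, 1050, 1),
]
for app in applications:
    print(f"{app.name}: {triage(app)}")
```

The same structure extends naturally to yield prediction: a model trained on past cycles could attach an estimated probability of enrollment to each admitted student, informing enrollment planning without replacing reviewer judgment.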

Most institutions are already starting this process. Half of admissions offices in a 2023 survey reported using AI to review applications. In 2024, 80% said they would integrate AI into review processes.

3) Integrate more holistic admissions practices. Traditional application metrics are in desperate need of an update. Personal statements, essays, and recommendation letters were unreliable signals long before AI: students who could rely on family or friends to help draft and refine their materials already held a significant advantage before ChatGPT, Bard, or other AI tools entered the mix. Other traditional inputs, like standardized testing, also introduce substantial bias and disadvantage many students.

The challenges and considerations of AI further underscore the need for holistic admissions practices that assess the full range of an applicant's life experiences, capabilities, and potential. Rather than relying solely on test scores and essays, admissions leaders can also take into account soft skills like communication, teamwork, and creative thinking to make better decisions about students who bring diverse skills, experiences, and talents to campus.
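One simple way to operationalize this is a weighted rubric that combines reviewer ratings across several dimensions. The sketch below is hypothetical; the dimensions and weights are illustrative assumptions, not a recommended admissions formula.

```python
# Illustrative weighted rubric for holistic review. Dimensions and weights
# are hypothetical assumptions, not a recommended admissions formula.
RUBRIC_WEIGHTS = {
    "academics": 0.35,
    "essays": 0.15,
    "communication": 0.15,
    "teamwork": 0.15,
    "creative_thinking": 0.20,
}
assert abs(sum(RUBRIC_WEIGHTS.values()) - 1.0) < 1e-9  # weights must sum to 1

def holistic_score(ratings: dict[str, float]) -> float:
    """Combine 1-5 reviewer ratings across all dimensions into one score."""
    return sum(RUBRIC_WEIGHTS[dim] * ratings[dim] for dim in RUBRIC_WEIGHTS)

print(holistic_score({
    "academics": 4.0,
    "essays": 3.5,
    "communication": 5.0,
    "teamwork": 4.5,
    "creative_thinking": 4.0,
}))  # -> 4.15
```

Making the weights explicit also makes the review process auditable: an institution can see, and debate, exactly how much any single input such as an essay actually drives a decision.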

Admissions Serves as an AI Learning Moment

Admissions is a critical starting point in a student's academic journey. Guiding and supporting the responsible and ethical use of AI tools helps prepare students for the rest of their academic and professional careers — it's among the first of many educational moments they will experience at your institution.

Clear, consistent admissions policies take into account the nuances and complexities of AI and the application process. Updating admissions policies empowers you to uphold academic integrity while setting your students — and your institution — up for success in the era of AI.
