Turnitin: More than Half of Students Continue to Use AI to Write Papers

Since its launch in April 2023, Turnitin's AI writing detection tool has reviewed over 200 million papers, with data showing that more than half of students continue to use AI to write their papers.

As of late March 2024, the company said, out of the over 200 million papers reviewed, over 22 million were at least 20% AI-written, and over 6 million were at least 80% AI-written.

The company said this indicates that "educators and institutions should look at a variety of factors — or puzzle pieces — beyond detection." It suggests that teachers and institutions should have open discussions with students about what is acceptable use of AI in the classroom, as well as review academic policies and revise essay prompts.

Turnitin referenced a Spring 2023 study it conducted with Tyton Partners, which found academic cheating to be the number one concern of educators, with high numbers of students saying they were "likely or extremely likely to use generative AI writing tools, even if they were prohibited."

The study also showed that 97% of academic institutions were unprepared to deal with the issue: only 3% had developed a policy on AI use.

Turnitin said it had been developing its AI detection tool for more than two years before the launch of ChatGPT and released it within months of OpenAI's generative AI application's debut. The company said its tool "integrates the AI writing report within the existing Turnitin workflow, providing educators with an overall percentage of the document that AI writing tools, like ChatGPT, may have generated."

The tool is available within Turnitin Originality, Turnitin Feedback Studio with Originality, and iThenticate 2.0, the company said.

"We're at an important juncture in education where technologies are transforming learning, and the need for academic integrity is more critical than ever," said Annie Chechitelli, chief product officer. "Everyone in education is looking for resources to enable them to perform at their best, and technologies, including our AI writing detection feature, help advance learning without sacrificing academic integrity."

Turnitin has created an interactive graphic with links to articles on AI use in education. The company also explains what an AI detection "false positive" is and says its false positive rate is only 1%, emphasizing that Turnitin "does not make a determination of misconduct" but provides data for educators to use their professional judgment in determining whether academic integrity has been breached.

To read more about Turnitin, visit the company's Why Turnitin? page.

About the Author

Kate Lucariello is a former newspaper editor, EAST Lab high school teacher and college English teacher.
