Turnitin AI Detection Rates: 3.3% of 65M Papers Reviewed Were Flagged as Majority AI Writing

Of the more than 65 million student papers reviewed for AI writing by Turnitin's detection tool since April, more than 2 million — or 3.3% of all papers reviewed — have been flagged as containing 80% or more AI-written text, according to a news release. 

More than 6 million papers, or 10.3% of all those reviewed, were flagged as containing at least 20% AI-written text, Turnitin said. The company noted that tracking the detection rate shows how widely students are using generative AI, but “whether this is acceptable or not is determined by educators themselves.”
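As a rough illustration of how the reported percentages map back to paper counts, the sketch below recomputes the flagged-paper figures from the totals cited above. The release gives only rounded numbers (“more than 65 million,” “more than 2 million”), so the counts produced here are approximate, not Turnitin's exact tallies.

```python
# Approximate reconstruction of the counts implied by Turnitin's reported rates.
# All inputs are the rounded figures from the news release, so results are illustrative.

TOTAL_PAPERS = 65_000_000  # student papers reviewed for AI writing since April

flag_rates = {
    "80% or more AI-written": 0.033,   # 3.3% of all papers reviewed
    "at least 20% AI-written": 0.103,  # 10.3% of all papers reviewed
}

for label, rate in flag_rates.items():
    implied_count = TOTAL_PAPERS * rate
    print(f"{label}: ~{implied_count / 1_000_000:.1f} million papers ({rate:.1%})")
```

Run as written, this prints roughly 2.1 million papers at the 80% threshold and 6.7 million at the 20% threshold, consistent with the “more than 2 million” and “more than 6 million” figures in the release.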

Almost 98% of education institutions using Turnitin have enabled the AI writing detection feature within their workflows, said Annie Chechitelli, chief product officer at Turnitin.

“Sharing usage and indication rates is one way that we can help improve understanding of the presence and use of generative AI in their teaching and learning practices,” Chechitelli said. “Given the urgency expressed by educators about these challenges and the public’s interest in AI text creation and AI text detection, we are committed to sharing these insights so that we can all begin to understand the trends that are currently shaping education.”

Turnitin has published guides and resources for educators concerned about how students are using ChatGPT and similar generative AI tools in class assignments. The company has also repeatedly urged educators to incorporate AI tools into the classroom to better prepare students for the future of work.

“We want teachers and students to talk about appropriate use of writing tools, proper citation and original thinking. Our role is to provide them with a tool to start those meaningful conversations,” said Patti West-Smith, Turnitin senior director of customer engagement and a long-time K–12 teacher and administrator.

“Conversations are critical because even a very high proportion of the statistical signatures of AI in a document does not necessarily indicate misconduct,” Turnitin said. “Some educators are specifically asking students to use AI tools in their work, so detecting its presence may not be as concerning for them. And yet, other educators might tell their students that generative AI is not allowed. In these cases, detection may help them address the issue earlier in the draft process.” 

Learn more at Turnitin.com/solutions/ai-writing.

About the Author

Kristal Kuykendall is an editor with the 1105 Media Education Group. She can be reached at [email protected].

