Turnitin AI Detection Rates: 3.3% of 65M Papers Reviewed Were Flagged as Majority AI Writing

Of the more than 65 million student papers reviewed for AI writing by Turnitin's detection tool since April, more than 2 million — or 3.3% of all papers reviewed — have been flagged as containing 80% or more AI-written text, according to a news release. 

More than 6 million papers, or 10.3% of all those reviewed, were flagged as containing at least 20% AI-written text, Turnitin said. The company noted that tracking the detection rate shows how widely students are using generative AI, but that “whether this is acceptable or not is determined by educators themselves.”

Almost 98% of education institutions using Turnitin have enabled the AI writing detection feature within their workflows, said Annie Chechitelli, chief product officer at Turnitin.

“Sharing usage and indication rates is one way that we can help improve understanding of the presence and use of generative AI in their teaching and learning practices,” Chechitelli said. “Given the urgency expressed by educators about these challenges and the public’s interest in AI text creation and AI text detection, we are committed to sharing these insights so that we can all begin to understand the trends that are currently shaping education.”

Turnitin has published guides and resources for educators concerned about how students are using ChatGPT and similar generative AI tools in their writing for class assignments, and the company has repeatedly urged educators to incorporate AI tools into the classroom to better prepare students for the future of work. 

“We want teachers and students to talk about appropriate use of writing tools, proper citation and original thinking. Our role is to provide them with a tool to start those meaningful conversations,” said Patti West-Smith, Turnitin senior director of customer engagement and a long-time K–12 teacher and administrator.

“Conversations are critical because even a very high proportion of the statistical signatures of AI in a document does not necessarily indicate misconduct,” Turnitin said. “Some educators are specifically asking students to use AI tools in their work, so detecting its presence may not be as concerning for them. And yet, other educators might tell their students that generative AI is not allowed. In these cases, detection may help them address the issue earlier in the draft process.” 

Learn more at Turnitin.com/solutions/ai-writing.

About the Author

Kristal Kuykendall is editor, 1105 Media Education Group.

