Turnitin: Of 38M Submissions Since April 4, 3.5% Had At Least 80% AI-Written Text

Analysis of 6 Weeks' Worth of Educator Submissions Leads to Updates for Turnitin's AI Detection Platform

In the first six weeks of educators using Turnitin’s new AI writing detection feature, the platform processed 38.5 million submissions, and the results — as well as plenty of feedback from educators and administrators — led Turnitin to tweak the detector and to further explain the meaning and accuracy rates of the detection scores. 

Of the submissions run through the AI detector, Turnitin said 3.5% contained more than 80% AI-written text, and just under one-tenth of submissions contained at least 20% AI-written text.

In a new blog post, Turnitin Chief Product Officer Annie Chechitelli explains the findings and details a few tweaks to the platform’s AI detection feature, in response to feedback from educators using it since its launch in early April.

Updates to the AI detection feature include:

  • Asterisk Added to Scores Under 20%: An asterisk will now appear next to the indicator “score” — or the percentage of a submission considered to be AI-written text — when the score is less than 20%, since the analysis of submissions thus far shows that false positives are higher when the detector finds less than 20% of a document is AI-written. The asterisk indicates that the score is less reliable, according to the blog post. 

  • Minimum Word Count Raised: The minimum number of words required for the AI detector to work has been raised from 150 to 300, because the detector is more accurate the longer a submission is, Chechitelli said. “Results show that our accuracy increases with a little more text, and our goal is to focus on long-form writing. We may adjust this minimum word requirement over time based on the continuous evaluation of our model.”

  • Changes to Detector Analysis of Opening and Closing Sentences: “We also observed a higher incidence of false positives in the first few or last few sentences of a document,” Chechitelli said. “Many times, these sentences are the introduction or conclusion in a document. As a result, we have changed how we aggregate these specific sentences for detection to reduce false positives.”

In their feedback, instructors and administrators cited false positives as their main concern, both for "AI writing detection in general and in specific cases within our writing detection," according to the blog post. Since the release of the detection feature, Turnitin has seen that "real-world use is yielding different results" from lab tests performed during development, Chechitelli said.

The findings follow Turnitin's investigation of cases where educators flagged a submission for additional scrutiny due to questionable detection results, as well as an additional study of 800,000 academic writing samples, all written before the release of ChatGPT, run through Turnitin's AI detector.

Other findings from the detector’s first six weeks in use by educators include confusion about how to interpret Turnitin’s scores or AI writing metrics, Chechitelli said. 

She explained that the detector calculates two different statistics: an AI writing metric at the document level and another at the sentence level.

As a result of educator feedback, “we’ve updated how we discuss false positive rates for documents and false positive rates for sentences,” she said. 

For documents with over 20% AI writing, Turnitin's document-level false positive rate is less than 1%, which was again validated by the new analysis of 800,000 pre-GPT writing samples. This translates into fewer than one human-written document out of 100 being incorrectly flagged as AI-written, Chechitelli said.

“While 1% is small, behind each false positive instance is a real student who may have put real effort into their original work,” she said. “We cannot mitigate the risk of false positives completely given the nature of AI writing and analysis, so, it is important that educators use the AI score to start a meaningful and impactful dialogue with their students in such instances.”

Turnitin has published a guide for educators on how to handle false positives on its website. 

The sentence-level false positive rate is slightly higher at around 4%, according to the blog post; the company’s analysis of results since the detector’s launch found that the false positive incidence is more common in documents with a mix of human- and AI-written text, “particularly in the transitions between human- and AI-written content,” Chechitelli said. 

Findings on false positives at the sentence level:

  • 54% of false positive sentences are located right next to actual AI writing

  • 26% of false positive sentences are located two sentences away from actual AI writing

  • 10% of false positive sentences are located three sentences away from actual AI writing

  • The remaining 10% are not near any actual AI writing

The correlation between these false positive sentences and their proximity to actual AI writing warrants further research, she added, and that research is already underway.

Another key finding from educators’ feedback while using the detector is that “teachers feel uncertain about the actions they can take upon discovering AI-generated writing,” Chechitelli said. “We understand that as an education community, we are in uncharted territory.”

Turnitin has published a number of free resources for educators on dealing with AI misuse and how to address it with students.

Read the full blog post and learn more at Turnitin.com.
