Turnitin Integrating AI Writing Detector into Its Products

Plagiarism detection company Turnitin announced that the AI writing detection tool it teased in January will be available as a feature of its existing products as soon as April.

According to the company, the tool detects ChatGPT- and GPT-3-generated content 97% of the time, with a false positive rate of less than 1%. According to Turnitin: "The new functionality will operate within the existing Turnitin workflow so that educators will be able to analyze content and use feedback tools in the same user experience they have today."

"Based on how our detection technology is performing in our lab and with a significant number of test samples, we are confident that Turnitin's AI writing detection capabilities will give educators information to help them decide how to best handle work that may have been influenced by AI writing tools," said Annie Chechitelli, chief product officer of Turnitin, in a prepared statement. "Equally important to our confidence in the technology is making the information usable and helpful and in a format that educators can use. We are being very deliberate in releasing a detector that is highly accurate and trained on the largest dataset of academic writing. It is essential that our detector and any others limit false positives that may impact student engagement or motivation."

Turnitin also launched an AI writing resource page for educators, intended, in the company's words, "to support educators with teaching resources and to report its progress in developing AI writing detection features." According to Turnitin: "The newly launched AI writing resource page is publicly available and will be updated regularly with information about Turnitin's progress in bringing detection features to market including how they are performing in its research and development lab. Turnitin experts in pedagogy and instruction will also contribute to an expanded library of resources to help guide K–12 teachers and higher education faculty on how to adjust to an academic environment where AI writing is used. Additionally, demo and preview videos will be regularly posted."

About the Author

David Nagel is the former editorial director of 1105 Media's Education Group and editor-in-chief of THE Journal, STEAM Universe, and Spaces4Learning. A 30-year publishing veteran, Nagel has led or contributed to dozens of technology, art, marketing, media, and business publications.

He can be reached at [email protected]. You can also connect with him on LinkedIn at https://www.linkedin.com/in/davidrnagel/.
