Turnitin: More than Half of Students Continue to Use AI to Write Papers

Since its launch in April 2023, Turnitin's AI writing detection tool has reviewed over 200 million papers, with data showing that more than half of students continue to use AI to write their papers.

As of late March 2024, the company said, out of the over 200 million papers reviewed, over 22 million were at least 20% AI-written, and over 6 million were at least 80% AI-written.
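As a rough illustration, those counts can be turned into shares of all papers reviewed. This is a minimal sketch using the approximate figures above; since the article says "over," the counts are lower bounds:

```python
# Illustrative arithmetic from Turnitin's reported figures
# (approximate lower-bound counts, as the article says "over").
papers_reviewed = 200_000_000
at_least_20_pct_ai = 22_000_000
at_least_80_pct_ai = 6_000_000

share_20 = at_least_20_pct_ai / papers_reviewed
share_80 = at_least_80_pct_ai / papers_reviewed

print(f"At least 20% AI-written: {share_20:.0%} of papers reviewed")  # 11%
print(f"At least 80% AI-written: {share_80:.0%} of papers reviewed")  # 3%
```

In other words, roughly one in nine reviewed papers showed at least some AI writing by Turnitin's measure, and about 3% were predominantly AI-written.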

The company said this indicates that "educators and institutions should look at a variety of factors — or puzzle pieces — beyond detection." It suggests that teachers and institutions should have open discussions with students about what is acceptable use of AI in the classroom, as well as review academic policies and revise essay prompts.

Turnitin referenced a Spring 2023 study by Tyton Partners, to which it contributed, that found academic cheating to be educators' number-one concern, with high numbers of students saying they were "likely or extremely likely to use generative AI writing tools, even if they were prohibited."

The study also showed that academic institutions were overwhelmingly unprepared to deal with the issue: only 3% had developed a policy on AI use, leaving the other 97% without one.

Turnitin said it had been developing its AI detection tool for more than two years before the launch of ChatGPT, and released it within months of OpenAI's generative AI application debuting. The company said its tool "integrates the AI writing report within the existing Turnitin workflow, providing educators with an overall percentage of the document that AI writing tools, like ChatGPT, may have generated."

The tool is available within Turnitin Originality, Turnitin Feedback Studio with Originality, and iThenticate 2.0, the company said.

"We're at an important juncture in education where technologies are transforming learning, and the need for academic integrity is more critical than ever," said Annie Chechitelli, chief product officer. "Everyone in education is looking for resources to enable them to perform at their best, and technologies, including our AI writing detection feature, help advance learning without sacrificing academic integrity."

Turnitin has created an interactive graphic with links to articles on AI use in education. The company also explains what an AI detection "false positive" is and puts its false positive rate at only 1%, emphasizing that Turnitin "does not make a determination of misconduct" but instead provides data for educators, who use their professional judgment to determine whether academic integrity has been breached.
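To see why even a small false positive rate still calls for educator judgment, consider a hypothetical base-rate sketch. The class size and paper counts below are assumptions for illustration only, not figures from Turnitin; the 1% rate is the one the company cites:

```python
# Hypothetical base-rate illustration (assumed counts, not Turnitin data):
# even a 1% false positive rate produces some incorrect flags at scale,
# which is why a detection score is one signal, not a verdict.
false_positive_rate = 0.01       # rate Turnitin cites for its detector
human_written_papers = 1_000     # assumed: papers with no AI involvement

expected_false_flags = false_positive_rate * human_written_papers
print(f"Expected papers wrongly flagged: {expected_false_flags:.0f}")  # 10
```

On these assumed numbers, about 10 of every 1,000 entirely human-written papers would still be flagged, which is the "puzzle piece" framing in practice: detection output informs a conversation rather than settling one.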

To read more about Turnitin, visit the company's Why Turnitin? page.

About the Author

Kate Lucariello is a former newspaper editor, EAST Lab high school teacher and college English teacher.
