Turnitin: More than Half of Students Continue to Use AI to Write Papers

Since its launch in April 2023, Turnitin's AI writing detection tool has reviewed over 200 million papers, and survey data cited by the company indicates that more than half of students continue to use AI to write their papers.

As of late March 2024, the company said, out of the over 200 million papers reviewed, over 22 million were at least 20% AI-written, and over 6 million were at least 80% AI-written.

The company said this indicates that "educators and institutions should look at a variety of factors — or puzzle pieces — beyond detection." It suggests that teachers and institutions should have open discussions with students about what is acceptable use of AI in the classroom, as well as review academic policies and revise essay prompts.

Turnitin referenced a Spring 2023 study it contributed to with Tyton Partners, which found academic cheating to be the number one concern of educators, with high numbers of students revealing they were "likely or extremely likely to use generative AI writing tools, even if they were prohibited."

The study also showed that 97% of academic institutions were unprepared to deal with the issue — only 3% had developed a policy addressing it.

Turnitin said it had been developing its AI detection tool for more than two years before the launch of ChatGPT and released it within months of the debut of OpenAI's generative AI application. The company said its tool "integrates the AI writing report within the existing Turnitin workflow, providing educators with an overall percentage of the document that AI writing tools, like ChatGPT, may have generated."

The tool is available within Turnitin Originality, Turnitin Feedback Studio with Originality, and iThenticate 2.0, the company said.

"We're at an important juncture in education where technologies are transforming learning, and the need for academic integrity is more critical than ever," said Annie Chechitelli, chief product officer. "Everyone in education is looking for resources to enable them to perform at their best, and technologies, including our AI writing detection feature, help advance learning without sacrificing academic integrity."

Turnitin has created an interactive graphic with links to articles on AI use in education. The company also explains what an AI detection "false positive" is and says its false positive rate is only 1%, emphasizing that Turnitin "does not make a determination of misconduct" but provides data for educators to use their professional judgment in determining whether academic integrity has been breached.

To read more about Turnitin, visit the company's Why Turnitin? page.

About the Author

Kate Lucariello is a former newspaper editor, EAST Lab high school teacher and college English teacher.
