Moodle Partners with Copyleaks to Detect AI Content, Interspersed Human/AI Content, and Plagiarism

Open source learning management system Moodle has formed a partnership with AI content and plagiarism detector Copyleaks. With thousands of educational institutions using Moodle's LMS, custom development, and learning design services, and with the explosive growth of AI-generated content, Moodle said in a release that client feedback cemented its decision to incorporate an AI detection tool.

Copyleaks uses a multifaceted approach to content authentication. It identifies whether content was written by a human or a chatbot, either fully or partially, and whether — and to what extent — content has been plagiarized or paraphrased.

Copyleaks can detect content written with GPT-4 "with 99% accuracy across a dozen languages, including English, French, Portuguese, and Spanish," according to the release. It is also able to detect human content interspersed with AI content at the sentence level — an industry first, the release said.

A Copyleaks analysis of the prevalence of AI-generated content, conducted in March 2023 using data from tens of thousands of high school and college students worldwide collected in January and February, found that 11.21% of all papers and assignments contained such content. Interestingly, after students were told their papers would be checked for AI content, use among high school students still increased by 237.15%, while use among college students decreased by 38.9%.

"Many of our clients have expressed concerns about the use of AI for content generation, particularly in the higher education space," said Jonathan Moore, head of Moodle US.

Copyleaks CEO Alon Yamin acknowledged that concern, noting, "We are all still learning the rules together." The analysis discussion asks: "Is this the end of the written assignment? And what can educators do to leverage AI-generated content as a potential learning tool, much like the calculator?" Yamin agreed the AI detection tool will help Moodle users make "informed decisions about what the rules are for them surrounding AI-generated content."

Copyleaks' AI and plagiarism detection tools integrate easily into Moodle's existing LMS. In less than a minute, they detect not only AI content and multiple forms of plagiarism, including plagiarism in source code, but also any attempt to deceive the software.

To learn more, visit the Moodle Plagiarism and AI Content page and view a sample interactive report.

Moodle is a customizable e-learning toolbox used by thousands of educational institutions worldwide. Its experts work with clients to identify their unique needs, whether in hosting, course design, custom development, or more. Visit the Moodle About page to learn more about its history and goals.

Copyleaks is an AI-based text analysis company whose software identifies potential plagiarism and paraphrasing across nearly every language, detects AI-generated content, verifies authenticity and ownership, and empowers error-free writing. Visit the Copyleaks About Us page for more information on its services.

About the Author

Kate Lucariello is a former newspaper editor, EAST Lab high school teacher, and college English teacher.
