New Turnitin Product Brings AI-Powered Tools to Students with Instructor Guardrails

Academic integrity solution provider Turnitin has introduced Turnitin Clarity, a paid add-on for Turnitin Feedback Studio that provides a composition workspace for students with educator-guided AI assistance, AI-generated writing feedback, visibility into integrity insights, and more.

Working within an institution's existing Turnitin workflow, students can access writing assignments in Turnitin Clarity, review the instructions, grading rubric, and expectations around the use of generative AI, and write and edit their submissions over multiple sessions, the company explained in a news announcement. Instructors can enable the tool's optional AI writing assistant feature to allow students to use AI according to course policies.

In turn, instructors can view a student's entire writing process, such as pasted text, typing patterns, construction time, and draft history, including any potential use of AI. "When enabled, educators can … see where and how students may have used AI tools, and provide guidance based on their usage," the company said. "This will help provide information to determine whether the students’ work meets the institution and assignment’s integrity standards."

"Turnitin Clarity serves as a bridge between students and educators," said Chief Product Officer Annie Chechitelli, in a statement. "Students will need to use AI in their future careers. With Turnitin Clarity, educators can begin to understand how students use it and identify ways to incorporate it into their writing, without hindering their academic progress."

For more information, visit the Turnitin site.

About the Author

Rhea Kelly is editor in chief for Campus Technology, THE Journal, and Spaces4Learning. She can be reached at [email protected].
