AI Writing Detection Tool Analyzes Linguistic Fingerprint to Check Authorship

FLINT Systems has introduced a new linguistic tool designed to detect whether a document was written by its attributed author. Rather than simply flagging whether a piece of writing was generated by AI, the system aims to determine whether it was written by the person claiming authorship at all.

To do this, according to the company, the system "applies forensic linguistic methodologies to create a digital linguistic fingerprint of an individual's writing style. It then creates a linguistic fingerprint of the document [in] question and compares the two. Testing results showed that when documents were created by anyone other than the individual who submitted the document, FLINT Systems correctly identified [them] in over 80% of the cases."

This "fingerprinting" approach distinguishes the system from other AI writing detection tools, such as GPTZero, because it eliminates the detection errors that occur when AI-written content is edited by a human, the company said. "By applying linguistic fingerprinting technology, the FLINT System can correctly identify when an individual did not author the document, regardless of whether or not there are elements of humanly developed texts interwoven into the AI document."
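FLINT has not published its methodology, but classical stylometry offers a rough intuition for how a linguistic fingerprint comparison might work. The sketch below is illustrative only — the function-word list, profile construction, and cosine-similarity measure are my assumptions, not FLINT's actual method. It compares two texts by the relative frequencies of common function words, which authors tend to use in stable, hard-to-disguise proportions:

```python
from collections import Counter
import math
import re

# Common English function words; stylometric analysis often relies on
# their relative frequencies, which are difficult for a writer (or an
# editor of AI output) to disguise. This short list is illustrative.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is",
                  "was", "for", "it", "with", "as", "but", "on", "not"]

def profile(text):
    """Build a normalized frequency vector over the function words."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def similarity(profile_a, profile_b):
    """Cosine similarity between two frequency profiles, in [0, 1]."""
    dot = sum(a * b for a, b in zip(profile_a, profile_b))
    norm_a = math.sqrt(sum(a * a for a in profile_a))
    norm_b = math.sqrt(sum(b * b for b in profile_b))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# "known" stands in for the claimed author's prior writing samples;
# "questioned" stands in for the document being checked.
known = "The report was sent to the board, and it was discussed in detail."
questioned = "It was noted that the draft is not ready for the board."
score = similarity(profile(known), profile(questioned))
```

A production system would use far more features (sentence length, punctuation habits, vocabulary richness) and many writing samples per author, but the core idea — reduce each text to a numeric style profile and measure the distance between profiles — is the same.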

A free trial of the system is available. In a test case, I compared one of my articles with three other articles I'd written, and it determined that the article in question was 50% to 55% likely to have been written by me. (It was written by me — although, as in the case of this article, it did contain quotes from other people.)

The free trial, which requires registration, is available at free.flintai.com/home. To use it, upload several documents written by a single author. Then click the "Compare and Analyze" button and upload the document you want to compare against them.

For more information, visit flintai.com.

About the Author

David Nagel is the former editorial director of 1105 Media's Education Group and editor-in-chief of THE Journal, STEAM Universe, and Spaces4Learning. A 30-year publishing veteran, Nagel has led or contributed to dozens of technology, art, marketing, media, and business publications.

He can be reached at [email protected]. You can also connect with him on LinkedIn at https://www.linkedin.com/in/davidrnagel/.
