ProctorU Gets Rid of AI-Only Proctoring

ProctorU, the academic division of Meazure Learning, has announced it is discontinuing services that rely solely on artificial intelligence for exam proctoring. Instead, it will use human proctors for every test session.   

"We believe that only a human can best determine whether test-taker behavior is suspicious or violates test rules," explained Scott McFarland, CEO of ProctorU, in a statement. "Depending exclusively on AI and outside review can lead to mistakes or incorrect conclusions as well as create other problems."

Previously, ProctorU's AI-based services would record each test session, use AI or similar analytics tools to automatically identify potential misconduct, and send that information to the school or test provider for review. After reviewing its own data and consulting with its customers, the company said it determined that "using technology alone, without trained human proctors, has three main deficiencies and side effects that significantly undercut its effectiveness: a failure to consistently review test sessions, increased opportunity to unfairly implicate test-takers in misconduct, and increased workload for instructors."

AI can detect "anomalies" that turn out to be meaningless, the company pointed out. A dog barking could be deemed an "unusual background noise," causing a test session to be flagged for review. "While this should not result in a finding of misconduct," the company added, "human proctors are trained to discern and dismiss innocuous actions or sounds."

"The critical point here is that people can tell when someone is trying to be dishonest, but computers aren't so good at that," said McFarland.

ProctorU plans to migrate its education and test partners to human proctoring by the 2021-2022 academic year.  

About the Author

Rhea Kelly is editor in chief for Campus Technology, THE Journal, and Spaces4Learning. She can be reached at [email protected].
