ProctorU Proposes Student Bill of Rights for Remote Learning

Doing school work remotely presents some unique challenges. For one, teachers can't necessarily see how a student is accomplishing class work and may therefore make faulty assumptions about how it was done. For another, the education technology that facilitates online learning collects data on students and their interactions, frequently without the student even knowing, let alone opting in.

Now, ProctorU, an education technology company that specializes in online proctoring, has proposed a "student bill of rights." The goal is to develop a level playing field for students as schools continue delivering classes online.

The document covers seven expectations students should be able to count on from their academic institutions:

  • The right to have questions about digital or remote academic work "answered clearly and promptly";

  • The expectation that a student's work is presumed to be done with "honesty and integrity";

  • The presumption that anybody involved in the remote work is complying with privacy laws and policies related to student privacy and student data;

  • The right to expect that there are policies and procedures in place for ensuring and maintaining the integrity of student work;

  • The right to review policies that might place students at an unfair disadvantage compared to others who might choose to use "inappropriate or unauthorized tools, tactics or assistance";

  • The right to understand what data is collected and stored, why, and how it is being used; and

  • The expectation that data collection is specific and limited.

"Taking a test or doing work online should be no different than doing that same work in person, in a classroom," said Scott McFarland, CEO of ProctorU, in a statement. "There's no reason students should feel their work is more at risk, that the integrity standards are any different or that they have to surrender any more privacy to be online. Students should be protected in all of those areas."

More importantly, he added, the policies and procedures related to integrity and privacy should be understandable and allow the student to "make good decisions."

The company is hoping to spark discussion about the bill of rights on its dedicated website.

About the Author

Dian Schaffhauser is a former senior contributing editor for 1105 Media's education publications THE Journal, Campus Technology and Spaces4Learning.
