Instructors Believe Students More Likely to Cheat When Class Is Online

One outcome of the shift to online classes, according to the college and university instructors now teaching them, is that students will be more likely to cheat. In a recent survey, 93 percent of educators said they expected online learning to be more conducive to academic dishonesty.

The survey was conducted by education publisher Wiley, in May 2020, among 789 instructors in higher education. More than half — 54 percent — had never taught online prior to the emergency move to remote education.

A report on the results, "Academic Integrity in the Age of Online Learning," stated that while 62 percent of faculty agreed that students were more likely to cheat in an online class than an in-person class, most students (95 percent) said cheating happened in both environments equally. (The student response was pulled from a 2013 paper exploring student perceptions of cheating.)

Instructors cited a number of techniques for preventing cheating. The top one, referenced by 34 percent of faculty, was the use of proctored or monitored testing, done in person or via webcam. The use of webcams and videos was mentioned by 16 percent of respondents, and the use of a locked-down browser by 15 percent. Far smaller numbers used Zoom during testing (6 percent) or plagiarism-detection software (4 percent).

The report offered guidance for instructors on discouraging academic misconduct, including these suggestions:

  • Clarify the purpose of the assignment so that students understand why they need to "put in the work";
  • Increase teacher presence in the course and encourage teacher and student interactions to help students feel connected;
  • Impose time limits and have students take the test all at the same time;
  • Create your own test question pool and randomize it as much as possible so each student gets a unique version;
  • Use "shorter, more frequent practice tasks," so that students can refine their approaches "in a more iterative fashion"; and
  • Use open book exams, which test higher levels of learning.
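The question-pool suggestion above is straightforward to automate. As a minimal sketch (the pool contents, function name, and student IDs here are hypothetical, not from the report), seeding a random generator with each student's ID draws a unique but reproducible subset of questions per student:

```python
import random

# Hypothetical instructor-made pool of 20 questions.
QUESTION_POOL = [f"Question {i}" for i in range(1, 21)]

def make_exam(student_id: str, num_questions: int = 10) -> list[str]:
    # Seed the generator with the student ID so the same student
    # always gets the same version, while versions differ across students.
    rng = random.Random(student_id)
    return rng.sample(QUESTION_POOL, num_questions)

exam_a = make_exam("student-001")
exam_b = make_exam("student-002")
```

Seeding per student keeps versions stable for regrading and dispute resolution, while sampling from a larger pool makes answer-sharing between students less useful.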

"In today's online environment, there are powerful digital tools and platforms that support students' academic dishonesty and others that help faculty discourage it," said David Rettinger, president emeritus of the International Center for Academic Integrity, who presented research on the psychology of cheating at a recent Wiley event. "However, entering a technological 'arms race' to maintain academic integrity is a losing proposition. Instructors must break this cycle and develop pedagogical approaches that create a culture of integrity and personal responsibility."

The full report is openly available on the Wiley website.

A recording of Rettinger's session on cheating, along with other sessions presented during the company's online teaching "summer camp," is available with registration.

About the Author

Dian Schaffhauser is a former senior contributing editor for 1105 Media's education publications THE Journal, Campus Technology and Spaces4Learning.
