Time for Class 2023 Report Shows Number One Faculty Concern: Preventing Student Cheating Via AI

Tyton Partners' 2023 annual report, "Time for Class: Bridging Student and Faculty Perspectives on Digital Learning," finds that the top concern of higher education faculty is preventing student cheating, driven mainly by the use of generative AI. This is up from their number 10 concern in 2022.

The report combines three surveys conducted in spring 2023, encompassing 2,048 students, 1,748 instructors, and 306 higher education administrators. The overall conclusion is that faculty and students are at odds with each other on how best to teach courses to facilitate learning, the role of digital tools, and the challenges of AI.

The report noted that 46% of faculty prefer digital courseware, compared with 75% of students, while 55% of faculty prefer face-to-face teaching, compared with only 31% of students.

What students feel they need and want includes:

  • Reliable access to technology;
  • Hybrid and digital courses and materials options;
  • More free or affordable course materials; and
  • Course help communities that feature study aids and collaboration.

For course help communities, most students turn first to peers (an average of 61% across first-year and continuing students), then faculty (54%), then course materials (50%). The study also found that an average of 35% use free online resources or study aid providers such as YouTube, Khan Academy, and Chegg.

What faculty feel they need and want includes:

  • Custom digital tool selection and options for each course (90% use some digital resources);
  • Familiarity with generative AI, along with methods and policies to prevent students from using it to cheat; and
  • Professional development support from institutions to improve teaching.

Despite prevention of student cheating jumping to the top spot of faculty concerns in 2023, "institutions have been slow to respond with changes to policy: only 3% of institutions have developed a formal policy regarding the use of AI tools, and most (58%) indicated they will begin to develop one 'soon,'" the report noted.

One of the studies released in April 2023 and used in this report revealed that 51% of current student AI users were likely or extremely likely to use generative AI writing tools, even if they were prohibited.

"Considering that our research also found that close to 80% of institutions and over 50% of individual courses have writing requirements to graduate, identifying a path forward is crucial," the 2023 report concluded.

To download and read the full study, visit the report page.

Tyton Partners is an investment banking and strategy consulting company focused on the education and global knowledge sector. Find more information at its Expertise page.

About the Author

Kate Lucariello is a former newspaper editor, EAST Lab high school teacher and college English teacher.
