Report Finds Students Feeling Unprepared for Courses and Increasingly Turning to Generative AI, Social Media to Study

In its second annual "Study Trends Report," released in 2023, McGraw Hill found that college students were feeling unprepared for their courses, that they had turned to generative AI and social media to study, and that they would like more learning resources in a similar format.

The study, conducted by Morning Consult between July 18 and Aug. 11, 2023, surveyed 500 undergraduate college students and 200 college instructors. Some of the key findings include:

  • The share of students feeling unprepared for college courses nearly doubled, from 11% in 2022 to 21% in 2023. Over a third of instructors agreed, and felt the problem was worse than students themselves realized.
  • Over 80% of students had used ChatGPT, generative AI, or social media for study help, and would like more in this format to supplement their course materials. Over 70% felt they would study more and better if they had such resources.
  • A majority of students felt that COVID disruptions, lack of time, and added responsibilities had affected their mental health, leaving them overwhelmed (57%) and stressed (56%) by their studies. Over 90% of instructors agreed and worried that these mental health challenges were impacting undergraduate student success, while 24% of instructors felt their institutions did not provide adequate resources to help students cope.
  • Because of these pressures, over a third of students (35%) said they felt like dropping out, compared to 26% last year.

Despite increasingly positive attitudes toward AI chatbots as a study resource, both instructors and students recognized their flaws, chiefly around accuracy and trustworthiness. Even if the content were "developed and vetted by trusted academic sources," only 46% of instructors and 39% of students said they would be comfortable using AI tools for coursework.

"Students' learning needs are in constant flux, and they increasingly seek technology that mirrors the engaging, convenient format of tools and technology they use in their daily lives like ChatGPT and social media," said Justin Singh, chief transformation and strategy officer for McGraw Hill, adding that content providers must meet "high standards for accuracy and trustworthiness."

Visit this McGraw Hill blog page to read the study trends report.

About the Author

Kate Lucariello is a former newspaper editor, EAST Lab high school teacher and college English teacher.
