Indiana U Researchers Create New Model for Studying Student Learning at Scale

Researchers at Indiana University have developed a new model for studying the effectiveness of teaching practices — not just in a single classroom or context, but in a variety of classrooms at multiple universities. The project, dubbed ManyClasses, allows researchers to determine what works across a diverse range of disciplines, institution types, course formats and student populations.

"ManyClasses is a large-scale, distributed experiment," explained Ben Motz, a research scientist in the IU Bloomington College of Arts and Sciences' Department of Psychological and Brain Sciences and member of the ManyClasses research team, in a video about the initiative. "The basic idea is that rather than doing a single experiment in one classroom on some instructional intervention, instead we're going to do that same experiment in multiple classrooms — actually in dozens of classrooms — all in parallel. If we're actually going to test a theory about how people learn in educational settings, we probably shouldn't just test in single settings; instead we should test in many dozens of settings, so that we can understand the full breadth of what the outcomes might be if different types of teachers embed the same intervention in lots of different types of courses."

For the first ManyClasses experiment, Motz and fellow IU researcher Emily Fyfe recruited instructors from 17 different disciplines and 15 campuses at five universities — the University of Minnesota, the University of Michigan, the University of Nebraska-Lincoln, Penn State University and Indiana University (all members of the Unizin consortium) — to analyze the optimal timing for instructor feedback on student assignments. The study collected data from the institutions' Canvas learning management systems on 2,081 students in 38 courses, comparing the effect of immediate vs. delayed feedback on learning outcomes.

The result: While traditional wisdom has suggested that immediate feedback from instructors is most effective for students, the ManyClasses data revealed no difference between immediate and delayed feedback.

"The main conclusion of the study — to the great surprise of many teachers — is that there is no overall effect of feedback timing that spans all learning environments," said Fyfe, an assistant professor in the Department of Psychological and Brain Sciences, in a university news article. "The findings should provide some comfort to teachers. If they take a few days to return feedback, there is no evidence that the delay will hamper their students' progress, and in some cases, the delay might even be helpful."

The study will be published in July in the journal Advances in Methods and Practices in Psychological Science.

About the Author

Rhea Kelly is editor in chief for Campus Technology, THE Journal, and Spaces4Learning. She can be reached at [email protected].
