U Michigan Students Benefit from Course Data

Students at the University of Michigan now have access to an online tool intended to help them make better decisions about the courses they're considering. The Academic Reporting Toolkit (ART) 2.0 is a data visualization program that crunches historical data from 9,273 courses to show users the paths followed by students who have taken particular classes. It's intended not only for students but also for advisors, faculty and administrators.

The program is the second generation of a tool that has been in use for 12 years, primarily by faculty, to examine information such as mean grades and grade distributions, enrollment in other classes, grade correlations among classes and the effect of standardized test scores on course performance. The new version uses the same data but presents it in an interface designed for students to follow.

Previously, students seeking this type of course information had to ask other students or turn to online sites that rate courses and professors.

The development work was undertaken by the Digital Innovation Greenhouse within the Office of Digital Education & Innovation. DIG, as it's known, works with campus groups to scale up promising software. Other DIG projects include ECoach, to send personalized messages to STEM students in large introductory courses; GradeCraft, a gamified learning management system; and Student Explorer, an early warning system for advisors.

"What we're trying to do is increase the visibility of useful data that already exists," said ART 2.0 creator August "Gus" Evrard, a professor of physics, in a university press release. "Accuracy is important and our aim is to provide data you can understand and trust." The first-generation version of ART was designed by an IT advisory committee in the College of Literature, Science and the Arts, led by Evrard, and was later expanded to include the university's engineering programs. The program was initially used by faculty who needed to advise students on the best route to graduation.

"It provides everyone on campus with an array of new information about courses — who takes them, when, what they go on to major in, and what other courses these students take before and after this," added DIG principal investigator Timothy McKay, a professor of physics. "It's our hope that students will use this information to choose courses which are right for them — to personalize their own education."

Evrard noted that the program will continue evolving as DIG uncovers additional useful ways to share data with the campus community.

About the Author

Dian Schaffhauser is a former senior contributing editor for 1105 Media's education publications THE Journal, Campus Technology and Spaces4Learning.
