Down the Analytics Rabbit Hole

The search for data that impacts student success may lead to more questions than answers.

When we first sketched out a topic for this month's feature on the quest for data that impacts student success, we had a slightly different story in mind. We wanted to come up with a handful of data types that any higher ed institution pursuing learning analytics should have on its list — say, five key data points that really impact student outcomes.

Of course, boiling analytics down to five types of data would be an oversimplification, but we hoped it could generate some interesting discussion. And with projects like the PAR Framework and Open Academic Analytics Initiative making great inroads in the field — generating predictive models with a significant degree of accuracy across various types of institutions — our goal seemed attainable. Surely the experts would agree that certain basic data — lecture capture usage, LMS activity, etc. — universally contributed to academic performance.

In reality, what we encountered was something of an analytics rabbit hole. Take lecture capture usage, for example: The University of Saskatchewan found that students in second-year STEM classes who accessed lecture capture systems weekly earned higher grades than students who did not regularly integrate lecture capture into their study habits. But when the model was applied to a first-year social sciences class, it "didn't hold up very well at all," as researcher Christopher Brooks told us. "There are so many different variables," he said — differences in the course, the students, the domain or the instructor could all skew the results.

Similarly, researchers at Brigham Young University in Utah determined that basic LMS data — students' time spent within the system and the number of page views they generate — is not enough to indicate student performance. By taking a more granular approach and categorizing specific types of page visits, they were able to uncover some insights into student learning, but those findings still require further exploration.

It's a realization that I imagine many institutions pursuing learning analytics must have upon delving into the data: The closer you look, the more questions you discover.

That doesn't mean higher ed should stop searching for answers in big data; rather, I think it highlights the need for continued research and, more importantly, the need to share findings and collaborate across institutions. And here's one place to start: At our Campus Technology Fall Forum this Nov. 4–5 in Chicago, University of Michigan professor Perry Samson will speak on "Predicting Student Success in Hybrid Courses Based on Student Participation," offering his lessons learned from linking student participation data with student outcomes in a large lecture class at U-M. Don't miss it!

About the Author

Rhea Kelly is editor in chief for Campus Technology, THE Journal, and Spaces4Learning. She can be reached at [email protected].
