IMS Global Updates Caliper Analytics Standard

The IMS Global Learning Consortium has announced Caliper Analytics v1.1, the latest version of its interoperability standard for learning data, designed to foster an open ecosystem for educational data and analytics.

Caliper Analytics v1.1 "makes it easier to deliver learning event data by providing guided language for describing, collecting and exchanging learning data across learning technologies, and promotes better data interoperability through a shared vocabulary for describing learning interactions," according to a news announcement. With the data streams enabled by Caliper, institutions can "produce a complete record of learning activity, not just outcomes, from all digital resources to support teaching and learning at scale while informing new learning models, student success programs and institutional strategic planning."

First created in 2015, the standard enables "the collection of valuable learning and tool usage data from digital resources, which can be used for predictive analytics and to deliver powerful insights about learning activity, instructional resource efficacy and student engagement," according to an IMS statement. Caliper has been adopted by LMS and learning tool companies such as Blackboard, D2L, Instructure, Elsevier, Intellify Learning, Kaltura, Learning Objects and McGraw-Hill Education; Canvas by Instructure and Ingram/VitalSource are among the first to be certified for Caliper Analytics v1.1.

IMS reports that institutions such as Penn State University; Purdue University; the University of California, Berkeley; the University of Kentucky; the University of Maryland, Baltimore County; the University of Michigan; and the Unizin Consortium "are using Caliper-formatted data from one or more learning tools to analyze and measure the effectiveness of learning activities to support their student success initiatives."

"Caliper enables high volume, real-time streams of activity data that can be tracked and analyzed to help inform academic planning, program and course design, and student intervention measures," said Rob Abel, chief executive officer for IMS Global Learning Consortium, in a statement. "The continued growth in adoption of Caliper Analytics in ed tech will result in a more productive and personalized environment that meets the evolving needs of students and educators."

For more information, visit the IMS site.

About the Author

Rhea Kelly is editor in chief for Campus Technology, THE Journal, and Spaces4Learning. She can be reached at [email protected].
