Future of Privacy Forum Joins SafeInsights Education Research Project

The Future of Privacy Forum (FPF) is partnering with OpenStax, the Rice University-based publisher of free, open educational resources, as part of the National Science Foundation-funded SafeInsights project.

Announced in April 2024, SafeInsights will bring together researchers, education institutions, and digital learning platforms to create a large-scale education research hub that will enable long-term research on the predictors of effective learning while protecting student privacy, OpenStax shared in a news announcement.

"By design, SafeInsights stringently protects student privacy through an innovative architecture that makes large-scale information about learning available for research without revealing that protected information to researchers," explained J.P. Slavinsky, technical director at OpenStax and executive director of SafeInsights.

The Future of Privacy Forum will collaborate with SafeInsights partners to enable privacy-preserving research studies to better understand student learning, the organization said. 

"Through this project, we're excited to lend the Future of Privacy Forum's expertise to help inform how researchers access rich learning data without compromising student privacy," said John Verdi, FPF's senior vice president for policy, in a statement. "Since its founding, FPF's work has been driven by a belief that fair and ethical use of technology can improve people's lives while safeguarding our privacy. SafeInsights' model and directive will be critical to advancing the next generation of education research."

Other SafeInsights partners and participating institutions include:

  • R&D partners with expertise in learning and education research, open science, technology, student data privacy, community engagement, and project management, such as AEM Corporation, Arizona State University (ASU), Center for Open Science, Digital Promise, Georgia Institute of Technology, Morehouse College, National Network of Education Research-Practice Partnerships, Tapia Center for Excellence and Equity in Education, TERC, The University of Chicago, University of Massachusetts Amherst, University of Pennsylvania, Washington University in St. Louis, and Worcester Polytechnic Institute; 
  • Digital learning platforms ASSISTments, EdPlus at ASU, CourseKata, Infinite Campus, iSTART, Quill.org, TERC's Data Arcade, UPenn's Massive Open Online Courses, and The WritingPal;
  • Additional thought partners and collaborators.

"Better research leads to better learning. SafeInsights will enable a community of researchers to safely study large, diverse groups of students over time as they use different learning platforms," commented Richard Baraniuk, Rice professor, OpenStax founder, and project lead. "Researchers will be able to explore new ways to understand learning for students at all levels of education, which can lead to unprecedented discoveries and next-level innovations."

For more information on SafeInsights, visit safeinsights.org.

About the Author

Rhea Kelly is editor in chief for Campus Technology, THE Journal, and Spaces4Learning. She can be reached at [email protected].
