IU-Led ResearchSOC to Help Secure Data for Geoscience Research

The Research Security Operations Center (ResearchSOC), a collaborative security response center led by Indiana University, is providing its data security and threat detection services to the Geodetic Facility for the Advancement of Geoscience (GAGE). GAGE is a national facility dedicated to the study of the Earth's shape, gravity field and rotation, operated by UNAVCO, a nonprofit, university-governed consortium.

"The data that the GAGE Facility archives and shares for the scientific community are precious because they record how the Earth changes shape over time through a huge range of dynamic processes," explained Rebecca Bendick, UNAVCO president, in a statement. "Information about the past state can never be re-measured if lost, so protecting this intellectual treasure is one of our most solemn responsibilities. Partnering with ResearchSOC ensures that we are using the most advanced and best practices for data security and stability."

ResearchSOC draws on the research cybersecurity capabilities of Indiana University, Duke University, the Pittsburgh Supercomputing Center and the University of California San Diego, and is supported by funding from the National Science Foundation. Its services include security monitoring by OmniSOC, a shared cybersecurity operations center also based at IU; a Vulnerability Identification Service; Shared Threat Intelligence for Network Gatekeeping and Automated Response (STINGAR), a tool for identifying and defending against network attacks; and services from IU's Center for Applied Cybersecurity Research (CACR), an organization focused on applied cybersecurity technology, education and policy.

"The goal of ResearchSOC is to secure the integrity and reproducibility of highly valuable science research data," said Von Welch, project director for ResearchSOC and director of CACR. "In today's threat environment, researchers need to be confident that we are doing everything we can to ensure the integrity of their work."

About the Author

Rhea Kelly is editor in chief for Campus Technology, THE Journal, and Spaces4Learning. She can be reached at [email protected].
