Cambridge Shears Data Back-up Time by Two-Thirds

A Cambridge University research lab has cut the time it took to back up more than 23 terabytes of data from seven days to two, as part of an overhaul of its storage area network.

The university's Juvenile Diabetes Research Foundation (JDRF) lab installed software from BakBone Software to automate the administrative tasks required in the back-up process, Computer Weekly (UK) reported.

The amount of data JDRF processed had risen by 3 terabytes in three years. Consequently, the time required to back up that data increased to the point that some jobs had to be postponed, putting source data at risk.

"If we did not address the deteriorating quality of our storage setup as soon as we did, we would have been unable to continue research--it is as simple as that," Systems Manager Vin Everett told Computerweekly.

JDRF spent two months testing the software to ensure it could handle the large volumes of data. According to Everett, the upgrade could be attempted only once because of JDRF's limited budget, so stress-testing the application beforehand was essential to avoiding costly rework later.

"If this installation went wrong, and we had to fix something later on, it would have eaten into the main research budget and compromised the quality of work we carry out here," said Everett.

About the Author

Paul McCloskey is contributing editor of Syllabus.
