Valdosta State Mops Up Data Breach

With the help of the Georgia Bureau of Investigation and its own campus police, the IT staff at Valdosta State University is continuing to investigate an incident of unauthorized access to a campus server. The server held student and staff Social Security numbers and grades dating back to the mid-1990s. Some public estimates stated that the security breach, first reported in December 2009, affected as many as 170,000 people.

"An initial investigation has found no evidence that any personal data was accessed or transferred," said Joe Newton, director of IT. "The breached server was secured and removed from the network."

The university has begun contacting individuals whose information might have been exposed and has set up a Web site to share updates on the investigation.

"We regret the incident and are reviewing and revising our procedures and practices to minimize the risk of a reoccurrence," Newton said.

About the Author

Dian Schaffhauser is a former senior contributing editor for 1105 Media's education publications THE Journal, Campus Technology and Spaces4Learning.
