Virginia Commonwealth U Uses Video To Communicate Data Breach Details

Virginia Commonwealth University has put together a video laying out the details of a security breach that struck the campus last month, in order to inform those who may have been affected. The breach involved the records of 176,567 current and former students, staff, and faculty members, according to the university's Technology Services organization. VCU has 32,000 students at two Richmond-based campuses and a medical center.

As part of its response to the incident, the institution has sent a blanket e-mail to all potential victims, begun sending first-class letters to the same group, developed a dedicated Web site about the incident to inform the community, and posted a video, featuring interviews with the school's CIO and its information security officer, to YouTube and its own site. Information about the breach also appears as a link on the home page of the university's Web site.

As the university has made public, IT staff discovered the intrusion Oct. 24 during routine monitoring of servers. The compromised server was taken offline, and VCU began a forensic investigation to understand what activities had taken place and how. The investigation found that the machine had been infected with an Internet worm six days earlier, allowing an intruder, later determined to be operating offsite, to access the server and use it as a platform to compromise other servers on the network. That first server held no personal data. The intruder set up two accounts on a second server and accessed it for 16 minutes on Oct. 19.

That second server, which sat behind the university's firewall, houses applications that transfer data among university systems, such as Banner, as well as applications for parking, ID cards, and health services. It stored 10 files containing sensitive data such as Social Security numbers, dates of birth, and contact details.

According to CIO Mark Willis, because the intruder accessed that second server only briefly, establishing accounts and loading new files onto it, the university doesn't believe the intent was to access personal data. "Our investigation did not show that the data was stolen," he noted in the video.

In the same video, Dan Han, information security officer, explained that the lag between the discovery of the breach and its disclosure to affected people was due to the time-consuming nature of the forensic investigation. It "takes time," he said, "to determine the scope of the incident as well as any type of information that could have been compromised [as well as to] determine whether any information was breached, how the attackers got in, and to understand what information was out there on these servers."

He reiterated the university's belief that there's a "very low risk of actual compromise of personal data," noting, "It really seems like the attacker wasn't after the data."

The university has handed evidence from its analysis over to campus police as well as the Federal Bureau of Investigation. Willis said IT security staff members have removed the initially infected server. They have also "added some layers of security around these servers and changed the security architecture to provide a little bit more protection." He added that the IT team was also bringing in external consulting firms to perform a "top to bottom assessment to look at our security procedures to make sure we're following best practices."

About the Author

Dian Schaffhauser is a former senior contributing editor for 1105 Media's education publications THE Journal, Campus Technology and Spaces4Learning.
