Carnegie Mellon CyLab Survey Exposes Gap in the Way Companies Manage Cyber Risks

A recent Carnegie Mellon University (CMU) CyLab survey of corporate board directors reveals a gap in board and senior executive oversight of cyber risk. Drawing on responses from 703 individuals (primarily independent directors) serving on boards of public companies listed in the United States, the survey found that only 36 percent of respondents indicated their board had any direct involvement in overseeing information security.

The survey report also argued that cybersecurity should be treated as an enterprise risk management problem rather than an IT issue.

"Managing cyber risk is not just a technical challenge, but it is a managerial and strategic business challenge," said Pradeep Khosla, dean of CMU's College of Engineering and CyLab founder.

"There are real fiduciary duty and oversight issues involved here," said Jody Westby, adjunct distinguished fellow at Carnegie Mellon CyLab and the survey's lead author. "There is a clear duty to protect the assets of a company, and today, most corporate assets are digital."

"We also found that boards were only involved about 31 percent of the time in assessment of risk related to IT or personal data--the data that triggers security breach notification laws," said Westby.

Only 8 percent of survey respondents said their boards had a risk committee that is separate from the audit committee, according to Westby.

"Without the right organizational structure and interest from top officials, enterprise security can't be effective no matter how much money an organization throws at it," said Richard Power, co-author of the report and a distinguished fellow at Carnegie Mellon CyLab.

Power said the survey also shows that senior management hasn't budgeted for key positions requiring expertise in cybersecurity or privacy areas. "No wonder the number of security breaches has doubled in the past year--only 12 percent of the respondents have established functional separation of privacy and security, and most companies don't have C-level executives responsible for these areas," Power added.

To help company boards improve corporate governance of privacy and security, the survey recommends broad operational changes, ranging from establishing a board risk committee separate from the audit committee, to reviewing existing top-level policies, to creating a culture of security and respect for privacy.

About the Author

Dian Schaffhauser is a former senior contributing editor for 1105 Media's education publications THE Journal, Campus Technology and Spaces4Learning.
