Educause Survey Reveals Growing Dissatisfaction Among Higher Education Cybersecurity Professionals

Nonprofit higher education association Educause surveyed 350 cybersecurity and privacy professionals in July 2023. The resulting report outlines those professionals' concerns and shows growing dissatisfaction with training, working conditions, and time constraints.

The report is the first in a series examining specific workforce areas in higher education to determine relevant findings and make recommendations. This report examines five key areas: respondent composition; department structure, size, and reporting lines; staffing and budgets; work role experiences; and competencies and professional development.

Key findings from the report note that:

  • Staffing shortages have negatively affected cybersecurity and privacy at respondents' institutions, even though many report adequate FTE budgets for those positions.
  • Professionals want remote and hybrid work options, and 68% said they have them, suggesting institutions are responding favorably.
  • Poor job satisfaction has prompted many to consider leaving: 56% said they are likely to apply for other jobs within higher education, and 55% outside of it, in the next year.
  • Professionals have identified problems and conflicts of interest in IT and cybersecurity that they feel should be reported to administrators outside IT.
  • Professionals feel their current workload is excessive because of increased demands in "compliance and regulations, monitoring and detection, and incident response and threat hunting."
  • Time demands for offensive and other specialized operations, threat intelligence, and forensics have decreased over the past year.
  • Respondents feel that technical skills and AI competency are highly important, but that employers must commit time and resources to training and professional development.
  • Cybersecurity and privacy needs are becoming more intertwined and expanding beyond current IT services.

The report concludes that employers should recognize that cybersecurity and privacy threats are growing and require more staff and training to address. To that end, it recommends focusing on recruitment, training, retention, workload, support, and professional development.

"More than half of the cybersecurity and privacy professionals who completed our workforce survey said that they are likely to apply for other positions in the next 12 months," said Nicole Muscanell, the report's author. "And despite there being more competitive salaries for industry jobs, respondents said they are equally likely to apply for positions within (56%) and outside of higher ed (55%)."

Learn more and read the report on the survey page.

About the Author

Kate Lucariello is a former newspaper editor, EAST Lab high school teacher and college English teacher.
