Gartner Predicts 1 in 4 Cybersecurity Leaders Will Leave the Field by 2025

A new forecast from research firm Gartner estimates that nearly half of all cybersecurity leaders will change jobs by 2025, with 25% moving to non-security roles entirely due to multiple work-related stressors.

"Cybersecurity professionals are facing unsustainable levels of stress," explained Deepti Gopal, director analyst at Gartner, in a statement. "CISOs are on the defense, with the only possible outcomes that they don't get hacked or they do. The psychological impact of this directly affects decision quality and the performance of cybersecurity leaders and their teams."

Gartner points to organizational culture and under-prioritization of security risk management as culprits behind the predicted cybersecurity talent churn. "Gartner research shows that compliance-centric cybersecurity programs, low executive support and subpar industry-level maturity are all indicators of an organization that does not view security risk management as critical to business success," the research firm said. "Organizations of this type are likely to experience higher attrition as talent leaves for roles where their impact is felt and valued."

"Burnout and voluntary attrition are outcomes of poor organizational culture," added Gopal. "While eliminating stress is an unrealistic goal, people can manage incredibly challenging and stressful jobs in cultures where they're supported."

Gartner also noted:

  • By 2025, half of major data security incidents will be the result of lack of talent or human failure;
  • 69% of employees have bypassed their organization's cybersecurity guidance in the past year, according to a recent Gartner survey;
  • 74% of employees in the same survey said they would be willing to bypass cybersecurity guidance if it helped them or their team achieve a business objective; and
  • Also by 2025, half of medium and large enterprises will adopt an insider risk management program, up from 10% now.

The full forecast is available to Gartner clients at gartner.com.

About the Author

David Nagel is the former editorial director of 1105 Media's Education Group and editor-in-chief of THE Journal, STEAM Universe, and Spaces4Learning. A 30-year publishing veteran, Nagel has led or contributed to dozens of technology, art, marketing, media, and business publications.

He can be reached at [email protected]. You can also connect with him on LinkedIn at https://www.linkedin.com/in/davidrnagel/.
