Generative AI Is a Top Emerging Risk for Enterprises, Gartner Says

Generative AI tools such as ChatGPT and Google Bard have cracked the top-10 list of emerging risks for enterprises, according to a new report by research firm Gartner.

For the report, Gartner polled 249 senior risk executives to develop a list of the top 20 emerging risks facing enterprises.

"Generative AI was the second most-frequently named risk in our second quarter survey, appearing in the top 10 for the first time," said Ran Xu, director, research, in the Gartner Risk & Audit Practice, in a prepared statement. "This reflects both the rapid growth of public awareness and usage of generative AI tools, as well as the breadth of potential use cases, and therefore potential risks, that these tools engender."

Gartner said the key risks of generative AI include:

  • Data privacy, since information entered into generative AI tools can be shared with third parties;

  • Information security, as hackers look for ways to "subvert it for their own ends," according to Xu; and

  • Intellectual property, as sensitive, copyrighted, or confidential information can be leaked through these tools.

The top 5 emerging risks in the latest report were:

  1. Third-party viability, cited by 67% of respondents;

  2. Mass availability of generative AI (66%);

  3. Financial planning uncertainty (62%);

  4. Cloud concentration risk (62%); and

  5. China trade tensions (56%).

The full report is available to Gartner clients.

About the Author

David Nagel is the former editorial director of 1105 Media's Education Group and editor-in-chief of THE Journal, STEAM Universe, and Spaces4Learning. A 30-year publishing veteran, Nagel has led or contributed to dozens of technology, art, marketing, media, and business publications.

He can be reached at [email protected]. You can also connect with him on LinkedIn at https://www.linkedin.com/in/davidrnagel/.

