The Dark Side of ChatGPT: 6 Generative AI Risks to Watch

Gartner has identified six critical areas where the use of large language models such as ChatGPT can present legal or compliance risks that enterprise organizations must be aware of — or face potentially dire consequences. Organizations should consider what guardrails to put in place in order to ensure responsible use of these tools, the research firm advised.

"The output generated by ChatGPT and other large language model (LLM) tools are prone to several risks," said Ron Friedmann, senior director analyst in Gartner's Legal & Compliance Practice, in a statement. "Legal and compliance leaders should assess if these issues present a material risk to their enterprise and what controls are needed …. Failure to do so could expose enterprises to legal, reputational and financial consequences."

Risks to consider include:

Fabricated and inaccurate answers. Writing produced by generative AI is notorious for being both convincing and potentially incorrect at the same time, Gartner noted. "ChatGPT is also prone to ‘hallucinations,' including fabricated answers that are wrong, and nonexistent legal or scientific citations," added Friedmann. "Legal and compliance leaders should issue guidance that requires employees to review any output generated by ChatGPT for accuracy, appropriateness and actual usefulness before being accepted."

Data privacy and confidentiality. Without the proper safeguards, any information entered into ChatGPT may become a part of its training dataset, Gartner pointed out. "Sensitive, proprietary, or confidential information used in prompts may be incorporated into responses for users outside the enterprise," said Friedmann. "Legal and compliance need to establish a compliance framework for ChatGPT use, and clearly prohibit entering sensitive organizational or personal data into public LLM tools."

Model and output bias. OpenAI has been transparent about its efforts to reduce bias in ChatGPT outputs. Yet bias and discrimination are "likely to persist," said Gartner. "Complete elimination of bias is likely impossible," but legal and compliance need to stay on top of laws governing AI bias, and make sure their guidance is compliant," asserted Friedmann.

Intellectual property and copyright risks. Because ChatGPT is trained on internet data — which by nature includes copyrighted material — its outputs "have the potential to violate copyright or IP protections," Gartner warned. Compounding the issue is the fact that ChatGPT does not cite sources for the text it generates. "Legal and compliance leaders should keep a keen eye on any changes to copyright law that apply to ChatGPT output and require users to scrutinize any output they generate to ensure it doesn't infringe on copyright or IP rights," said Friedmann.

Cyber fraud risks. "Bad actors are already misusing ChatGPT to generate false information at scale," Garter said, offering the example of fake reviews that can influence a consumer's purchasing decisions. "Moreover, applications that use LLM models, including ChatGPT, are also susceptible to prompt injection, a hacking technique in which malicious adversarial prompts are used to trick the model into performing tasks that it wasn't intended for such as writing malware codes or developing phishing sites that resemble well-known sites."

Consumer protection risks. Here, Gartner pointed to the importance of disclosing ChatGPT usage to consumers and making sure an organization's ChatGPT use complies with any legal regulations. As an example, the firm said, "the California chatbot law mandates that in certain consumer interactions, organizations must disclose clearly and conspicuously that a consumer is communicating with a bot."

More information is available to Gartner clients in "Quick Answer: What Should Legal and Compliance Leaders Know About ChatGPT Risks?" on the web here.

About the Author

Rhea Kelly is editor in chief for Campus Technology, THE Journal, and Spaces4Learning. She can be reached at [email protected].

Featured

  • Two autonomous AI figures performing tasks in a tech environment; one interacts with floating holographic screens, while the other manipulates digital components

    Agentic AI Named Top Tech Trend for 2025

    Agentic AI will be the top tech trend for 2025, according to research firm Gartner. The term describes autonomous machine "agents" that move beyond query-and-response generative chatbots to do enterprise-related tasks without human guidance.

  • sleek fishing hook with a translucent email icon hanging from it

    Report Identifies Rise in Phishing-as-a-Service Attacks

    Cybersecurity researchers at Trustwave are warning about a surge in malicious e-mail campaigns leveraging Rockstar 2FA, a phishing-as-a-service (PhaaS) toolkit designed to steal Microsoft 365 credentials.

  • person signing a bill at a desk with a faint glow around the document. A tablet and laptop are subtly visible in the background, with soft colors and minimal digital elements

    California Governor Signs AI Content Safeguards into Law

    California Governor Gavin Newsom has officially signed off on a series of landmark artificial intelligence bills, signaling the state’s latest efforts to regulate the burgeoning technology, particularly in response to the misuse of sexually explicit deepfakes. The legislation is aimed at mitigating the risks posed by AI-generated content, as concerns grow over the technology's potential to manipulate images, videos, and voices in ways that could cause significant harm.

  • abstract technology icons connected by lines and dots

    Digital Layers and Human Ties: Navigating the CIO's Dilemma in Higher Education

    As technology permeates every aspect of life on campus, efficiency and convenience may come at the cost of human connection and professional identity.