The Dark Side of ChatGPT: 6 Generative AI Risks to Watch

Gartner has identified six critical areas where the use of large language models such as ChatGPT can present legal or compliance risks that enterprise organizations must be aware of — or face potentially dire consequences. Organizations should consider what guardrails to put in place in order to ensure responsible use of these tools, the research firm advised.

"The output generated by ChatGPT and other large language model (LLM) tools are prone to several risks," said Ron Friedmann, senior director analyst in Gartner's Legal & Compliance Practice, in a statement. "Legal and compliance leaders should assess if these issues present a material risk to their enterprise and what controls are needed …. Failure to do so could expose enterprises to legal, reputational and financial consequences."

Risks to consider include:

Fabricated and inaccurate answers. Writing produced by generative AI is notorious for being both convincing and potentially incorrect at the same time, Gartner noted. "ChatGPT is also prone to ‘hallucinations,' including fabricated answers that are wrong, and nonexistent legal or scientific citations," added Friedmann. "Legal and compliance leaders should issue guidance that requires employees to review any output generated by ChatGPT for accuracy, appropriateness and actual usefulness before being accepted."

Data privacy and confidentiality. Without the proper safeguards, any information entered into ChatGPT may become a part of its training dataset, Gartner pointed out. "Sensitive, proprietary, or confidential information used in prompts may be incorporated into responses for users outside the enterprise," said Friedmann. "Legal and compliance need to establish a compliance framework for ChatGPT use, and clearly prohibit entering sensitive organizational or personal data into public LLM tools."

Model and output bias. OpenAI has been transparent about its efforts to reduce bias in ChatGPT outputs. Yet bias and discrimination are "likely to persist," said Gartner. "Complete elimination of bias is likely impossible," but legal and compliance need to stay on top of laws governing AI bias, and make sure their guidance is compliant," asserted Friedmann.

Intellectual property and copyright risks. Because ChatGPT is trained on internet data — which by nature includes copyrighted material — its outputs "have the potential to violate copyright or IP protections," Gartner warned. Compounding the issue is the fact that ChatGPT does not cite sources for the text it generates. "Legal and compliance leaders should keep a keen eye on any changes to copyright law that apply to ChatGPT output and require users to scrutinize any output they generate to ensure it doesn't infringe on copyright or IP rights," said Friedmann.

Cyber fraud risks. "Bad actors are already misusing ChatGPT to generate false information at scale," Garter said, offering the example of fake reviews that can influence a consumer's purchasing decisions. "Moreover, applications that use LLM models, including ChatGPT, are also susceptible to prompt injection, a hacking technique in which malicious adversarial prompts are used to trick the model into performing tasks that it wasn't intended for such as writing malware codes or developing phishing sites that resemble well-known sites."

Consumer protection risks. Here, Gartner pointed to the importance of disclosing ChatGPT usage to consumers and making sure an organization's ChatGPT use complies with any legal regulations. As an example, the firm said, "the California chatbot law mandates that in certain consumer interactions, organizations must disclose clearly and conspicuously that a consumer is communicating with a bot."

More information is available to Gartner clients in "Quick Answer: What Should Legal and Compliance Leaders Know About ChatGPT Risks?" on the web here.

About the Author

Rhea Kelly is editor in chief for Campus Technology, THE Journal, and Spaces4Learning. She can be reached at [email protected].

Featured

  • student reading a book with a brain, a protective hand, a computer monitor showing education icons, gears, and leaves

    4 Steps to Responsible AI Implementation

    Researchers at the University of Kansas Center for Innovation, Design & Digital Learning (CIDDL) have published a new framework for the responsible implementation of artificial intelligence at all levels of education.

  • three glowing stacks of tech-themed icons

    Research: LLMs Need a Translation Layer to Launch Complex Cyber Attacks

    While large language models have been touted for their potential in cybersecurity, they are still far from executing real-world cyber attacks — unless given help from a new kind of abstraction layer, according to researchers at Carnegie Mellon University and Anthropic.

  • Hand holding a stylus over a tablet with futuristic risk management icons

    Why Universities Are Ransomware's Easy Target: Lessons from the 23% Surge

    Academic environments face heightened risk because their collaboration-driven environments are inherently open, making them more susceptible to attack, while the high-value research data they hold makes them an especially attractive target. The question is not if this data will be targeted, but whether universities can defend it swiftly enough against increasingly AI-powered threats.

  • magnifying glass revealing the letters AI

    New Tool Tracks Unauthorized AI Usage Across Organizations

    DevOps platform provider JFrog is taking aim at a growing challenge for enterprises: users deploying AI tools without IT approval.