The Dark Side of ChatGPT: 6 Generative AI Risks to Watch

Gartner has identified six critical areas where the use of large language models such as ChatGPT can present legal or compliance risks that enterprise organizations must be aware of — or face potentially dire consequences. Organizations should consider what guardrails to put in place in order to ensure responsible use of these tools, the research firm advised.

"The output generated by ChatGPT and other large language model (LLM) tools are prone to several risks," said Ron Friedmann, senior director analyst in Gartner's Legal & Compliance Practice, in a statement. "Legal and compliance leaders should assess if these issues present a material risk to their enterprise and what controls are needed …. Failure to do so could expose enterprises to legal, reputational and financial consequences."

Risks to consider include:

Fabricated and inaccurate answers. Writing produced by generative AI is notorious for being both convincing and potentially incorrect at the same time, Gartner noted. "ChatGPT is also prone to ‘hallucinations,' including fabricated answers that are wrong, and nonexistent legal or scientific citations," added Friedmann. "Legal and compliance leaders should issue guidance that requires employees to review any output generated by ChatGPT for accuracy, appropriateness and actual usefulness before being accepted."

Data privacy and confidentiality. Without the proper safeguards, any information entered into ChatGPT may become a part of its training dataset, Gartner pointed out. "Sensitive, proprietary, or confidential information used in prompts may be incorporated into responses for users outside the enterprise," said Friedmann. "Legal and compliance need to establish a compliance framework for ChatGPT use, and clearly prohibit entering sensitive organizational or personal data into public LLM tools."

Model and output bias. OpenAI has been transparent about its efforts to reduce bias in ChatGPT outputs. Yet bias and discrimination are "likely to persist," said Gartner. "Complete elimination of bias is likely impossible," but legal and compliance need to stay on top of laws governing AI bias, and make sure their guidance is compliant," asserted Friedmann.

Intellectual property and copyright risks. Because ChatGPT is trained on internet data — which by nature includes copyrighted material — its outputs "have the potential to violate copyright or IP protections," Gartner warned. Compounding the issue is the fact that ChatGPT does not cite sources for the text it generates. "Legal and compliance leaders should keep a keen eye on any changes to copyright law that apply to ChatGPT output and require users to scrutinize any output they generate to ensure it doesn't infringe on copyright or IP rights," said Friedmann.

Cyber fraud risks. "Bad actors are already misusing ChatGPT to generate false information at scale," Garter said, offering the example of fake reviews that can influence a consumer's purchasing decisions. "Moreover, applications that use LLM models, including ChatGPT, are also susceptible to prompt injection, a hacking technique in which malicious adversarial prompts are used to trick the model into performing tasks that it wasn't intended for such as writing malware codes or developing phishing sites that resemble well-known sites."

Consumer protection risks. Here, Gartner pointed to the importance of disclosing ChatGPT usage to consumers and making sure an organization's ChatGPT use complies with any legal regulations. As an example, the firm said, "the California chatbot law mandates that in certain consumer interactions, organizations must disclose clearly and conspicuously that a consumer is communicating with a bot."

More information is available to Gartner clients in "Quick Answer: What Should Legal and Compliance Leaders Know About ChatGPT Risks?" on the web here.

About the Author

Rhea Kelly is editor in chief for Campus Technology, THE Journal, and Spaces4Learning. She can be reached at [email protected].

Featured

  • glowing digital brain above a chessboard with data charts and flowcharts

    Why AI Strategy Matters (and Why Not Having One Is Risky)

    If your institution hasn't started developing an AI strategy, you are likely putting yourself and your stakeholders at risk, particularly when it comes to ethical use, responsible pedagogical and data practices, and innovative exploration.

  • abstract pattern of lights and connecting lines

    Google Introduces Gemini Enterprise Platform

    Google Cloud has launched Gemini Enterprise, a unified artificial intelligence platform designed to integrate AI capabilities across enterprise workflows.

  • A Comprehensive Guide to the Best Value Evaluation Systems

    Choosing the most cost-effective evaluation system requires balancing price, usability and insight quality. In a landscape full of digital tools and data demands, it is important to prioritize platforms that deliver clear results without complicating operations.

  • glowing digital brain interacts with an open book, with stacks of books beside it

    Federal Court Rules AI Training with Copyrighted Books Fair Use

    A federal judge ruled this week that artificial intelligence company Anthropic did not violate copyright law when it used copyrighted books to train its Claude chatbot without author consent, but ordered the company to face trial on allegations it used pirated versions of the books.