The Dark Side of ChatGPT: 6 Generative AI Risks to Watch

Gartner has identified six critical areas where the use of large language models such as ChatGPT can present legal or compliance risks that enterprise organizations must be aware of — or face potentially dire consequences. Organizations should consider what guardrails to put in place in order to ensure responsible use of these tools, the research firm advised.

"The output generated by ChatGPT and other large language model (LLM) tools are prone to several risks," said Ron Friedmann, senior director analyst in Gartner's Legal & Compliance Practice, in a statement. "Legal and compliance leaders should assess if these issues present a material risk to their enterprise and what controls are needed …. Failure to do so could expose enterprises to legal, reputational and financial consequences."

Risks to consider include:

Fabricated and inaccurate answers. Writing produced by generative AI is notorious for being both convincing and potentially incorrect at the same time, Gartner noted. "ChatGPT is also prone to ‘hallucinations,' including fabricated answers that are wrong, and nonexistent legal or scientific citations," added Friedmann. "Legal and compliance leaders should issue guidance that requires employees to review any output generated by ChatGPT for accuracy, appropriateness and actual usefulness before being accepted."

Data privacy and confidentiality. Without the proper safeguards, any information entered into ChatGPT may become a part of its training dataset, Gartner pointed out. "Sensitive, proprietary, or confidential information used in prompts may be incorporated into responses for users outside the enterprise," said Friedmann. "Legal and compliance need to establish a compliance framework for ChatGPT use, and clearly prohibit entering sensitive organizational or personal data into public LLM tools."

Model and output bias. OpenAI has been transparent about its efforts to reduce bias in ChatGPT outputs. Yet bias and discrimination are "likely to persist," said Gartner. "Complete elimination of bias is likely impossible," but legal and compliance need to stay on top of laws governing AI bias, and make sure their guidance is compliant," asserted Friedmann.

Intellectual property and copyright risks. Because ChatGPT is trained on internet data — which by nature includes copyrighted material — its outputs "have the potential to violate copyright or IP protections," Gartner warned. Compounding the issue is the fact that ChatGPT does not cite sources for the text it generates. "Legal and compliance leaders should keep a keen eye on any changes to copyright law that apply to ChatGPT output and require users to scrutinize any output they generate to ensure it doesn't infringe on copyright or IP rights," said Friedmann.

Cyber fraud risks. "Bad actors are already misusing ChatGPT to generate false information at scale," Garter said, offering the example of fake reviews that can influence a consumer's purchasing decisions. "Moreover, applications that use LLM models, including ChatGPT, are also susceptible to prompt injection, a hacking technique in which malicious adversarial prompts are used to trick the model into performing tasks that it wasn't intended for such as writing malware codes or developing phishing sites that resemble well-known sites."

Consumer protection risks. Here, Gartner pointed to the importance of disclosing ChatGPT usage to consumers and making sure an organization's ChatGPT use complies with any legal regulations. As an example, the firm said, "the California chatbot law mandates that in certain consumer interactions, organizations must disclose clearly and conspicuously that a consumer is communicating with a bot."

More information is available to Gartner clients in "Quick Answer: What Should Legal and Compliance Leaders Know About ChatGPT Risks?" on the web here.

About the Author

Rhea Kelly is editor in chief for Campus Technology, THE Journal, and Spaces4Learning. She can be reached at [email protected].

Featured

  • pattern featuring interconnected lines, nodes, lock icons, and cogwheels

    Red Hat Enterprise Linux 9.5 Expands Automation, Security

    Open source solution provider Red Hat has introduced Red Hat Enterprise Linux (RHEL) 9.5, the latest version of its flagship Linux platform.

  • glowing lines connecting colorful nodes on a deep blue and black gradient background

    Juniper Launches AI-Native Networking and Security Management Platform

    Juniper Networks has introduced a new solution that integrates security and networking management under a unified cloud and artificial intelligence engine.

  • a digital lock symbol is cracked and breaking apart into dollar signs

    Ransomware Costs Schools Nearly $550,000 per Day of Downtime

    New data from cybersecurity research firm Comparitech quantifies the damage caused by ransomware attacks on educational institutions.

  • landscape photo with an AI rubber stamp on top

    California AI Watermarking Bill Garners OpenAI Support

    ChatGPT creator OpenAI is backing a California bill that would require tech companies to label AI-generated content in the form of a digital "watermark." The proposed legislation, known as the "California Digital Content Provenance Standards" (AB 3211), aims to ensure transparency in digital media by identifying content created through artificial intelligence. This requirement would apply to a broad range of AI-generated material, from harmless memes to deepfakes that could be used to spread misinformation about political candidates.