The Dark Side of ChatGPT: 6 Generative AI Risks to Watch

Gartner has identified six critical areas where the use of large language models such as ChatGPT can present legal or compliance risks that enterprise organizations must be aware of — or face potentially dire consequences. Organizations should consider what guardrails to put in place in order to ensure responsible use of these tools, the research firm advised.

"The output generated by ChatGPT and other large language model (LLM) tools are prone to several risks," said Ron Friedmann, senior director analyst in Gartner's Legal & Compliance Practice, in a statement. "Legal and compliance leaders should assess if these issues present a material risk to their enterprise and what controls are needed …. Failure to do so could expose enterprises to legal, reputational and financial consequences."

Risks to consider include:

Fabricated and inaccurate answers. Writing produced by generative AI is notorious for being both convincing and potentially incorrect at the same time, Gartner noted. "ChatGPT is also prone to ‘hallucinations,' including fabricated answers that are wrong, and nonexistent legal or scientific citations," added Friedmann. "Legal and compliance leaders should issue guidance that requires employees to review any output generated by ChatGPT for accuracy, appropriateness and actual usefulness before being accepted."

Data privacy and confidentiality. Without the proper safeguards, any information entered into ChatGPT may become a part of its training dataset, Gartner pointed out. "Sensitive, proprietary, or confidential information used in prompts may be incorporated into responses for users outside the enterprise," said Friedmann. "Legal and compliance need to establish a compliance framework for ChatGPT use, and clearly prohibit entering sensitive organizational or personal data into public LLM tools."

Model and output bias. OpenAI has been transparent about its efforts to reduce bias in ChatGPT outputs. Yet bias and discrimination are "likely to persist," said Gartner. "Complete elimination of bias is likely impossible," but legal and compliance need to stay on top of laws governing AI bias, and make sure their guidance is compliant," asserted Friedmann.

Intellectual property and copyright risks. Because ChatGPT is trained on internet data — which by nature includes copyrighted material — its outputs "have the potential to violate copyright or IP protections," Gartner warned. Compounding the issue is the fact that ChatGPT does not cite sources for the text it generates. "Legal and compliance leaders should keep a keen eye on any changes to copyright law that apply to ChatGPT output and require users to scrutinize any output they generate to ensure it doesn't infringe on copyright or IP rights," said Friedmann.

Cyber fraud risks. "Bad actors are already misusing ChatGPT to generate false information at scale," Garter said, offering the example of fake reviews that can influence a consumer's purchasing decisions. "Moreover, applications that use LLM models, including ChatGPT, are also susceptible to prompt injection, a hacking technique in which malicious adversarial prompts are used to trick the model into performing tasks that it wasn't intended for such as writing malware codes or developing phishing sites that resemble well-known sites."

Consumer protection risks. Here, Gartner pointed to the importance of disclosing ChatGPT usage to consumers and making sure an organization's ChatGPT use complies with any legal regulations. As an example, the firm said, "the California chatbot law mandates that in certain consumer interactions, organizations must disclose clearly and conspicuously that a consumer is communicating with a bot."

More information is available to Gartner clients in "Quick Answer: What Should Legal and Compliance Leaders Know About ChatGPT Risks?" on the web here.

About the Author

Rhea Kelly is editor in chief for Campus Technology, THE Journal, and Spaces4Learning. She can be reached at [email protected].

Featured

  • interconnected cloud icons with glowing lines on a gradient blue backdrop

    Report: Cloud Certifications Bring Biggest Salary Payoff

    It pays to be conversant in cloud, according to a new study from Skillsoft The company's annual IT skills and salary survey report found that the top three certifications resulting in the highest payoffs salarywise are for skills in the cloud, specifically related to Amazon Web Services (AWS), Google Cloud, and Nutanix.

  • a hobbyist in casual clothes holds a hammer and a toolbox, building a DIY structure that symbolizes an AI model

    Ditch the DIY Approach to AI on Campus

    Institutions that do not adopt AI will quickly fall behind. The question is, how can colleges and universities do this systematically, securely, cost-effectively, and efficiently?

  • minimalist geometric grid pattern of blue, gray, and white squares and rectangles

    Windows Server 2025 Release Offers Cloud, Security, and AI Capabilities

    Microsoft has announced the general availability of Windows Server 2025. The release will enable organizations to deploy applications on-premises, in hybrid setups, or fully in the cloud, the company said.

  • digital brain made of blue circuitry on the left and a shield with a glowing lock on the right, set against a dark background with fading binary code

    AI Dominates Key Technologies and Practices in Cybersecurity and Privacy

    AI governance, AI-enabled workforce expansion, and AI-supported cybersecurity training are three of the six key technologies and practices anticipated to have a significant impact on the future of cybersecurity and privacy in higher education, according to the latest Cybersecurity and Privacy edition of the Educause Horizon Report.