Study: 1 in 10 AI Prompts Could Expose Sensitive Data

A new study from data protection startup Harmonic Security found that nearly one in 10 prompts used by business users when interacting with generative artificial intelligence tools may inadvertently disclose sensitive data.

The study, conducted in the fourth quarter of 2024, analyzed prompts across generative AI platforms such as Microsoft Copilot, OpenAI's ChatGPT, Google Gemini, Claude, and Perplexity. While the majority of AI usage by employees involved mundane tasks like summarizing text or drafting documentation, 8.5% of prompts posed potential security risks.

Sensitive Data at Risk

Among the concerning prompts, 45.8% risked exposing customer data, including billing and authentication information. Another 26.8% involved employee-related data, such as payroll details, personal identifiers, and even requests for AI-assisted employee performance reviews.
The remaining sensitive prompts included:

  • Legal and finance information (14.9%): Sales pipeline data, investment portfolios, and merger and acquisition activity.
  • Security data (6.9%): Penetration test results, network configurations, and incident reports, which could be exploited by attackers.
  • Sensitive code (5.6%): Access keys and proprietary source code.

Harmonic Security's report also flagged concerns about employees using free-tier generative AI services, which often lack robust security measures. Many free-tier services explicitly state that user data may be used to train AI models, creating further risks of unintended disclosure.

Free-Tier Usage Raises Red Flags

The study revealed significant reliance on free-tier AI services, with 63.8% of ChatGPT users, 58.6% of Gemini users, 75% of Claude users, and 50.5% of Perplexity users opting for non-enterprise plans. These services often lack critical safeguards found in enterprise versions, such as the ability to block sensitive prompts or warn users about potential risks.

"Most generative AI use is mundane, but the 8.5% of prompts we analyzed potentially put sensitive personal and company information at risk," said Alastair Paterson, co-founder and CEO of Harmonic Security, in a statement. "Organizations need to address this issue, particularly given the high number of employees using free subscriptions. The adage that 'if the product is free, you are the product' rings especially true here."

Recommendations for Risk Mitigation

Harmonic Security urged companies to implement real-time monitoring systems to track and manage data entered into generative AI tools. The firm also recommended:

  • Ensuring employees use paid or enterprise AI plans that do not train on input data.
  • Gaining visibility into prompts to understand what information is being shared.
  • Blocking or warning users about risky prompts to prevent data leakage.
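As a rough illustration of the "block or warn" idea, a monitoring layer might scan outgoing prompts against patterns for common sensitive data before they reach an AI service. This is only a minimal sketch with hypothetical pattern names; real data-loss-prevention tools use far broader and more sophisticated rule sets than the few regexes assumed here.

```python
import re

# Illustrative patterns only -- hypothetical examples, not a complete rule set.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def check_prompt(prompt: str) -> bool:
    """Warn the user and block (return False) if the prompt looks risky."""
    hits = scan_prompt(prompt)
    if hits:
        print(f"Prompt blocked; matched patterns: {', '.join(hits)}")
        return False
    return True
```

A gateway like this would sit between employees and the AI tool, logging what is shared (visibility) and rejecting prompts that match known sensitive formats (blocking), mirroring the safeguards the report attributes to enterprise plans.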

While many organizations have begun implementing such measures, the report highlighted the need for broader adoption of these safeguards as generative AI becomes increasingly integrated into workplace processes.

"Generative AI tools hold immense potential for improving productivity, but without proper safeguards, they can become a liability. Organizations must act now to ensure sensitive data is protected while still leveraging the benefits of AI technology," Paterson said.

The full report is available on the Harmonic Security site.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].
