AI-Focused Data Security Report Identifies Cloud Governance Gaps

Excessive permissions and AI-driven risks are leaving cloud environments dangerously exposed, according to a new report from Varonis, a data security and analytics specialist.

The company's 2025 State of Data Security Report, based on an analysis of 1,000 real-world IT environments, paints a troubling picture of enterprise cloud security in the age of AI. Among its most alarming findings: 99% of organizations had sensitive data exposed to AI tools, 98% used unverified or unsanctioned apps — including shadow AI — and 88% had stale but still-enabled user accounts that could provide entry points for attackers. Across platforms, weak identity controls, poor policy hygiene, and insufficient enforcement of security baselines like multifactor authentication (MFA) were widespread.

The report surfaces a range of trends across all major cloud platforms, many of which reveal systemic weaknesses in access control, data hygiene, and AI governance. AI plays a significant role, Varonis pointed out in an accompanying blog post:

"AI is everywhere. Copilots help employees boost productivity and agents provide front-line customer support. LLMs enable businesses to extract deep insights from their data.

"Once unleashed, however, AI acts like a hungry Pac-Man, scanning and analyzing all the data it can grab. If AI surfaces critical data where it doesn't belong, it's game over. Data can't be unbreached.

"And AI isn't alone — sprawling cloud complexities, unsanctioned apps, missing MFA, and more risks are creating a ticking time bomb for enterprise data. Organizations that lack proper data security measures risk a catastrophic breach of their sensitive information."

Additional findings include:

  • 99% of organizations have sensitive data exposed to AI tools: The report found that nearly all organizations had data accessible to generative AI systems, with 90% of sensitive cloud data, including AI training data, left open to AI access.
  • 98% of organizations have unverified apps, including shadow AI: Employees are using unsanctioned AI tools that bypass security controls and increase the risk of data leaks.
  • 88% of organizations have stale but enabled ghost users: These dormant accounts often retain access to systems and data, posing risks for lateral movement and undetected access.
  • 66% have cloud data exposed to anonymous users: Buckets and repositories are frequently left unprotected, making them easy targets for threat actors (a minimal audit sketch follows this list).
  • 1 in 7 organizations do not enforce multifactor authentication (MFA): The lack of MFA enforcement spans both SaaS and multi-cloud environments and was linked to the largest breach of 2024.
  • Only 1 in 10 organizations have labeled files: Poor file classification undermines data governance, making it difficult to apply access controls, encryption, or compliance policies.
  • 52% of employees use high-risk OAuth apps: These apps, often unverified or stale, can retain access to sensitive resources long after their last use.
  • 92% of companies allow users to create public sharing links: These links can be exploited to expose internal data to AI tools or unauthorized third parties.
  • Stale OAuth applications remain active in many environments: These apps may continue accessing data months after being abandoned, often without triggering alerts.
  • Model poisoning remains a major threat: Poorly secured training data and unencrypted storage can allow attackers to inject malicious data into AI models.
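
The anonymous-exposure finding is one of the easier items to verify directly in your own environment. The following Python sketch (not part of the Varonis report) uses boto3 to flag S3 buckets whose public access block is missing or incomplete, or whose policy status reports them as public. It covers AWS only; Azure Blob Storage and Google Cloud Storage would need their own equivalents, and the checks shown are illustrative rather than exhaustive.

```python
# Hypothetical audit sketch (not from the Varonis report): flag S3 buckets
# that may be reachable by anonymous users. Assumes AWS credentials with
# s3:ListAllMyBuckets, s3:GetBucketPublicAccessBlock, and
# s3:GetBucketPolicyStatus permissions are already configured.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def bucket_looks_public(bucket: str) -> bool:
    """Return True if the bucket lacks a full public access block
    or its policy status says it is public."""
    try:
        block = s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"]
        fully_blocked = all(block.values())
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            fully_blocked = False  # no public access block configured at all
        else:
            raise
    try:
        is_public = s3.get_bucket_policy_status(Bucket=bucket)["PolicyStatus"]["IsPublic"]
    except ClientError:
        is_public = False  # no bucket policy; fall back to the block check
    return is_public or not fully_blocked

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    if bucket_looks_public(name):
        print(f"REVIEW: {name} may allow anonymous access")
```

A bucket flagged by a script like this is not necessarily breached, but it is exactly the kind of unprotected repository the report describes as an easy target.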

The report offers a sobering assessment of how AI adoption is magnifying long-standing issues in cloud security. From excessive access permissions to shadow AI, stale user accounts, and exposed training data, the findings make clear that many organizations are not prepared for the speed and scale of today's risks. The report urges organizations to reduce their data exposure, implement strong access controls, and treat data security as foundational to responsible AI use.
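
As one concrete illustration of those recommendations, stale-but-enabled accounts and missing MFA can both be surfaced from an AWS IAM credential report. The sketch below is an assumption-laden example, not guidance from Varonis: it flags console users without MFA and users whose passwords have gone unused for 90 days (the threshold is the author's, not the report's), and it covers only AWS IAM rather than the SaaS and multi-cloud scope the report describes.

```python
# Hypothetical sketch (not from the Varonis report): use the AWS IAM
# credential report to surface console users without MFA and accounts
# whose passwords have not been used in 90+ days ("ghost users").
# The 90-day threshold is an assumption, not a figure from the report.
import csv
import io
import time
from datetime import datetime, timedelta, timezone

import boto3

iam = boto3.client("iam")

# Kick off report generation and poll until it is ready.
while iam.generate_credential_report()["State"] != "COMPLETE":
    time.sleep(2)

report = iam.get_credential_report()["Content"].decode("utf-8")
stale_cutoff = datetime.now(timezone.utc) - timedelta(days=90)

for row in csv.DictReader(io.StringIO(report)):
    if row["password_enabled"] != "true":
        continue  # only console users are relevant to these two checks
    if row["mfa_active"] != "true":
        print(f"NO MFA: {row['user']}")
    last_used = row["password_last_used"]
    if last_used not in ("no_information", "N/A"):
        if datetime.fromisoformat(last_used.replace("Z", "+00:00")) < stale_cutoff:
            print(f"STALE BUT ENABLED: {row['user']} (last login {last_used})")
```

Flagged accounts would then need either MFA enrollment or deactivation, which is the kind of baseline hygiene the report argues must precede broad AI adoption.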

The full report is available on the Varonis site (registration required).

About the Author

David Ramel is an editor and writer at Converge 360.
