Gartner's 2025 Magic Quadrant for Desktop as a Service reveals that while secure remote access remains a key driver of DaaS adoption, a growing number of deployments now focus on broader efficiency goals.
A recent report out of the MIT Media Lab found that despite $30-40 billion in enterprise spending on generative AI, 95% of organizations are seeing no business return.
CrowdStrike’s 2025 Threat Hunting Report found that AI tools are being weaponized and directly targeted, while cloud intrusions surged 136% in the first half of 2025.
In a recent survey from learning platform Quizlet, 85% of high school and college students and teachers said they use AI technology, up from 66% in 2024, a jump of 19 percentage points (roughly a 29% relative increase year over year).
A recent report from cybersecurity firm Flashpoint documented an escalation of threat activity across multiple fronts during the first half of 2025.
A recent report from Couchbase cautions that enterprises that fail to keep pace in AI adoption face financial losses, which it calculates at an average of up to $87 million a year for organizations that fall behind.
A new Thales report reveals that while enterprises are pouring resources into AI-specific protections, only 8% are encrypting the majority of their sensitive cloud data — leaving critical assets exposed even as AI-driven threats escalate and traditional security budgets shrink.
Ninety-three percent of students across the United States have used AI at least once or twice for school-related purposes, according to the latest AI in Education report from Microsoft.
Nearly nine out of 10 organizations are already using AI services in the cloud — but fewer than one in seven have implemented AI-specific security controls, according to a recent report from cybersecurity firm Wiz.
A report from OpenAI documents the misuse of artificial intelligence in cybercrime, social engineering, and influence operations, particularly campaigns targeting or operating through cloud infrastructure. In "Disrupting Malicious Uses of AI: June 2025," the company outlines how threat actors are weaponizing large language models and how OpenAI is pushing back.