New Cloud Security Auditing Tool Utilizes AI to Validate Providers' Security Assessments

The Cloud Security Alliance (CSA) has announced a new artificial intelligence-powered system that automates the validation of cloud service providers' (CSPs) security assessments, aiming to improve transparency and trust across the cloud computing landscape.

Introduced at CSA's Cloud Trust Summit, Valid-AI-ted represents a major step forward for the nonprofit's Security, Trust, Assurance and Risk (STAR) program, leveraging large language models (LLMs) to perform rapid, objective reviews of STAR Level 1 self-assessments. CSA describes the system as the first of its kind to offer automated scoring and detailed qualitative feedback at scale.

"Our focus on security-conscious innovation led to the creation of Valid-AI-ted and will continue to see us deliver forward-looking initiatives that push the boundaries of secure, AI-driven technology," said Jim Reavis, CSA CEO and co-founder, in a statement.

Redefining STAR Level 1 Assurance

CSA's STAR Registry, which publicly documents the security and privacy controls of cloud services, has long relied on CSP self-assessments for its Level 1 listings. The quality of those submissions has varied, however, often leaving end users to interpret them on their own.

Valid-AI-ted aims to resolve this by introducing standardized, AI-assisted grading. The tool evaluates responses against CSA's Cloud Controls Matrix (CCM) and produces granular, domain-specific scores. Providers that meet the required benchmark earn a distinctive "Valid-AI-ted" badge, which boosts their visibility on the STAR Registry.
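To make that flow concrete, here is a minimal sketch of what LLM-based, per-domain grading of a self-assessment could look like. CSA has not published Valid-AI-ted's internals, so everything here is illustrative: the domain names, the badge threshold, and the `llm_score` stub (a stand-in for an actual model call) are all assumptions.

```python
# Hypothetical sketch of AI-assisted STAR Level 1 grading.
# All names, domains, and thresholds are invented for illustration;
# this is not CSA's actual implementation.

BADGE_THRESHOLD = 0.80  # assumed passing benchmark, not CSA's real value

def llm_score(question: str, answer: str) -> float:
    """Stand-in for an LLM call that rates an answer from 0.0 to 1.0.
    A real system would prompt a model with the CCM control text
    alongside the provider's response."""
    # Toy heuristic so the sketch runs without an API key:
    # longer, more specific answers score higher.
    return min(len(answer.split()) / 50.0, 1.0)

def grade_assessment(assessment: dict[str, list[tuple[str, str]]]) -> dict:
    """Score each CCM domain, then decide badge eligibility."""
    domain_scores = {}
    for domain, qa_pairs in assessment.items():
        scores = [llm_score(q, a) for q, a in qa_pairs]
        domain_scores[domain] = sum(scores) / len(scores)
    overall = sum(domain_scores.values()) / len(domain_scores)
    return {
        "domain_scores": domain_scores,
        "overall": overall,
        "badge": overall >= BADGE_THRESHOLD,
    }

# Example: two (question, answer) pairs in one illustrative domain.
result = grade_assessment({
    "IAM": [
        ("How is MFA enforced?",
         "MFA is required for all privileged accounts via hardware "
         "tokens, reviewed quarterly."),
        ("How are stale accounts handled?", "Automated deprovisioning."),
    ],
})
print(result)
```

The per-domain averages correspond to the "granular, domain-specific scoring" described above; in practice the qualitative feedback would come from the model's rationale for each answer, not from a numeric heuristic.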

Free for Members, Discount for Attendees

The system is offered at no cost to CSA member organizations, which get unlimited assessment submissions. Non-members pay a standard $595 fee and can resubmit assessments up to 10 times; the fee is discounted to $395 through the end of June for attendees of CSA's Cloud Trust Summit.

The automated tool's benefits include:

  • Consistent quality assurance: Ensures assessments meet a robust security baseline.
  • Actionable insights: Highlights specific gaps and areas for improvement.
  • Recognition: Signals proactive security practices to customers and regulators.
  • Path to maturity: Helps organizations transition toward STAR Level 2 third-party audits.

Market Integration and Licensing

CSA is also opening the door to third-party integration. Solution providers can embed the Valid-AI-ted scoring rubric into their own Governance, Risk, and Compliance (GRC) offerings by obtaining a CCM license.
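CSA has not published the rubric's format, but for illustration, a licensed rubric might ship as structured data that a GRC platform loads and applies. The JSON shape and weights below are purely hypothetical:

```python
import json

# Hypothetical rubric artifact a GRC tool might consume under a CCM
# license; the field names, domain IDs, and weights are invented.
rubric_json = """
{
  "version": "ccm-4.0",
  "domains": [
    {"id": "IAM", "weight": 0.25, "min_score": 0.7},
    {"id": "DSP", "weight": 0.25, "min_score": 0.7},
    {"id": "LOG", "weight": 0.50, "min_score": 0.6}
  ]
}
"""

rubric = json.loads(rubric_json)

def weighted_overall(domain_scores: dict[str, float]) -> float:
    """Combine per-domain scores using the rubric's weights."""
    return sum(d["weight"] * domain_scores[d["id"]]
               for d in rubric["domains"])

print(weighted_overall({"IAM": 0.9, "DSP": 0.8, "LOG": 0.75}))
```

Shipping the rubric as data rather than code would let each GRC vendor apply a consistent scoring standard inside its own product, which is presumably the point of licensing the CCM.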

The move underscores CSA's continued push for transparency and standardization in an increasingly complex cloud security environment. By automating the first tier of assurance, CSA hopes to accelerate both compliance and customer trust.

For more information, visit the CSA site.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].

Featured

  • From the Kuali Days 2025 Conference: A CEO's View of Planning for AI

    How can a company serving higher education navigate the changes AI brings to ed tech? What will customers expect? CT talks with Kuali CEO Joel Dehlin, who shared his company's AI strategies with attendees at Kuali Days 2025 in Anaheim.

  • Cloud Security Alliance Offers Playbook for Red Teaming Agentic AI Systems

    The Cloud Security Alliance has introduced a guide for red teaming agentic AI systems, targeting the security and testing challenges posed by increasingly autonomous artificial intelligence.

  • Training the Next Generation of Space Cybersecurity Experts

    CT asked Scott Shackelford, Indiana University professor of law and director of the Ostrom Workshop Program on Cybersecurity and Internet Governance, about the possible emergence of space cybersecurity as a separate field that would support changing practices and foster future space cybersecurity leaders.

  • OpenAI Report Identifies Malicious Use of AI in Cloud-Based Cyber Threats

    A report from OpenAI identifies the misuse of artificial intelligence in cybercrime, social engineering, and influence operations, particularly those targeting or operating through cloud infrastructure. In "Disrupting Malicious Uses of AI: June 2025," the company outlines how threat actors are weaponizing large language models for malicious ends — and how OpenAI is pushing back.