WWT, NVIDIA Introduce Framework for Secure, Scalable, Responsible AI Adoption

Technology services provider World Wide Technology and NVIDIA have jointly developed an AI security framework, dubbed the AI Readiness Model for Operational Resilience (ARMOR), designed to help organizations accelerate AI adoption while maintaining security, compliance, and operational resilience.

Structure of the Framework

The vendor-agnostic ARMOR framework provides "actionable, holistic guidance that embeds security across the full AI lifecycle from chip to deployment, whether cloud or on-premises," according to a news announcement. It is organized into six domains, each addressing a different aspect of AI security. Those domains, as described by WWT, are:

  • Governance, Risk, and Compliance (GRC): Ensures AI operations align with regulatory requirements, organizational policies, and ethical standards, managing risks across on-premises and cloud environments.
  • Model Security: Protects AI models from threats such as poisoning, inversion, and theft, ensuring integrity and reliability throughout their lifecycle.
  • Infrastructure Security: Secures the hardware and network foundation, including GPUs, DPUs, and cloud regions, to prevent unauthorized access or tampering.
  • Secure AI Operations: Enables real-time monitoring and rapid response to threats, ensuring secure operation of AI platforms in interconnected systems.
  • Secure Development Lifecycle (SDLC): Embeds security into the development of AI software and services, mitigating vulnerabilities like prompt injection from design to deployment.
  • Data Protection: Safeguards datasets, whether stored in locally connected storage or in a cloud data lake, ensuring confidentiality, integrity, and regulatory compliance without stifling innovation.

Developed with Higher Education Input

The framework aligns with industry standards such as the National Institute of Standards and Technology's (NIST) AI Risk Management Framework and was developed with real-world feedback from the Texas A&M University System and other early adopters, WWT said.

"ARMOR gives us a common language and structured approach for managing AI risk," commented Adam Mikeal, chief information security officer at Texas A&M University. "It's a practical solution for real-world AI security."

Integration with Industry Partners

ARMOR integrates with NVIDIA AI Enterprise for scalable enterprise AI operations, including NVIDIA NeMo Guardrails for safer, more reliable AI applications, and NVIDIA NIM microservices for secure, containerized AI deployment, WWT said. The framework also utilizes NVIDIA BlueField and NVIDIA DOCA Argus for AI security operations.

"With AI factories scaling at an unprecedented pace, organizations need security that can keep up with the speed, complexity and sensitivity of modern AI pipelines," said Arik Roztal, global head of AI Cybersecurity Business Development at NVIDIA. "WWT's ARMOR, powered by NVIDIA AI, delivers the performance and protection organizations need to confidently deploy and secure AI at scale."

Additional partner perspectives aligning product offerings with ARMOR are in development, WWT said.

Executive Perspective

"Organizations are in urgent need of a practical, recognized framework for securing AI deployments," said Neil Anderson, VP and CTO of Cloud, Infrastructure, and AI Solutions at WWT, in a statement. "What sets ARMOR apart is that it's not just theoretical. It's rooted in real-world applications, designed by experts, and refined through frontline engagements."

"Security and innovation can't sit on opposite sides of the table. True resilience demands foresight, integration, and a framework that evolves with the threat landscape," said Chris Konrad, vice president of Global Cyber at WWT. "The path forward is clear: no AI without ARMOR. ARMOR helps leaders answer the tough questions before adversaries or auditors do."

Additional Info

For more information, visit the ARMOR site.

About the Author

Rhea Kelly is editor in chief for Campus Technology, THE Journal, and Spaces4Learning. She can be reached at [email protected].
