Report Highlights Security Risks of Open Source AI

A new report, "The State of Enterprise Open Source AI," from Anaconda and ETR, surveyed 100 IT decision-makers on the key trends shaping enterprise AI and open source adoption while also underscoring the critical need for trusted partners in the Wild West of open source AI.

Security in open source AI projects is a major concern, as the report reveals more than half (58%) of organizations use open source components in at least half of their AI/ML projects, with a third (34%) using them in three-quarters or more.

Along with that heavy usage come some heavy security concerns.

"While open source tools unlock innovation, they also come with security risks that can threaten enterprise stability and reputation," Anaconda said in a blog post. "The data reveals the vulnerabilities organizations face and the steps businesses are taking to safeguard their systems. Addressing these challenges is vital for building trust and ensuring the safe deployment of AI/ML models."

The report itself details how open source AI components pose significant security risks, ranging from vulnerability exposure to the use of malicious code. Organizations report varied impacts, with some incidents causing severe consequences, highlighting the urgent need for robust security measures in open source AI systems.

In fact, the report finds 29% of respondents say security risks are the most important challenge associated with using open source components in AI/ML projects.

[Figure: Open Source Security Risk Map (source: Anaconda).]

"These findings emphasize the necessity of robust security measures and trusted tools for managing open source components," the report said, with Anaconda helpfully volunteering that its own platform plays a vital role by offering curated, secure open source libraries and enabling organizations to mitigate risks while enabling innovation and efficiency in their AI initiatives.

Other key security-related data points in the report include:

  • Security Vulnerability Exposure:
    • 32% experienced accidental exposure of vulnerabilities.
    • 50% of these incidents were very or extremely significant.
  • Flawed AI Insights:
    • 30% encountered reliance on incorrect AI-generated information.
    • 23% categorized these impacts as very or extremely significant.
  • Sensitive Information Exposure:
    • Reported by 21% of respondents.
    • 52% of cases had severe impacts.
  • Malicious Code Incidents:
    • 10% faced accidental installation of malicious code.
    • 60% of these incidents were very or extremely significant.
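Because each category reports two nested percentages (how many respondents experienced the incident, and how many of those rated it very or extremely significant), it can help to combine them into a single share of all respondents. The multiplication below is an illustrative reading of the report's figures, not a calculation the report itself performs:

```python
# Approximate share of ALL respondents reporting a very/extremely
# significant incident of each type, as incidence rate x severity rate.
# Percentages are taken from the report; combining them this way is
# an illustrative interpretation, not the report's own analysis.
incidents = {
    "vulnerability exposure":       (0.32, 0.50),
    "flawed AI insights":           (0.30, 0.23),
    "sensitive information exposure": (0.21, 0.52),
    "malicious code":               (0.10, 0.60),
}

for name, (incidence, severe) in incidents.items():
    overall = incidence * severe
    print(f"{name}: roughly {overall:.1%} of all respondents")
# e.g. malicious code: roughly 6.0% of all respondents
```

Read this way, even the rarest category (malicious code) still translates to severe consequences for roughly 1 in 17 organizations surveyed.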

The lengthy and detailed report also delves into topics like:

  • Scaling AI Without Sacrificing Stability
  • Accelerating AI Development
  • How AI Leaders Are Outpacing Their Peers
  • Realizing ROI from AI Projects
  • Challenges with Fine-Tuning and Implementing AI Models
  • Breaking Down Silos

In conclusion, Anaconda listed these key takeaways:

  • Security Risk Management: Building trust in open source AI requires proactive security measures, including regular audits, the use of well-documented libraries, and collaborative efforts across teams to mitigate vulnerabilities. A secure foundation ensures innovation can thrive without compromising integrity.
  • Innovation through Open Source: Open source tools empower organizations with unparalleled flexibility and access to state-of-the-art technologies, enabling faster experimentation and deployment. This accessibility fosters a culture of collaboration and continuous improvement, essential for staying ahead in a competitive landscape.
  • Scaling with Confidence: As AI initiatives grow, maintaining system stability and managing dependencies is critical. A robust, scalable infrastructure that prioritizes reproducibility, collaboration, and performance helps organizations scale confidently while preserving operational resilience.
  • Realizing AI ROI: While many organizations anticipate returns within 12 to 18 months, addressing challenges like data quality, security, and scalability early on is key to accelerating ROI. Open source tools provide a cost-effective path to delivering value through both short-term gains and long-term strategic benefits.

Recommendations, meanwhile, include:

  • Strengthen Security Protocols:
    • Implement regular security audits and use automated tools to identify vulnerabilities in open source AI components.
    • Prioritize the selection of well-maintained open source libraries with clear security documentation and governance structures.
    • Foster collaboration between data science, IT, and security teams to ensure open source tools are used responsibly and securely.
  • Invest in Scalable Infrastructure:
    • Build infrastructure that supports the scaling of AI/ML models without compromising performance or security. Focus on managing dependencies between open source packages and minimizing model drift.
    • Leverage cloud-based or hybrid environments to ensure access to computational resources needed for large-scale AI deployments.
  • Optimize for Collaboration:
    • Use open source tools to foster collaboration across data science, IT, and business teams. By enabling multiple stakeholders to contribute to AI projects, organizations can drive better decision-making and more effective outcomes.
    • Ensure that collaboration tools and platforms support seamless integration with existing workflows, making it easier to share insights and results across departments.
  • Focus on Long-Term ROI:
    • Establish clear metrics for measuring the return on AI investments and track progress toward those goals. Organizations should prioritize initiatives that deliver both short-term value (e.g., cost savings through automation) and long-term strategic benefits (e.g., improved decision-making, enhanced customer experiences).
    • Address key challenges early, such as data quality, security risks, and integration complexities, to avoid delays in achieving ROI.
  • Embrace Innovation and Continuous Learning:
    • Encourage experimentation with new open source tools and frameworks to stay ahead in the rapidly evolving AI/ML landscape. By leveraging the latest advancements, organizations can continually refine their models and AI strategies.
    • Invest in ongoing training and upskilling of teams to ensure they have the expertise needed to maximize the potential of open source AI tools.
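As a minimal sketch of the first recommendation, a regular security audit starts with an exact inventory of installed open source packages that can be fed to a vulnerability scanner (for example pip-audit; the tool choice is an assumption, not something the report prescribes). Python's standard library can produce that inventory directly:

```python
from importlib import metadata

# One small, concrete step toward "regular security audits": list every
# installed distribution with its exact version, so the output can be
# pinned, diffed between audits, or passed to a vulnerability scanner.
def installed_packages() -> dict[str, str]:
    """Return {distribution name: version} for the current environment."""
    return {
        dist.metadata["Name"]: dist.version
        for dist in metadata.distributions()
    }

if __name__ == "__main__":
    # Emit in requirements-style "name==version" form for easy scanning.
    for name, version in sorted(installed_packages().items()):
        print(f"{name}=={version}")
```

Running this on a schedule and diffing the output is a cheap way to catch dependencies that changed between audits, which also supports the report's point about managing dependencies between open source packages.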

Anaconda said the data in this report are drawn from an August 2024 survey of IT decision-makers and practitioners involved in their organization's decisions surrounding the technologies used in AI/ML and data science initiatives. The survey gathered input from 100 participants. The full report is available on the Anaconda site (registration required).
