Snowflake Intros New Open Source Large Language Model Optimized for the Enterprise

Data-as-a-service provider Snowflake has introduced a new open source large language model (LLM) called Snowflake Arctic. Designed to be "the most open, enterprise-grade LLM on the market," Arctic uses a Mixture-of-Experts (MoE) architecture optimized for complex enterprise workloads. In company tests, it excelled at those workloads, leading several industry benchmarks in areas such as SQL code generation and instruction following.

The company is releasing Arctic's weights under an Apache 2.0 license, which permits ungated personal, research, and commercial use, along with details of the research behind how the model was trained. Arctic also ships with code templates and flexible inference and training options, enabling users to quickly deploy and customize the model using their preferred frameworks.

Arctic is immediately available for serverless inference in Snowflake Cortex, Snowflake's fully managed service offering machine learning and AI solutions in the Data Cloud. It will also be accessible on Amazon Web Services (AWS) and other model gardens and catalogs.

"This is a watershed moment for Snowflake, with our AI research team innovating at the forefront of AI," said Snowflake CEO Sridhar Ramaswamy, in a statement. "By delivering industry-leading intelligence and efficiency in a truly open way to the AI community, we are furthering the frontiers of what open source AI can do. Our research with Arctic will significantly enhance our capability to deliver reliable, efficient AI to our customers."

The Snowflake AI Research Team adopted an MoE strategy to craft a compute-efficient yet adept language model. This "dense-MoE hybrid transformer architecture" draws on the work of the DeepSpeed team at Microsoft Research. It routes training and inference tasks across 128 experts, a substantial increase over other MoE models, such as Databricks' DBRX and Mistral AI's Mixtral.

Arctic's dense-MoE hybrid transformer architecture combines a 10B dense transformer model with a residual 128×3.66B MoE MLP, resulting in 480B total parameters, 17B of which are active per token, selected via top-2 gating. The company envisions Arctic as a versatile tool for companies to develop their own chatbots, co-pilots, and other GenAI applications.

All told, Arctic is equipped with 480 billion parameters, only 17 billion of which are active at any given time during training or inference. This approach reduced resource usage compared to similar models: Snowflake says Arctic consumed roughly 16x fewer training resources than Llama 3 70B, while DBRX consumed 8x more resources than Arctic.
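The dense-MoE hybrid idea described above can be illustrated with a toy forward pass: a small always-active dense path plus a residual MoE branch in which a router picks the top-2 of several expert MLPs per token, so only a fraction of total parameters is used at once. This is a minimal sketch with invented tiny dimensions and random weights, not Arctic's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions -- illustrative only, far smaller than Arctic's
# 10B dense / 128 x 3.66B expert configuration.
d_model, d_ff, n_experts, top_k = 8, 16, 4, 2

# Always-active dense branch, one small MLP per expert, and a router.
W_dense = rng.standard_normal((d_model, d_model)) * 0.1
experts = [
    (rng.standard_normal((d_model, d_ff)) * 0.1,
     rng.standard_normal((d_ff, d_model)) * 0.1)
    for _ in range(n_experts)
]
W_gate = rng.standard_normal((d_model, n_experts)) * 0.1

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def dense_moe_hybrid(x):
    """Dense MLP output plus a residual top-2 gated MoE branch."""
    dense_out = x @ W_dense                  # dense path: always computed
    logits = x @ W_gate                      # router score for each expert
    top = np.argsort(logits)[-top_k:]        # indices of the top-2 experts
    weights = softmax(logits[top])           # renormalize their scores
    moe_out = sum(
        w * (np.maximum(x @ W1, 0) @ W2)     # ReLU expert MLP
        for w, (W1, W2) in zip(weights, (experts[i] for i in top))
    )
    return dense_out + moe_out               # residual combination

token = rng.standard_normal(d_model)
out = dense_moe_hybrid(token)
```

Because only `top_k` of the `n_experts` expert MLPs run per token, compute scales with the active parameters (17B in Arctic's case) rather than the total (480B), which is the source of the training-cost savings cited above.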

That frugality was intentional, said Yuxiong He, a distinguished AI software engineer at Snowflake and one of the DeepSpeed creators. "As researchers and engineers working on LLMs, our biggest dream is to have unlimited GPU resources," He said in a statement. "And our biggest struggle is that our dream never comes true."

Arctic's training process involved a "dynamic data curriculum" that emulates human learning patterns by adjusting the balance between code and language data over time. Samyam Rajbhandari, a principal AI software engineer at Snowflake and another of DeepSpeed's creators, noted that this approach resulted in improved language and reasoning skills. Arctic was trained on a cluster of 1,000 GPUs over the course of three weeks, which amounted to a $2 million investment. But customers will be able to fine-tune Arctic and run inference workloads on a single server equipped with eight GPUs, Rajbhandari said.

Snowflake is expected to delve deeper into Arctic's capabilities at the upcoming Snowflake Data Cloud Summit, June 3-6 in San Francisco.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI, and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].
