Snowflake Intros New Open Source Large Language Model Optimized for the Enterprise

Data-as-a-service provider Snowflake has introduced a new open source large language model (LLM) called Snowflake Arctic. Designed to be "the most open, enterprise-grade LLM on the market," Arctic has a unique Mixture-of-Experts (MoE) architecture optimized for complex enterprise workloads. In company tests, it excelled at those workloads, leading several industry benchmarks in SQL code generation and instruction following, among others.

The company is releasing Arctic's weights under an Apache 2.0 license — which permits ungated personal, research, and commercial use — along with details of the research behind how the model was trained. Arctic also comes with code templates and flexible inference and training options, enabling users to quickly deploy and customize the model using their preferred frameworks.

Arctic is immediately available for serverless inference in Snowflake Cortex, Snowflake's fully managed service offering machine learning and AI solutions in the Data Cloud. It will also be accessible on Amazon Web Services (AWS) and other model gardens and catalogs.

"This is a watershed moment for Snowflake, with our AI research team innovating at the forefront of AI," said Snowflake CEO Sridhar Ramaswamy, in a statement. "By delivering industry-leading intelligence and efficiency in a truly open way to the AI community, we are furthering the frontiers of what open source AI can do. Our research with Arctic will significantly enhance our capability to deliver reliable, efficient AI to our customers."

The Snowflake AI Research Team adopted an MoE strategy to craft a resource-efficient yet capable language model. This "dense-MoE hybrid transformer architecture" draws on the work of the DeepSpeed team at Microsoft Research. It funnels training and inference tasks to 128 experts, a substantial increase compared to other MoE models, such as Databricks' DBRX and Mistral AI's Mixtral.

Arctic's dense-MoE hybrid transformer architecture combines a 10B dense transformer model with a residual 128×3.66B MoE MLP, resulting in 480B total and 17B active parameters chosen using top-2 gating. The company envisions Arctic as a versatile tool for companies to develop their own chatbots, co-pilots, and other GenAI applications.
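Those headline figures are straightforward to check. The back-of-the-envelope sketch below (illustrative Python only, not Snowflake's code) reproduces the published parameter counts from the numbers above: a 10B dense backbone, 128 experts of 3.66B parameters each, and top-2 routing.

```python
# Rough check of Arctic's published parameter counts, assuming the
# figures stated in the article (10B dense transformer, 128 experts
# at 3.66B parameters each, top-2 gating). Illustrative arithmetic,
# not Snowflake's actual implementation.

DENSE_PARAMS = 10e9          # dense transformer backbone
NUM_EXPERTS = 128            # residual MoE MLP experts
PARAMS_PER_EXPERT = 3.66e9   # parameters per expert
TOP_K = 2                    # experts activated per token (top-2 gating)

total_params = DENSE_PARAMS + NUM_EXPERTS * PARAMS_PER_EXPERT
active_params = DENSE_PARAMS + TOP_K * PARAMS_PER_EXPERT

print(f"Total parameters:  ~{total_params / 1e9:.0f}B")   # ~478B (quoted as 480B)
print(f"Active parameters: ~{active_params / 1e9:.1f}B")  # ~17.3B (quoted as 17B)
```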

All told, Arctic is equipped with 480 billion parameters, only 17 billion of which are active at any given time during training or inference. This approach helped to decrease resource usage compared to similar models: Snowflake says Arctic consumed roughly 16x fewer training resources than Llama 3 70B, and roughly 8x fewer than DBRX.

That frugality was intentional, said Yuxiong He, a distinguished AI software engineer at Snowflake and one of the DeepSpeed creators. "As researchers and engineers working on LLMs, our biggest dream is to have unlimited GPU resources," He said in a statement. "And our biggest struggle is that our dream never comes true."

Arctic's training process involved a "dynamic data curriculum" to emulate human learning patterns by adjusting the balance between code and language over time. Samyam Rajbhandari, a principal AI software engineer at Snowflake and another of DeepSpeed's creators, noted that this approach resulted in improved language and reasoning skills. Arctic was trained on a cluster of 1,000 GPUs over the course of three weeks, which amounted to a $2 million investment. But customers will be able to fine-tune Arctic and run inference workloads on a single server equipped with 8 GPUs, Rajbhandari said.
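For a rough sense of scale, the figures Rajbhandari cited can be turned into a back-of-the-envelope estimate. The short Python sketch below (illustrative arithmetic only, assuming a 21-day run) converts the reported cluster size, duration, and budget into GPU-hours and an implied hourly cost.

```python
# Back-of-the-envelope estimate from the training budget cited above
# (1,000 GPUs for about three weeks, ~$2 million total).
# Purely illustrative arithmetic, not Snowflake's accounting.

gpus = 1_000
days = 21                      # "about three weeks" (assumed)
gpu_hours = gpus * days * 24   # total GPU-hours consumed

budget = 2_000_000             # reported training cost in USD
cost_per_gpu_hour = budget / gpu_hours

print(f"GPU-hours: {gpu_hours:,}")                          # 504,000
print(f"Implied cost: ~${cost_per_gpu_hour:.2f}/GPU-hour")  # ~$3.97
```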

Snowflake is expected to delve deeper into Arctic's capabilities at the upcoming Snowflake Data Cloud Summit, June 3-6 in San Francisco.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI, and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].
