AWS, Microsoft, Google, Others Make DeepSeek-R1 AI Model Available on Their Platforms

Leading cloud service providers are now making the open source DeepSeek-R1 reasoning model available on their platforms. The Chinese startup drew intense interest for the model's efficient processing, which reduces compute resource consumption, a key driver of high AI costs.

Amazon Web Services (AWS), Microsoft, and Google Cloud have all made the model available to their customers, but as of this writing they have yet to implement the per-token pricing structure used for other AI models, such as Meta's Llama 3.

Instead, DeepSeek-R1 users on these cloud platforms pay only for the computing resources they consume, rather than for the amount of text the model generates. AWS and Google have said this approach aligns with their existing pricing models for open source AI.

DeepSeek launched its latest DeepSeek-V3 model in December 2024. It was followed by the release of DeepSeek-R1, DeepSeek-R1-Zero, and DeepSeek-R1-Distill on Jan. 20, 2025. The DeepSeek-R1-Zero model reportedly features 671 billion parameters, and the DeepSeek-R1-Distill lineup offers models ranging from 1.5 billion to 70 billion parameters. On Jan. 27, 2025, the company expanded its portfolio with Janus-Pro-7B, a vision-based AI model.

DeepSeek-R1 is positioned as a cost-efficient alternative to proprietary AI models, particularly for organizations with large-scale AI deployments. The model was designed to process information more efficiently, reducing the overall compute burden.

However, cloud providers may ultimately profit more from infrastructure rentals than direct model usage fees, industry watchers have observed. And renting cloud servers for AI workloads often costs more than accessing models via APIs. AWS, for example, charges up to $124 per hour for an AI-optimized cloud server, which translates to nearly $90,000 per month for continuous usage. Microsoft Azure customers do not need to rent dedicated servers for DeepSeek, but they still pay for underlying computing power, leading to variable pricing depending on how efficiently they run the model.
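The monthly figure follows directly from the hourly rate. A back-of-the-envelope sketch, using only the rates quoted above (illustrative, not a live AWS price list):

```python
# Back-of-the-envelope check of the article's figures. The hourly rate
# is the one quoted in the article, not a current AWS price list.
HOURLY_RATE = 124.00       # USD per hour for an AI-optimized cloud server
HOURS_PER_MONTH = 24 * 30  # continuous usage over a 30-day month

monthly_cost = HOURLY_RATE * HOURS_PER_MONTH
print(f"${monthly_cost:,.0f} per month")  # $89,280 per month
```

At 720 hours of continuous usage, the bill lands just under the $90,000 the article cites.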

In contrast, organizations using Meta's Llama 3.1 through AWS pay $3 per 1 million tokens, a significantly lower upfront cost for those with intermittent AI needs. Tokens represent processed text, with 1,000 tokens equivalent to approximately 750 words, according to AI infrastructure provider Anyscale.

Smaller cloud providers, including Together AI and Fireworks AI, have already implemented fixed per-token pricing for DeepSeek-R1, a structure that could become more common as demand for cost-effective AI models grows.

For organizations seeking the lowest cost, DeepSeek-R1 is available directly through DeepSeek's own API at $2.19 per million tokens — three to four times cheaper than some Western cloud providers. However, routing AI workloads through Chinese servers raises data privacy and security concerns. Sensitive business information could be subject to Chinese government regulations, including potential data sharing under local laws. And many organizations are cautious about sending proprietary or customer data to servers outside their jurisdiction, especially in regions with less stringent privacy protections.
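To see where hourly server rental overtakes per-token API pricing, a rough break-even sketch using the figures quoted in this article (illustrative only; actual prices vary by provider, region, and instance type):

```python
# Rough break-even between per-token API pricing and hourly server rental,
# using the figures quoted in the article (illustrative, not price lists).
API_PRICE_PER_M_TOKENS = 2.19   # DeepSeek's own API, USD per 1M tokens
SERVER_RATE_PER_HOUR = 124.00   # AWS AI-optimized server, USD per hour

# Token volume per hour at which renting the server matches the API price:
breakeven_m_tokens = SERVER_RATE_PER_HOUR / API_PRICE_PER_M_TOKENS
print(f"Break-even: ~{breakeven_m_tokens:.1f}M tokens per hour")

# At roughly 750 words per 1,000 tokens (the Anyscale rule of thumb),
# that volume expressed in words of processed text:
words_per_hour = breakeven_m_tokens * 1_000_000 * 0.75
print(f"That is about {words_per_hour / 1e6:.0f}M words per hour")
```

The sketch suggests renting only pays off at a sustained volume of tens of millions of tokens per hour, which is consistent with the article's point that per-token pricing is the lower upfront cost for intermittent workloads.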

AWS, Microsoft, and Google have not disclosed how many customers are actively using DeepSeek-R1.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].
