AWS, Microsoft, Google, Others Make DeepSeek-R1 AI Model Available on Their Platforms

Leading cloud service providers are now offering the open source DeepSeek-R1 reasoning model on their platforms. The Chinese startup's model has generated intense interest for its efficient processing, which reduces compute resource consumption, a key driver of high AI costs.

Amazon Web Services (AWS), Microsoft, and Google Cloud have all made the model available to their customers, but as of this writing none had implemented the per-token pricing structure used for other AI models such as Meta's Llama 3.

Instead, DeepSeek-R1 users on these cloud platforms pay only for the computing resources they consume, rather than for the amount of text the model generates. AWS and Google have said this approach aligns with existing pricing models for open-source AI.

DeepSeek launched its DeepSeek-V3 model in December 2024, followed by DeepSeek-R1, DeepSeek-R1-Zero, and DeepSeek-R1-Distill on Jan. 20, 2025. The DeepSeek-R1-Zero model reportedly features 671 billion parameters, and the DeepSeek-R1-Distill lineup offers models ranging from 1.5 billion to 70 billion parameters. On Jan. 27, 2025, the company expanded its portfolio with Janus-Pro-7B, a vision-based AI model.

DeepSeek-R1 is positioned as a cost-efficient alternative to proprietary AI models, particularly for organizations with large-scale AI deployments. The model was designed to process information more efficiently, reducing the overall compute burden.

However, cloud providers may ultimately profit more from infrastructure rentals than direct model usage fees, industry watchers have observed. And renting cloud servers for AI workloads often costs more than accessing models via APIs. AWS, for example, charges up to $124 per hour for an AI-optimized cloud server, which translates to nearly $90,000 per month for continuous usage. Microsoft Azure customers do not need to rent dedicated servers for DeepSeek, but they still pay for underlying computing power, leading to variable pricing depending on how efficiently they run the model.

In contrast, organizations using Meta's Llama 3.1 through AWS pay $3 per 1 million tokens, a significantly lower upfront cost for those with intermittent AI needs. Tokens represent processed text, with 1,000 tokens equivalent to approximately 750 words, according to AI infrastructure provider Anyscale.
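The trade-off between the two pricing models comes down to usage volume. A rough back-of-the-envelope sketch, using the $124-per-hour server rate and $3-per-million-token figures cited above (the 10-million-token workload is a hypothetical example, not from the article):

```python
# Back-of-the-envelope cost comparison using the figures cited in the article.
HOURLY_RATE = 124.0        # USD/hour, AWS AI-optimized cloud server
PER_MILLION_TOKENS = 3.0   # USD per 1M tokens (Llama 3.1 on AWS)

def monthly_server_cost(hours=24 * 30):
    """Cost of renting a dedicated server continuously for one month."""
    return HOURLY_RATE * hours

def per_token_cost(tokens):
    """Cost of processing a given number of tokens under per-token pricing."""
    return tokens / 1_000_000 * PER_MILLION_TOKENS

def break_even_tokens_per_month():
    """Monthly token volume at which both pricing models cost the same."""
    return monthly_server_cost() / PER_MILLION_TOKENS * 1_000_000

print(f"Continuous server, one month: ${monthly_server_cost():,.0f}")
print(f"10M tokens, per-token rate:   ${per_token_cost(10_000_000):,.2f}")
print(f"Break-even volume: {break_even_tokens_per_month():,.0f} tokens/month")
```

On these numbers, a continuously rented server runs about $89,280 a month, while an organization with intermittent needs processing 10 million tokens would pay roughly $30 under per-token pricing; dedicated rental only pays off at very high sustained volumes.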

Smaller cloud providers, including Together AI and Fireworks AI, have already implemented fixed per-token pricing for DeepSeek-R1, a structure that could become more common as demand for cost-effective AI models grows.

For organizations seeking the lowest cost, DeepSeek-R1 is available via DeepSeek's own API at $2.19 per million tokens, three to four times cheaper than some Western cloud providers. However, routing AI workloads through Chinese servers raises data privacy and security concerns. Sensitive business information could be subject to Chinese government regulations, including potential data sharing under local laws. And many organizations are cautious about sending proprietary or customer data to servers outside their jurisdiction, especially in regions with less stringent privacy protections.

AWS, Microsoft, and Google have not disclosed how many customers are actively using DeepSeek-R1.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI, and future tech. He has been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he has written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].
