Anthropic Announces Claude 3.5 Sonnet, First of Three 3.5 Releases

OpenAI and Google competitor Anthropic has announced a new member of its Claude large language model (LLM) family: Claude 3.5 Sonnet. This is the first release in the forthcoming Claude 3.5 product line, and the company says it surpasses its predecessors and competitors in intelligence, speed, and cost-efficiency, offering advanced capabilities at the mid-tier model price.

Operating at twice the speed of its predecessor, Claude 3 Opus, Claude 3.5 Sonnet sets new benchmarks for graduate-level reasoning, undergraduate-level knowledge, and coding proficiency, the company says. It also shows marked improvements in grasping nuance, humor, and complex instructions, and in producing high-quality content with a relatable tone. Claude 3.5 Sonnet excels in visual reasoning tasks as well, outperforming Claude 3 Opus on standard vision benchmarks; its capabilities include accurately transcribing text from imperfect images.

The company is billing the new LLM as ideal for complex tasks such as context-sensitive customer support and multi-step workflow orchestration. An internal evaluation showed Claude 3.5 Sonnet solving 64% of problems in agentic coding tests.

Anthropic launched its Claude 3 family in March. The company says it plans to release Claude 3.5 Haiku and Claude 3.5 Opus, the other members of the Claude 3.5 family, later this year, along with new modalities and enterprise application integrations. The company is also exploring features such as Memory to enhance personalization and efficiency.

Claude 3.5 Sonnet is available now for free on the company's website and the Claude iOS app, with enhanced rate limits for Claude Pro and Team plan subscribers. It's also accessible via the Anthropic API, Amazon Bedrock, and Google Cloud’s Vertex AI. The model is priced at $3 per million input tokens and $15 per million output tokens, featuring a 200K token context window.
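As a rough illustration of the pricing above, the per-request cost can be estimated from token counts. The following Python sketch is hypothetical (the function name and example token counts are the author's, not Anthropic's); it simply applies the published rates of $3 per million input tokens and $15 per million output tokens.

```python
def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the Claude 3.5 Sonnet API cost in US dollars.

    Rates reflect the published pricing: $3 per million input
    tokens and $15 per million output tokens.
    """
    input_rate = 3.00 / 1_000_000    # dollars per input token
    output_rate = 15.00 / 1_000_000  # dollars per output token
    return input_tokens * input_rate + output_tokens * output_rate

# Example: a request consuming 10,000 input and 2,000 output tokens
print(f"${estimate_cost(10_000, 2_000):.2f}")  # → $0.06
```

Note that actual billing depends on how the provider tokenizes the text, so a real estimate would first count tokens with the provider's own tokenizer.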

The company also introduced a new feature called "Artifacts," a dynamic workspace designed to let enterprise users generate and interact with content, such as code snippets and text documents, in real time.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI, and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].
