MML Studio

Claude Sonnet 4.6 released as Anthropic's new default model

Claude Sonnet 4.6 Released on MML Studio

What Happened

Anthropic released Claude Sonnet 4.6 on February 17, 2026, as the new default model for Claude Code, replacing Sonnet 4.5. In internal coding tests, engineers preferred Sonnet 4.6's outputs 70% of the time. The upgrade is automatic; no configuration change is required for existing users.

Our Take

Okay, genuinely — free upgrades don't happen often in this industry. Usually 'new model' means 'new bill.' So when Anthropic just quietly makes Sonnet 4.6 the default and every Claude Code user gets it without touching a config, that's worth noting.

The 70% preference rate in coding tests is the number that matters here. Not a benchmark (those are cooked). Not a blog post. Actual engineers running actual tasks and picking the new one 7 out of 10 times. That's a real signal.

For a small shop like ours, Claude Code is basically a team member at this point. Better reasoning on multi-file refactors, fewer hallucinated APIs — that compounds fast across a sprint.

(The cynical read: this is also Anthropic quietly retiring Sonnet 4.5 without making a big deal of it. Which is fine. Just notice it.)

Look, I'm not going to pretend this is some paradigm-shattering release. It's just better. That's the whole story. And better for free is the best kind of better.

What To Do

Run your current Claude Code session against a task that previously needed multiple clarifications — Sonnet 4.6 is already active as the default, so you're testing it right now without doing anything.
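A low-friction way to do that is a single non-interactive run. This is a hedged sketch: the repo path and task below are placeholders, not from the article, and the `claude -p` print-mode flag is how the Claude Code CLI runs one prompt at the time of writing; check the current docs if it has changed.

```shell
# Placeholder repo and task, for illustration only.
# `claude -p` runs one non-interactive prompt against the current
# default model, which is now Sonnet 4.6 per the article.
cd ~/projects/my-app
claude -p "Refactor the duplicated date-parsing logic in src/utils into one helper"
```

Pick a task the old default fumbled, so any difference in clarification round-trips is visible.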

Builder's Brief

Who

Teams using the Claude API or Claude Code as their default coding assistant.

What changes

A default-model swap may silently shift output behavior, cost, and latency baselines in existing integrations.
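If the silent swap is a concern, pinning the model is the obvious mitigation. A minimal sketch, with the caveat that the environment variable name and the exact model identifier string below are assumptions, not confirmed by the article; verify both against the Claude Code documentation:

```shell
# Hedged sketch: pin Claude Code to the previous model so the default
# swap doesn't silently move your output/cost/latency baselines.
# Both the variable name and the model id are assumed, not confirmed.
export ANTHROPIC_MODEL="claude-sonnet-4-5"
```

Pin during the evaluation window, then unpin deliberately once the new baselines are acceptable.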

When

Now.

Watch for

Third-party evals (LMSYS, Aider leaderboard) confirming or rejecting the 70% coding-preference claim.

What Skeptics Say

A 70% preference rate from internal Anthropic engineers is a self-reported result with no external replication. Real-world gains for non-coding workflows may be marginal, and the model churn creates prompt-regression risk for teams with tuned system prompts.
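That prompt-regression risk is cheap to check. A minimal sketch, assuming you keep golden outputs from the old model for a fixed prompt set; the 0.8 similarity threshold is an arbitrary placeholder, and character-level diffing is a crude proxy you would likely replace with task-specific checks:

```python
import difflib

def regression_score(golden: str, candidate: str) -> float:
    # Character-level similarity between the stored golden output and
    # the new model's output for the same prompt (1.0 means identical).
    return difflib.SequenceMatcher(None, golden, candidate).ratio()

def flag_regressions(goldens, candidates, threshold=0.8):
    # Indices of prompts whose new output drifted below the threshold.
    # 0.8 is a placeholder; tune it per prompt suite.
    return [
        i for i, (g, c) in enumerate(zip(goldens, candidates))
        if regression_score(g, c) < threshold
    ]
```

Run your tuned prompts through the new default, diff against the goldens, and eyeball anything flagged before trusting the swap.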
