TechCrunch

OpenAI launches GPT-5.4 with Pro and Thinking versions

Read the full article on TechCrunch.

What Happened

GPT-5.4 is billed as "our most capable and efficient frontier model for professional work."

Our Take

"Most capable and efficient frontier model" is marketing without data. We need actual benchmarks, latency, and cost-per-token.

The real problem: "Pro and Thinking versions" means we're no longer tracking model versions but *modality variants*. That's fragmentation. At some point, you're paying a per-mode tax and wondering which thing to use.

Flagship models should be singular. Variants are fine, but the complexity is starting to show.

What To Do

Ask OpenAI for a latency and cost comparison (GPT-4o vs. 5.4 Pro vs. 5.4 Thinking) before migrating anything.
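Until those numbers are published, a back-of-envelope comparison can be sketched like this. Every model ID and per-million-token price below is a placeholder assumption, not published pricing:

```python
# Back-of-envelope cost comparison across model variants.
# All per-1M-token prices are PLACEHOLDERS, not published rates.
PRICING = {
    "gpt-4o":           {"input": 2.50, "output": 10.00},  # assumed USD per 1M tokens
    "gpt-5.4-pro":      {"input": 5.00, "output": 20.00},  # hypothetical
    "gpt-5.4-thinking": {"input": 5.00, "output": 40.00},  # hypothetical; assumes reasoning tokens billed as output
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """USD cost of a single request under the placeholder price table."""
    p = PRICING[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example workload: 8k-token context, 1k-token completion, 10k requests/month.
for model in PRICING:
    monthly = 10_000 * request_cost(model, 8_000, 1_000)
    print(f"{model}: ${monthly:,.2f}/month")
```

Swap in real rates once they exist; the point is that a "Thinking" variant billing reasoning tokens as output can multiply costs even at identical list prices.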

Builder's Brief

Who

teams with GPT-4o or earlier GPT-5.x models hardcoded in production

What changes

new model ID requires regression testing against existing evals before upgrading; Pro vs Thinking pricing split may change cost profiles for long-context professional workflows

When

now

Watch for

MMLU and GPQA scores from independent evaluators, not OpenAI's own benchmarks
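The regression pass mentioned above has a minimal shape: run the same eval cases through the current and candidate model IDs and gate the swap on pass rate. The `call_model` stubs below stand in for real API clients, and the eval cases are illustrative assumptions:

```python
from typing import Callable

EVAL_CASES = [
    # (prompt, checker) pairs; checkers encode your existing expectations.
    ("Return the word OK and nothing else.", lambda out: out.strip() == "OK"),
    ("What is 17 * 23? Answer with the number only.", lambda out: out.strip() == "391"),
]

def pass_rate(call_model: Callable[[str], str]) -> float:
    """Fraction of eval cases the model passes."""
    passed = sum(1 for prompt, check in EVAL_CASES if check(call_model(prompt)))
    return passed / len(EVAL_CASES)

def safe_to_migrate(current: Callable[[str], str],
                    candidate: Callable[[str], str]) -> bool:
    """Gate the model-ID swap: the candidate must not regress on existing evals."""
    return pass_rate(candidate) >= pass_rate(current)

# Stubs standing in for real calls to the old and new model IDs.
current_model = lambda prompt: "OK" if "word OK" in prompt else "391"
candidate_model = lambda prompt: "OK" if "word OK" in prompt else "391"
print(safe_to_migrate(current_model, candidate_model))  # True → safe to swap
```

In production you would wire the callables to actual API calls and require more than parity on a two-case suite, but the gate itself stays this small.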

What Skeptics Say

Naming a model GPT-5.4 with a simultaneous "Thinking" variant is SKU proliferation, not capability transparency — it makes API migration decisions harder and hides which underlying architecture improvements actually drove gains. "Most capable and efficient" is unfalsifiable until third-party evals publish.

