TechCrunch

The US military is still using Claude — but defense-tech clients are fleeing

Read the full article on TechCrunch.

What Happened

As the U.S. continues its aerial attack on Iran, Anthropic models are being used for many targeting decisions.

Our Take

Look, Claude powering military targeting decisions is peak awkward for Anthropic: they can't control the use, they can't claim the moral high ground, and legally it isn't their problem (customer use). The optics are terrible all the same.

Defense contractors fleeing Claude tells you the real story: either they don't trust it for weapons-grade accuracy, or they're spooked by sanctions risk and political scrutiny. When defense abandons you, it's not about ethics; it's about liability and performance.

Anthropic's safety theater doesn't extend to actual consequences. They keep making money while everyone else handles the fallout.

What To Do

If you're selling to defense, build for O1 or Grok instead — Claude's political baggage isn't worth the friction.

Builder's Brief

Who

teams building sensitive-sector or government-facing AI products on Anthropic APIs

What changes

Anthropic's brand risk creates vendor concentration exposure; procurement teams in regulated sectors need documented contingency providers

When

now

Watch for

Anthropic losing a named public-sector contract or publishing a revised acceptable use policy

What Skeptics Say

Defense-tech client attrition may be overstated — most enterprise buyers will not sacrifice model quality for brand politics, and Anthropic's actual API terms have not changed; the controversy is more reputational than contractual.
