No one has a good plan for how AI companies should work with the government
What Happened
As OpenAI transitions from a wildly successful consumer startup into a piece of national security infrastructure, the company seems ill-equipped to manage its new responsibilities.
Our Take
OpenAI's a 7-year-old product company that somehow became national security infrastructure. That's broken, and nobody wants to admit it.
Neither OpenAI nor Anthropic wants real transparency or oversight—they just want the money and legitimacy. The government's equally lost. Everyone's pretending this was planned. It wasn't.
The real problem: there's no mechanism for this to actually work. You've got venture-backed companies, federal mandates, and nobody accountable to anyone.
What To Do
Don't expect government to figure this out—expect regulatory capture instead and plan your compliance accordingly.
What Skeptics Say
The premise that no framework exists ignores that traditional defense contractors have navigated identical dual-use and ethics tensions for decades. OpenAI's governance gap is organizational immaturity, not a structurally novel problem, and framing it as unprecedented flatters the company while obscuring solvable questions of institutional design.