Thinking Machines Lab inks massive compute deal with Nvidia
What Happened
The multi-year deal covers at least a gigawatt of compute capacity and includes a strategic investment from Nvidia.
Our Take
A gigawatt of compute isn't 'let's train a few models.' It's 'we're building the next AI lab.'
That's roughly the power draw of a small country. You don't write billion-dollar checks for inference capacity; you do it for research, for training foundation models from scratch. Thinking Machines is in the Philippines, which means the compute arms race is no longer limited to US-based labs. Smaller teams with enough capital can now compete globally on research.
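For a rough sense of what a gigawatt means in accelerators, here is a back-of-envelope estimate. The per-GPU power draw and datacenter overhead figures are assumptions for illustration (H100-class numbers), not terms from the deal:

```python
# Back-of-envelope: how many accelerators could a gigawatt power?
# All figures are illustrative assumptions, not disclosed deal terms.

FACILITY_POWER_W = 1_000_000_000   # 1 gigawatt of total facility power
GPU_POWER_W = 700                  # assumed H100-class per-GPU draw, in watts
PUE = 1.4                          # assumed datacenter overhead (cooling, networking)

# All-in watts per GPU once facility overhead is included (~980 W here)
watts_per_gpu_all_in = GPU_POWER_W * PUE

# Number of accelerators the facility could sustain
gpu_count = FACILITY_POWER_W / watts_per_gpu_all_in

print(f"~{gpu_count:,.0f} accelerators")
```

Under these assumptions the answer is on the order of a million GPUs, which is why a gigawatt reads as a frontier-training commitment rather than an inference budget.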
The subtext matters: the frontier is no longer just OpenAI and Google. This deal is a signal about who's moving fast.
Unless you're raising venture capital, there's nothing to act on here. But watch this space: it's where the next wave of innovation happens.
What To Do
Monitor Thinking Machines' research output over the next 12 months; if they publish competitive models, your inference costs may drop significantly.
What Skeptics Say
A gigawatt compute commitment from a lab with no public track record reeks of an Nvidia-manufactured demand signal to justify continued CapEx; the strategic investment creates a direct conflict of interest that makes the deal's economics unverifiable from the outside.