TechCrunch

Thinking Machines Lab inks massive compute deal with Nvidia

What Happened

The multi-year deal involves at least a gigawatt of compute power and also includes a strategic investment from Nvidia.

Our Take

A gigawatt of compute isn't 'let's train a few models.' It's 'we're building the next AI lab.'

That's roughly what a small country consumes. You don't write billion-dollar checks for inference capacity; you do it for research — training foundation models, not serving endpoints. Thinking Machines is a young lab with no shipped product, which means the compute arms race is no longer limited to the established incumbents. Smaller teams with enough capital can now compete globally on frontier research.
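To make the scale concrete, here is a hedged back-of-envelope sketch of how many accelerators a gigawatt of facility power could support. The per-GPU power figure and overhead factor are assumptions, not numbers from the article: roughly 700 W for an H100-class GPU at the chip, multiplied up for server components and cooling.

```python
# Back-of-envelope: how many accelerators fit in a gigawatt?
# Assumptions (illustrative, not from the article):
#   - ~700 W per H100-class GPU at the chip (published TDP)
#   - ~1.7x overhead for host servers, networking, and cooling (PUE),
#     giving roughly 1.2 kW of facility power per GPU

GPU_POWER_W = 700        # H100-class TDP
OVERHEAD_FACTOR = 1.7    # server + cooling overhead (assumed)
FACILITY_W_PER_GPU = GPU_POWER_W * OVERHEAD_FACTOR  # ~1190 W per GPU

def gpus_per_gigawatt(gigawatts: float = 1.0) -> int:
    """Rough count of GPUs a given facility power budget supports."""
    return int(gigawatts * 1e9 / FACILITY_W_PER_GPU)

print(gpus_per_gigawatt(1.0))  # -> 840336, i.e. on the order of 800k GPUs
```

Under these assumptions a single gigawatt supports hundreds of thousands of top-end GPUs — a fleet comparable to what the largest labs train on today, which is why the deal reads as a research bet rather than a serving footprint.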

The subtext matters: the frontier race is no longer just OpenAI and Google. This deal is a signal about who's moving fast.

Unless you're raising venture capital, there's no immediate action here. But watch this space — it's where the next wave of innovation is likely to come from.

What To Do

Monitor Thinking Machines' research output over the next 12 months; if they publish competitive models, your inference costs may drop significantly.
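If you want to operationalize that watch item, a simple way is to track your blended cost per million tokens and check whether a new entrant undercuts it. The sketch below uses hypothetical placeholder prices and an assumed input/output traffic split — substitute your own provider quotes and ratios.

```python
# Sketch: would a new model provider undercut your current blended
# inference cost? All prices here are hypothetical placeholders in
# $ per million tokens, not real quotes from any provider.

def blended_cost_per_mtok(input_price: float, output_price: float,
                          output_ratio: float = 0.25) -> float:
    """Blended $ per million tokens.

    output_ratio: assumed fraction of total tokens that are output
    tokens (output is typically priced higher than input).
    """
    return input_price * (1 - output_ratio) + output_price * output_ratio

# Current provider vs. a hypothetical cheaper entrant
incumbent = blended_cost_per_mtok(input_price=3.00, output_price=15.00)
entrant = blended_cost_per_mtok(input_price=1.00, output_price=5.00)

savings = 1 - entrant / incumbent
print(f"blended: ${incumbent:.2f} vs ${entrant:.2f} "
      f"({savings:.0%} cheaper)")
```

Re-run the comparison whenever a new model ships; if a credible entrant's blended cost comes in meaningfully lower, that's the signal that the compute build-out is translating into cheaper inference for you.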

Builder's Brief

Who

AI platform teams dependent on Nvidia GPU allocation

What changes

GPU supply concentration deepens; teams on waitlists or spot markets should expect sustained pricing pressure

When

Months

Watch for

Thinking Machines Lab's first model release or API availability announcement

What Skeptics Say

A gigawatt compute commitment from a lab with no public track record reeks of an Nvidia-manufactured demand signal to justify continued CapEx; the strategic investment creates a direct conflict of interest that makes the deal's economics unverifiable from the outside.
