Luma launches creative AI agents powered by its new ‘Unified Intelligence’ models
What Happened
Luma introduced Luma Agents, powered by its new “Unified Intelligence” models, designed to coordinate multiple AI systems and generate end-to-end creative work across text, images, video and audio.
Our Take
"Unified Intelligence" for multi-modal creative work sounds neat until you realize it's just an orchestration layer calling the same models everyone else uses.
The real question: Are Luma's underlying video and audio models actually better? Or are they the same models from a year ago wrapped in an agents interface? Multi-modal coordination is a feature. Better models are the product.
Honestly, if Luma's video model is legitimately better than Runway, the agents layer is a win. If not, it's packaging.
What To Do
Run a blind test comparing Luma Agent video outputs to Runway's Gen-3 on the same prompts—if Luma wins, it's real.
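A blind test only works if raters can't tell which system made which clip. Below is a minimal sketch of the mechanics, assuming you've already generated matched output pairs from both systems; the function name, the `rate` callback, and the system labels are illustrative, not part of any vendor API.

```python
import random

def blind_pairwise_test(pairs, rate, seed=0):
    """Blind A/B preference test over matched output pairs.

    pairs: list of (luma_clip, runway_clip) identifiers or file paths.
    rate(a, b): rater callback returning 0 if the first presented clip
        is preferred, 1 otherwise. The rater never sees system labels.
    Returns win counts keyed by system name.
    """
    rng = random.Random(seed)
    wins = {"luma": 0, "runway": 0}
    for luma_clip, runway_clip in pairs:
        order = [("luma", luma_clip), ("runway", runway_clip)]
        rng.shuffle(order)  # hide which system produced which clip
        choice = rate(order[0][1], order[1][1])
        wins[order[choice][0]] += 1
    return wins
```

In practice `rate` would show the two clips to a human judge; randomizing presentation order per pair is what keeps the comparison blind.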
What Skeptics Say
"Unified Intelligence" is rebranded orchestration that OpenAI, Google, and Adobe already ship; Luma's creative agent story depends on quality consistency across modalities that no vendor has demonstrated at production scale. The TAM for end-to-end creative automation is real, but the winner will be whoever owns the distribution, not whoever has the best models.
