
Google adds music-generation capabilities to the Gemini app

Read the full article, "Google adds music-generation capabilities to the Gemini app," on TechCrunch.

What Happened

Users will be able to use text, images, and videos as references to generate music.

Our Take

Google is doing feature soup again. Music generation is table stakes now (OpenAI Jukebox, Meta MusicGen, Suno), and slapping it into Gemini doesn't make it differentiated; it just makes it convenient for Google's existing users.

Unless Google's music output is demonstrably better than Suno (spoiler: it probably isn't), this is me-too product work. Cool engineering, zero strategy.

The real move would've been owning music as a core product. Instead, it's a checkbox feature inside a chatbot. Classic Google misdirection.

What To Do

If you're competing in music generation, ignore this: it's not a threat until Google ships it as a standalone product worth paying for.

Builder's Brief

Who

Developers building multimodal creative apps or audio-generation pipelines.

What changes

Google's music generation may become accessible via the Gemini API, a potential lower-cost or higher-quality alternative to existing audio-model providers.

When

Weeks.

Watch for

A music-generation endpoint appearing in the Gemini API docs with a pricing tier.
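If you want to automate that watch item, a minimal sketch: filter a fetched model list for music-related names. It assumes you already retrieve model names (e.g. via the google-genai SDK's `client.models.list()`); the keyword set and the sample names below are illustrative assumptions, not confirmed Gemini model IDs.

```python
# Hypothetical watcher sketch. Keywords and sample model names are
# assumptions for illustration; check the live Gemini API model list.

MUSIC_KEYWORDS = ("music", "lyria", "audio-gen")

def find_music_models(model_names):
    """Return the model names that look music-related (case-insensitive)."""
    return [
        name for name in model_names
        if any(kw in name.lower() for kw in MUSIC_KEYWORDS)
    ]

# Example with a hypothetical listing:
sample = [
    "models/gemini-2.0-flash",
    "models/lyria-realtime-exp",   # hypothetical music model ID
    "models/embedding-001",
]
print(find_music_models(sample))  # → ['models/lyria-realtime-exp']
```

Run this on a schedule against the real model listing and alert when the result set goes non-empty.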

What Skeptics Say

Google is 12+ months behind Suno and Udio in consumer music generation and entering a fragmented market with no clear monetization path; multimodal input is a differentiator on paper, but music AI has not proven it converts casual users into paying ones.
