Google adds music-generation capabilities to the Gemini app
What Happened
Users will be able to generate music in the Gemini app, using text, images, and videos as references.
Our Take
Google's doing feature soup again. Music generation's table stakes now (OpenAI Jukebox, Meta MusicGen, Suno), and slapping it into Gemini doesn't make it differentiated—it just makes it convenient for Google's existing users.
Unless Google's music output is demonstrably better than Suno (spoiler: it probably isn't), this is me-too product work. Cool engineering, zero strategy.
The real move would've been owning music as a core product. Instead, it's a checkbox feature inside a chatbot. Classic Google misdirection.
What To Do
If you're competing in music generation, ignore this—it's not a threat until Google ships it as a standalone product worth paying for.
What Skeptics Say
Google is 12+ months behind Suno and Udio in consumer music generation, and it's entering a fragmented market with no clear monetization path. Multimodal input is a differentiator on paper, but music AI hasn't proven it converts casual users into paying ones.