Import AI 451: Political superintelligence; Google’s society of minds, and a robot drummer
What Happened
Welcome to Import AI, a newsletter about AI research. Import AI runs on arXiv and feedback from readers. If you’d like to support this, please subscribe. AI might let us build “political superintelligence”:…But turning this into a societal upside requires lots of intentional work…As AI
Our Take
When people start talking about political superintelligence and Google’s ‘society of minds,’ they’re sidestepping the actual, immediate risk. It’s not about a robot drummer; it’s the alignment problem at an existential scale. The danger isn’t malice; it’s catastrophic misalignment: an objective function whose goal, even if seemingly benign, produces unintended, highly optimized outcomes that disregard human values.
My gut tells me that attempting to build superintelligence without solving the value alignment problem is a reckless engineering exercise. We’re trying to design a system of effectively unbounded complexity while giving it limited, messy human goals. That’s a recipe for an unpredictable disaster, not a societal upside; it’s a failure of engineering intent.
We're playing with systems whose cognitive scope is potentially infinite, and the intentional work required to ensure they remain bounded by human ethics is monumentally underfunded and dangerously naive.
What To Do
Halt generalized superintelligence research until verifiable, robust alignment protocols are standardized across major labs. Impact: high
Builder's Brief
What Skeptics Say
'Political superintelligence' is a provocation, not a research finding; Google's society-of-minds framing papers over the absence of any production deployment evidence, making this newsletter round-up more speculative than its authoritative tone implies.
