Who Will Govern Intelligence?
AI policy is power. Who decides the future?
There’s a growing movement to regulate AI before it regulates us. On the surface, this sounds responsible—ethical, even. But look closer and you’ll see a more troubling picture: a powerful moral framework, backed by billionaires and institutions, increasingly shaping the rules of innovation [1].
That movement is often associated with the philosophy of Effective Altruism (EA). And while many in the community are sincere and thoughtful, the influence of its worldview on AI is expanding rapidly: through funding, talent pipelines, research agendas, and quiet lobbying efforts that could soon hard-code its assumptions into law [2][3].
This isn’t just a philosophical disagreement. It’s a civilizational fork in the road. One path believes AI must be tightly managed, constrained, and slowed—lest it spiral out of control. The other believes AI is leverage: a generational opportunity to rebuild what’s broken, accelerate what’s stalled, and elevate what’s possible.
We’re dangerously close to choosing the wrong path.
The Cost of Overcorrecting
The dominant EA narrative about AI is built on fear. Not irrational fear, but modeled fear: carefully calculated risks, scenario trees, expected-value losses. This framework elevates hypothetical future catastrophes over present, tangible benefits. It urges caution, regulation, and alignment as moral imperatives. It rewards the most alarmed voice in the room [4].
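To see why that framing rewards alarm, here is a deliberately toy sketch of the expected-value arithmetic. Every number is invented for illustration and drawn from no EA analysis; the point is only the shape of the reasoning, in which a vanishingly small probability attached to an astronomically bad outcome swamps any certain, near-term benefit.

```python
# A toy sketch of the expected-value framing described above.
# Every number here is an invented placeholder, not an estimate from any source.

p_catastrophe = 1e-4        # assumed probability of an AI-driven catastrophe
catastrophe_cost = 1e12     # assumed loss if it happens (arbitrary units of value)
near_term_benefit = 1e6     # assumed certain benefit of deploying AI now (same units)

expected_loss = p_catastrophe * catastrophe_cost

print(f"Expected catastrophic loss: {expected_loss:,.0f}")      # 100,000,000
print(f"Certain near-term benefit:  {near_term_benefit:,.0f}")  # 1,000,000

# Under this arithmetic, the hypothetical tail risk outweighs the tangible
# benefit a hundredfold, so "slow everything down" falls out as the rational
# answer almost regardless of what the near-term gains are.
```

Whoever posits the largest catastrophe controls the conclusion; the math itself never pushes back.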
But fear is a poor foundation for progress.
By focusing almost exclusively on existential scenarios, EA-aligned institutions risk suffocating the near-term potential of AI: to streamline healthcare, to scale education, to revive dying industries, to extend the capabilities of individuals and small teams, and yes—to challenge entrenched power structures that prefer a slower, more centralized future [5].
AI isn’t just a threat to be contained. It’s a multiplier of human agency. It compresses expertise, automates drudgery, and makes powerful tools accessible. That’s not job destruction—that’s job transformation.
If we allow AI to flourish, we will see new industries emerge, new classes of entrepreneurs rise, and new ways for individuals to create, solve, and earn—at a scale that previous technological revolutions never approached.
And in the name of “safety,” we may end up entrenching a new kind of stagnation.
Models Aren’t Morality
At the heart of this push is the assumption that we can model our way to moral clarity. That if we just run enough simulations, analyze enough edge cases, and create the right alignment protocols, we can make AI safe for everyone—forever.
But the real world isn’t as clean as a spreadsheet. And models, no matter how rigorous, are simplifications. They reflect the assumptions, fears, and values of the people who design them. EA’s concern about value lock-in is valid. But few acknowledge the irony: by defining the “correct” risks to focus on, the “legitimate” uses of compute, and the “safe” deployment strategies, the movement risks hard-coding its own worldview into the foundation of AI policy [6].
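A toy example makes the point. The sketch below is not any published risk model, and the `verdict` helper and its numbers are invented for illustration; it simply runs the same expected-value comparison under two different modelers’ assumptions and shows that the policy conclusion flips with the prior.

```python
# A toy illustration (not any published risk model): the same expected-value
# comparison, run under two modelers' different assumptions.

def verdict(p_catastrophe: float, catastrophe_cost: float, benefit: float) -> str:
    """Return the policy conclusion this toy model produces."""
    return "pause" if p_catastrophe * catastrophe_cost > benefit else "build"

# Modeler A assumes a 1-in-10,000 chance of a trillion-unit loss.
print(verdict(1e-4, 1e12, 1e6))   # -> pause

# Modeler B assumes a 1-in-10,000,000 chance of the same loss.
print(verdict(1e-7, 1e12, 1e6))   # -> build

# Nothing in the arithmetic adjudicates between A and B: the conclusion is
# carried entirely by whichever assumptions the modeler wrote down.
```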
The Hidden Dangers of Centralized Ethics
Many of the policy proposals now floating through Washington and Brussels trace their roots to EA-aligned safety labs, technical papers, or governance institutes [1][7]. These proposals often frame themselves as universal, neutral, or mathematically obvious. But they’re not. They’re based on a particular worldview—one that sees individual judgment as fragile, markets as dangerous, and open experimentation as a threat.
This framework tends to elevate consensus, committees, and top-down constraints. It treats intelligence as something to contain—not something to unleash. And it implicitly assumes that future systems must be aligned to a static vision of human values—defined and enforced by those who claim to know best [3].
But values aren’t static. And neither is intelligence.
And there’s a deeper risk here: in the name of protecting humanity, we hand over control of one of the most powerful technologies ever created to governments and NGOs populated by unelected policy experts. Not engineers. Not entrepreneurs. Not builders. But career bureaucrats and ideologues whose incentives are to manage risk through control—and to manage society the same way.
We should be less worried about what AI might do in the wrong hands, and more worried about what the wrong people might do with AI in theirs.
Once centralized regulatory regimes take hold, it won’t be the open-source researcher or the small lab shaping progress. It will be a handful of gatekeepers—issuing licenses, defining safety, deciding who builds what and when.
And if you think today’s governments struggle to manage inflation, immigration, or infrastructure—imagine giving them a superintelligent information system with the ability to influence speech, economic policy, and defense.
The surest way to reduce the long-term risk of AI may be to limit the control any single government, or coalition of aligned institutions, can have over it.
There Is Another Way
The best safeguard against harm isn’t regulation—it’s decentralization. It’s the ability for many people, in many domains, to build, test, critique, and improve new systems in the open [9]. It’s the permissionless nature of the internet and open-source tools that lets great ideas outcompete bad ones—not behind closed doors, but in full view of the world.
We don’t need to slow down. We need to level up.
That doesn’t mean ignoring risk. It means confronting it the way every great leap forward has: with courage, iteration, and moral clarity—earned, not imposed.
If we empower people with AI, we empower them to create the future—not just be governed by it. That’s the real risk worth managing—and the real opportunity worth defending.
In Praise of the Movement—With Caution
To be clear, Effective Altruism has brought serious thinking to the table. It has elevated moral ambition, challenged short-termism, and inspired a generation to care deeply about the future. The community is diverse, full of good-faith thinkers who debate each other more than outsiders give them credit for [10].
But intention is not immunity from critique. The more influence a philosophy gains, the more scrutiny it deserves. Especially when it starts shaping the rules that govern what others can build, test, or release.
There’s a fine line between being careful and being controlling. Between alignment and constraint. Between safeguarding the future and gatekeeping it.
The people who want to save the future may be the ones who end up constraining it.
And if we’re not careful, we’ll wake up in a world where the most powerful tool humanity has ever created was kneecapped—not by malice, but by caution. Not by tyranny, but by trust in the wrong assumptions.
If we want a future worth living in, we need more than intelligence. We need the freedom to build with it. Let’s not surrender that freedom before the real future even begins.