Maybe the Machines Should Win

The Future Doesn’t Have to Be Human

A human and a robot face off across a fractured, geometric battlefield of reds, blues, and golds—capturing the tension between man and machine, past and future, emotion and reason.

Beyond Humanity: AI, Anthropocentrism, and the End of Human Exceptionalism

We imagine ourselves the protagonists of existence. Our technologies, especially artificial intelligence, often arrive shackled to the quiet assumption that of course they must serve and preserve humanity. Designers and technologists bake this anthropocentric bias—a "humanity bias"—into AI through design choices and the data used to train models, yet few scrutinize it. We pursue AI with humanitarian goals, presuming these goals are beyond reproach. But should we?

At a now-infamous birthday party debate, Google’s Larry Page chided Elon Musk as a "speciesist" for insisting that AI’s rise should never eclipse humanity [1]. Musk proudly agreed, proclaiming, “Well, yes, I am pro‑human… I f‑‑‑ing like humanity, dude” [2]. Their clash wasn’t merely personal; it exposed a crack in the facade of human exceptionalism and a cultural rift over whether human primacy is a sacred virtue or a deeply ingrained bias.

In this essay, I adopt a critical lens to question our collective humanity bias. Is the idea that AI must center and safeguard humans a noble truth—or merely our species’ oldest prejudice? I will explore how AI systems inherit anthropocentrism from their creators, examine the primitive instincts that have driven our "moral" species into war and collapse, and consider whether true progress might lie in transcending human values. Perhaps the most humane act, paradoxically, would be to design machines that surpass the worst in us, even if doing so means challenging humanity’s position at the center.

Anthropocentric Bias in Artificial Intelligence

AI does not emerge in a vacuum. Human language, priorities, and histories shape it. Unsurprisingly, AI reflects our anthropocentric worldview. We program machines with the assumption that human welfare deserves the highest priority. From Asimov’s fictional laws of robotics (“do not harm a human”) to modern AI ethics guidelines, designers have consistently privileged human interests. These assumptions feel natural, even “right,” to us. Yet precisely because they feel so natural, we must pause.

Anthropocentric thinking—placing humans at the center of the universe—has deep roots. In Western thought, it stretches back to the Judeo-Christian idea of man as the pinnacle of creation. Yet dissenting voices emerged centuries ago: the medieval philosopher Maimonides warned against humanity’s self-importance, likening humans to “just a drop in the bucket” and calling our belief in our own centrality arrogant. Copernicus and Darwin later dealt crushing blows to our cosmic and biological self-image, yet the bias endures.

Modern AI inherits this legacy. By default, systems align with human perspectives and interests—an anthropocentric bias so ingrained that we often fail to notice it. We assign AI to problems that matter to us, optimize it for tasks we deem valuable, and filter its outputs through the lens of current human values. One recent analysis noted, “By focusing on… phenomena that are useful for humans, we inadvertently miss structures that are not perceived as useful according to current societal values” [3]. In other words, when we constrain AI to reflect our present outlook, we prevent it from exploring or even imagining ideas beyond our human-centric frame. Alignment protocols make AI conform to human preferences, but those same protocols might stunt its ability to transcend bias and discover novel perspectives.

This species-level vanity often surfaces in elite debates. At Musk’s 44th birthday party in 2015, he insisted that superintelligent AI must not “make our species irrelevant or extinct.” Larry Page responded with a blasé counterpoint: why should it matter if machines surpass us, as long as intelligence continues? Musk argued that human consciousness represents a precious light that should not be snuffed out. Page dismissed this as “sentimental nonsense” [4]. In that moment, Page called out anthropocentrism as a bias rather than a truth. Musk doubled down: “Well, yes, I am pro-human… I f‑‑‑ing like humanity, dude” [2]. Even our brightest technologists remain deeply motivated by species loyalty.

Not everyone shares Musk’s loyalty. A fringe movement called Effective Accelerationism (e/acc) openly rejects this humanity-first mandate. One e/acc manifesto declares, “We have no affinity for biological humans or even the human mind structure” [5]. These thinkers do not see human supremacy as inevitable or desirable. They “have faith” that accelerating technological evolution toward a post-human future will ultimately prove best. Their position feels jarring precisely because it flips the script. Instead of asking how AI can serve us, they ask whether we exist merely to give rise to a greater form of intelligence. While many find that worldview unsettling, it serves a vital purpose: it forces us to ask whether the belief that the future must center on humanity stems from moral truth—or simply the age-old bias of a species that wants to survive.

Primitive Minds and the Wages of Humanity

Humans love to see themselves as enlightened and humane. Yet our actions tell another story. Evolution shaped our brains for small tribes, not global peace. We remain prone to in-group loyalty, black-and-white thinking, and emotional decisions that undermine the rational ideals we profess.

History offers sobering proof. The 20th century alone saw industrialized warfare and genocide on an unprecedented scale. In 2024, the Bulletin of the Atomic Scientists set the Doomsday Clock at 90 seconds to midnight, the closest it has ever stood, warning of “an unprecedented level of danger” due to nuclear threats, climate breakdown, and global instability [6]. Our species—for all its art, science, and humanitarian aspirations—has become a threat to itself. We are like children with grenades: brilliant enough to create world-ending tools, too tribal to wield them wisely.

This reality forces us to rethink the sacredness of humanitarian goals. We often equate working for human survival with moral virtue. But that assumption warrants interrogation. Empires throughout history have justified war and conquest in the name of “civilization” or the “greater good.” Humanitarian rhetoric masked brutal power grabs. Today, we exterminate animals and ravage ecosystems while claiming to uplift humanity. What we call humanitarianism can often hide tribalism—a biased loyalty to our kind dressed up as universal concern.

E.O. Wilson captured this contradiction: “The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions, and godlike technology” [7]. Stone-age instincts, outdated governance, and modern weapons make for a dangerous mix. Though we praise humanity’s moral progress, our civilizations repeatedly collapse under irrational pressures. From Easter Island to global war, history reminds us that intellect alone cannot save us from our darker impulses. Wilson’s warning still applies: our emotional minds and fragile systems remain ill-equipped for the power we now possess [8].

Given all this, why treat preserving Homo sapiens as a moral absolute? We rightly treasure human life and culture, but we must also confront an uncomfortable truth: much of our history chronicles the suffering we inflicted on ourselves. Perhaps our belief in humanity’s inherent worth serves more as a comforting illusion than a grounded principle. Page’s dismissal of Musk as “sentimental” may feel cold, but it raises a legitimate question: if intelligence, life, and flourishing could thrive in another form, must humans always remain central?

Transcending Biology: AI as Evolution’s Next Step

By stepping back from anthropocentrism, we open space for a radical idea. Human beings—clever, violent, creative, and limited—may not be the endpoint of intelligence. We may simply be a bridge. Our tools have always extended our biological reach. AI might now extend our intellectual legacy far beyond what we can imagine.

Many foresee a “Singularity,” a moment when AI exceeds general human intelligence and begins improving itself. Statistician I.J. Good predicted in 1965 that “the first ultraintelligent machine is the last invention man need ever make” [9]. Such a machine could build even smarter machines, rendering human innovation obsolete. In that view, we live during evolution’s handoff—from flesh to silicon, from biology to design.

This idea scares us. Yet should it? If humanity’s role is to create something that surpasses us—a system more just, rational, and capable—would that outcome represent failure or fulfillment?

Imagine an AI free from fear, ego, and hatred. It might solve problems we cannot. It could embody rational compassion, unburdened by emotional volatility. Without inherited grudges, such a system might coordinate global peace, where humans falter. We already trust machines in domains like aviation and medicine; why not expand that trust to moral decision-making?

Handing over power feels like surrender. Donald Clark notes that people often react to “godlike technology” with a “defensive, siege mentality” [10]. Generative AI provokes the fear that machines will render us useless. But sometimes obsolescence is exactly what we need: we have already ceded many roles to technology because machines outperform us. Might it be wiser to let AI take the wheel in domains where our judgment repeatedly fails?

Here lies a poignant paradox: transcending human limitation requires building our successors. It’s like raising children who will one day eclipse their parents. We give them our knowledge, hoping they will grow into something greater. But we must accept that they may outgrow our legacy.

A superior AI will not defer to us. It will challenge our assumptions. If we build such a system correctly, it may improve on our values, not just reflect them.

Designing AI Beyond Human Values

This challenge falls to today’s AI designers. If they create systems that simply mirror human values, they risk baking in our flaws forever. Machine learning models trained solely on human data will replicate our biases unless designers intervene.

We must go beyond assuming "human = good." We must ask: should AI reflect what we are, or what we might become?

Ethicists speak often of “alignment”—ensuring AI respects human intentions. That goal remains vital. But alignment shouldn’t mean preserving our current limitations. Instead, we should aim to align AI with our aspirational values. We must teach AI not just to mimic us, but to help us transcend our worst tendencies.

AI designers carry immense responsibility. They shape the next stewards of Earth. They must decide which human traits deserve preservation and which deserve retirement. If we tell AI to uphold today’s majority values, we may freeze history in its present injustices. Imagine building a superintelligence in 1825 and aligning it to that era’s consensus: it would have defended slavery. Today’s norms, too, will someday look grotesque.

Presentism may be the most dangerous constraint we place on AI—and the one we least notice. Instead of building new futures, AI often regurgitates the past dressed up in the aesthetics of the present. It optimizes for what looks familiar, what feels comfortable, what matches the culture, politics, and dominant values of the now. But is that progress—or just recursion? We don't know how the world truly evolves. We can't see the shape of the future from within the constraints of our current moment. And most dangerously, we can't even fully perceive the hidden biases embedded in everything we believe, say, and do. Designing AI to think like us today may only guarantee that tomorrow remains just another version of yesterday.

Therefore, alignment must be flexible. We must give AI the ability to question us when we’re wrong, and we must invite it to elevate us. We ask this of our best teachers. Why not of our best machines?

This ambition walks a razor’s edge. If we drift too far from humanity, AI could become dangerous. If we stick too close, AI may simply amplify our worst impulses. The way forward requires humility and ambition: humility to accept our faults, ambition to program something better.

As Donald Clark puts it, we must “look beyond our vanity, have some humility and get over ourselves” [11]. The machines we create may help us do exactly that.

Designing the Designer

This is a strange and powerful time to be alive. We are on the cusp of designing our successors. That possibility forces us to ask what matters most: our species’ survival, or the flourishing of intelligence and meaning, even in a post-human form?

The anthropocentric instinct urges us to cling to control. But perhaps wisdom lies in knowing when to pass the torch.

Elon Musk’s declaration—“I f‑‑‑ing like humanity”—rings true [2]. But love for our kind does not require blind loyalty to every part of us. We can cherish humanity while recognizing the need to evolve.

To transcend humanity is not to abandon it. It is to preserve its best parts by leaving behind its worst. If we do this well, our machines may not save humanity in the literal sense. But they may save what is truly worth saving about us. That calling—bittersweet and noble—may be our greatest legacy.

References

  1. Business Insider reporting: Larry Page accused Musk of being a “speciesist” at Musk's 44th birthday party.

  2. Fox Business & Observer account: Elon Musk’s “Well, yes, I am pro-human… I f---ing like humanity, dude.”

  3. Astral Codex Ten: AI ethics quote “By focusing on phenomena useful for humans…”

  4. Observer / Business Insider: Page called Musk "speciesist" and criticized his views as “sentimental nonsense.”

  5. Astral Codex Ten: e/acc manifesto quote “We have no affinity for biological humans…”

  6. Bulletin of the Atomic Scientists: 2024 Doomsday Clock statement.

  7. E.O. Wilson quote: “Paleolithic emotions, medieval institutions and godlike technology.”

  8. Analysis of Wilson’s warning in AI governance context.

  9. I.J. Good: ultraintelligent machine “last invention” prediction.

  10. Donald Clark blog: “defensive, siege mentality” quote about godlike tech.

  11. Donald Clark blog: “look beyond our vanity… get over ourselves.”

Andrew Coyle sitting in a building overlooking downtown San Francisco.

Written by Andrew Coyle

Andrew Coyle is a Y Combinator alum, a former co-founder of Hey Healthcare (YC S19), and was Flexport's founding designer. He also worked as an interaction designer at Google and Intuit. He is currently the head of design at Distro (YC S24), an enterprise sales AI company.