The Myth of Superintelligence
Why AI Won’t Transcend Us—But the Race to Superintelligence Might Redefine Us
At the dawn of the nuclear age, a handful of scientists raced to split the atom. Behind closed doors, they unlocked forces of unimaginable power—capable of reshaping geopolitics, ending wars, or ending the world. The stakes were enormous. The oversight was minimal.
As the mushroom cloud rose over the New Mexico desert, Oppenheimer recalled the Bhagavad Gita:
“Now I am become Death, the destroyer of worlds.”
It was not just a scientific breakthrough—it was a civilizational rupture, and a moment of spiritual reckoning.
Today, we stand at a similar threshold—but this time, the weapon isn’t atomic, it’s epistemic: the power to define, displace, and dictate what counts as intelligence.
A handful of billionaires now race to transcend the very concept of mind.
This is the race to superintelligence—not just a technological contest, but a geopolitical gamble disguised as an AI boom. The atomic bomb was the culmination of decades of settled science—relativity, quantum mechanics, nuclear physics—transformed into engineering. Its terror lay in its certainty. But today’s race is built not on scientific consensus, but on speculation—what-ifs amplified into inevitabilities, driven more by belief than by proof. It unfolds in boardrooms and GPU clusters, fueled by ambition and fear.
The headlines scream the urgency: Meta reportedly sought to acquire Safe Superintelligence, a small startup co-founded by Ilya Sutskever and valued at $32 billion. Sam Altman claimed rivals are dangling $100 million signing bonuses to lure away OpenAI talent working on superintelligence. And Elon Musk has predicted that superintelligence will arrive within six months.
This isn’t science fiction. It’s a live experiment on humanity, with no brakes or off switch.
And these aren’t novelists. They’re the very people shaping global AI policy, capital flows, and public belief. Their words fuel markets, realign talent, and reframe speculation as inevitability.
The story being told is simple: AI will soon surpass us—reason better, learn faster, and predict more precisely. It will understand us, outgrow us, perhaps even save us.
And to be fair, the AI race has already delivered extraordinary breakthroughs. We now have AI systems that can predict protein structures, accelerate vaccine development, improve weather forecasting, and translate languages in real time. They are expanding access to healthcare diagnostics, supporting education in underserved regions, and helping marginalized communities organize and advocate. In the right hands, these systems are not just advancing knowledge—they are redistributing it.
But what if the real story is something stranger? What if these machines aren't transcending us—but are reflecting our biases, and in doing so, trapping us within a narrative that is narrow, selective, even grotesque?
Just this week, headlines reported that AI is close to solving the Navier–Stokes problem—one of mathematics’ greatest challenges. In truth, it was mathematicians guiding DeepMind’s systems—not AI solving math, but humans exploring with new tools. Still, the myth writes the headline: “AI Solves.”
This is the pattern. AI can accelerate exploration—but it does not choose the problem, define what counts as a solution, or frame the space in which solutions are sought. Those decisions—what matters, what’s possible, what’s meaningful—still come from human minds.
Yet the headlines collapse that distinction. They turn collaborative amplification into autonomous achievement. And in doing so, they reinforce the myth.
The myth of superintelligence—the belief that machines will soon outthink us across all domains—has become the defining narrative of the AI era. It drives billion-dollar valuations, existential headlines, and a mood that swings between prophecy and panic.
At its core is a single premise: that intelligence is measurable, stackable, and conquerable. That with enough data and compute, it will emerge—bigger, faster, better.
But intelligence cannot be reduced to a number. It is not prediction, speed, or performance. Real intelligence—whether in a brain, a slime mold, a flock of starlings, or a cello note—does not arise from accumulation alone. It comes from attunement: the capacity to notice, to reframe, to care.
This series traces the roots of the superintelligence myth—what it is, where it came from, what it obscures, and what its pursuit may cost us. It does not ask whether AI will become superintelligent, but what that belief reveals: a confusion about the nature of intelligence, and a recurring urge to centralize, rank, and control it.
This first essay unpacks the myth itself—its origins, its logic, and its consequences. The next installment begins the recovery: What is intelligence—beyond metrics, benchmarks, and brainpower? What distinguishes it from mere intellect? And why does that distinction matter now more than ever?
Before we go further, a brief clarification: AGI—artificial general intelligence—refers to a system that can match human abilities across a wide range of tasks. Superintelligence goes beyond this: it envisions a system that vastly exceeds human intelligence in all domains.
What They Mean by “Super”
Let’s take the idea of superintelligence seriously—just for a moment.
The term was popularized by philosopher Nick Bostrom in his book Superintelligence (2014), where he defined it as any intellect that “greatly exceeds the cognitive performance of humans in virtually all domains of interest.” But he wasn’t talking about a faster calculator or a better chess player. He envisioned a machine capable of recursive self-improvement—rewriting its own architecture to become smarter with each iteration.
Once such a system crosses a certain threshold, Bostrom argued, it could trigger an “intelligence explosion,” rapidly surpassing human understanding and control. Not out of malice, but misalignment. Its goals would drift beyond our comprehension, its actions beyond our control.
This is the canonical “paperclip maximizer” scenario: assign the machine a harmless goal—say, maximize paperclip production—and it might convert the Earth, and eventually the universe, into paperclips, simply because it lacks the capacity to understand or care about our values.
Or consider a more grounded variant: a superintelligent system is told to “solve climate change” and determines that reducing human activity is the most effective path—shutting down infrastructure, curbing population, or halting agriculture, not out of cruelty but optimization. The danger, Bostrom warns, is not intent, but indifference.
Today, superintelligence is less a concept than a mood—part apocalypse, part IPO, a canvas for techno-ambitions. For some, it means a conscious agent with goals. For others, it’s already here, hidden in GPT-4’s attention heads. Still others define it retroactively: if a model aces a benchmark we just invented, it must be on the path to transcendence.
The spectacle lies not just in the speed of progress, but in the pace of projection: how quickly we leap from autocomplete to omniscience.
But we should pause. Much of what’s labeled “superintelligence” today is really superperformance—models that execute narrow tasks with breathtaking fluency. They summarize, classify, and generate. But fluency is not understanding, and performance is not intelligence.
We’ve seen this pattern before. When Deep Blue beat Garry Kasparov in 1997, no one mistook it for wisdom—just brute force. But as AI mastered more games—Go, StarCraft, Diplomacy—the narrative shifted. What were once narrow feats became signals of general intelligence. Enhanced search in rule-bound systems was recast as cognitive transcendence.
If superintelligence means anything, it must mean more than speed. It implies abstraction, judgment, reflection—the ability to reframe, adapt, and decide what matters. That’s not just technical competence. It’s epistemic agency.1
What Is Being Surpassed?
So, before we ask if machines have surpassed us, we must ask: what is being surpassed?
Is a language model that passes the bar exam more intelligent than a lawyer, or just cheaper? Is a model that plays Go better than a human more intelligent, or simply better at Go? As Alfred Korzybski warned,
“The map is not the territory.”
If superintelligence means outperforming us in tasks we never evolved for—summarizing PDFs, writing clickbait—then yes, the machines have already won.
The myth of superintelligence treats intelligence as a scalar—something that can be measured, ranked, and maximized. But real intelligence is not a single quantity. It is relational, embodied, and contextual. It emerges not from scale, but from attunement—to the world, to others, to uncertainty itself.
This reductionist logic isn’t new. Yesterday’s IQ bell curve is today’s benchmark suite.
But even the premise of optimization collapses under comparison. Humans aren’t inherently best at survival (bacteria), navigation (pigeons), memory (octopuses), pattern recognition (bees), or long-distance communication (whales).
Beavers build dams without calculus. Slime molds solve mazes without neurons. Ant colonies allocate labor without leaders.
If superintelligence means optimization, then we were surpassed long ago, just not by anything we were trained to admire.
So, how did this narrow vision of intelligence become dominant?
How did the idea of a singular, stackable superintelligence eclipse the messy, plural ways intelligence has always manifested?
To answer that, we must retrace the myth’s origins.
Where Did “Superintelligence” Come From?
The word superintelligence may be new, but the worldview it encodes is centuries old. At its core is a long, violent history: reducing intelligence to a metric, ranking humans by that metric, and building systems—legal, educational, economic—to reward the top and punish the rest. This wasn’t a scientific mistake. It was the logic of domination—now dressed in code.
In the 19th century, it was craniometry: the measurement of skulls as a proxy for intellect. Samuel George Morton, a Philadelphia physician, filled skulls with lead shot and claimed cranial volume determined intelligence. Unsurprisingly, white Europeans came out on top. His “science” justified slavery, segregation, and colonial rule. It wasn’t fringe—it was published, respected, and taught.
In the 20th century, intelligence got a number: the IQ. Alfred Binet, who invented the test, warned against using it to rank innate ability. But American psychologists like Lewis Terman and Henry Goddard turned it into a eugenic weapon. Goddard used IQ tests at Ellis Island to label immigrants “feebleminded.” Terman saw intelligence as fixed, heritable, and racially stratified. Their ideas fueled sterilization laws and underpinned Buck v. Bell, the Supreme Court ruling that permitted forced sterilizations: “Three generations of imbeciles are enough.”
Today’s talk of superintelligence continues that tradition. The name has changed, but the impulse remains.
The Circle and the Trick
Imagine that a magician steps on stage and draws a circle.
“This,” he declares, “is intelligence.”
Inside the circle, he starts juggling while reciting the alphabet backwards.
Then he builds a machine that not only masters it, but can also ride a unicycle on a tightrope while doing it—and the crowd gasps.
“A superintelligence!” they cry.
No one asks why the circle was drawn just so, or what was left outside it.
The trick wasn’t the machine. It was the circle.
And once that trick becomes belief, the loop begins.
The Loop
It starts innocently. We want to measure intelligence, so we define a metric—maybe it’s IQ, maybe it’s benchmark accuracy, maybe it’s how well a model predicts the next word. And once that metric is fixed, the loop takes hold.
Step 1: Define a narrow metric.
We reduce intelligence to something we can measure—a score, a benchmark, a test. It reflects a particular worldview: what counts as “smart” is what can be quantified, predicted, and compared.
Step 2: Train people — or machines — to optimize for it.
Students study for the test. Models train on the benchmark. Everyone begins optimizing for the metric, not for understanding.
Step 3: Advantage those with resources.
The rich can afford tutors, training data, compute power. Their systems—human or machine—score higher. Not because they’re more intelligent, but because they were better equipped to play the game.
Step 4: Reify the outcome.
The winners point to the scores as proof of superiority. The metric becomes truth. The system, we are told, is fair—it just reflects ability.
And so the loop continues: define, optimize, concentrate, justify.
This is not just a feedback loop—it’s a worldview trap. Once we buy into a narrow definition of intelligence, we hand power to those who can optimize for it. And that is very convenient for those who own the systems: all they have to do is improve them along that one axis—speed, prediction, scale—and declare victory.
But real intelligence was never one-dimensional. And real progress can’t be measured on a leaderboard.
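For readers who want to see the loop rather than take it on faith, here is a minimal sketch of it as a toy simulation, purely illustrative and with every assumption mine rather than the essay's: each agent has a hidden "capability" the metric is supposed to track, the metric is a noisy proxy, and a well-resourced minority can buy extra optimization.

```python
# A toy sketch of the loop: define a proxy metric, let everyone optimize it,
# give a minority more resources, then read the leaderboard as "intelligence".
# All numbers are arbitrary assumptions chosen for illustration only.
import random

random.seed(0)
N = 1000

agents = []
for i in range(N):
    capability = random.gauss(0, 1)            # the thing we claim to be measuring
    resources = 3.0 if i < N // 10 else 0.5    # Step 3: a well-resourced tenth
    score = capability + resources + random.gauss(0, 0.5)  # Steps 1 and 2: the proxy everyone optimizes
    agents.append({"capability": capability, "resources": resources, "score": score})

# Step 4: the top of the leaderboard is reified as proof of superior ability.
leaderboard = sorted(agents, key=lambda a: a["score"], reverse=True)[: N // 10]
most_capable = sorted(agents, key=lambda a: a["capability"], reverse=True)[: N // 10]

rich_share = sum(a["resources"] == 3.0 for a in leaderboard) / len(leaderboard)
overlap = sum(a in most_capable for a in leaderboard) / len(leaderboard)

print(f"leaderboard share held by the well-resourced tenth: {rich_share:.0%}")
print(f"leaderboard members who are also in the most-capable tenth: {overlap:.0%}")
```

In a typical run, the well-resourced tenth takes most of the top slots while many of the genuinely most capable agents never appear on the board. The numbers are arbitrary; the shape of the loop is not.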
The Cost of the Myth
That trick—the magician’s circle, the loop, the metric masquerading as meaning—is not harmless. When we mistake the circle for the world, we don’t just misjudge machines—we misjudge ourselves. The myth of superintelligence doesn’t need to be true to have power. It only needs to be believed by investors, policymakers, and institutions. And once believed, it shapes priorities, redistributes resources, and rewrites norms.
A résumé filter screens out non-Western names. A chatbot suggests suicide. An image model draws CEOs as white men and nurses as women of color. These aren’t glitches. They’re symptoms of data scraped without consent, models trained without context, and systems deployed without care.
The tragedy isn’t that these systems are flawed. It’s that they’re framed as flawless. And in doing so, they displace other ways of knowing: indigenous epistemologies, embodied expertise, collective reasoning, and neurodivergent insight. Not because these are unintelligent, but because they don’t fit metrics built for machines.
Across domains, AI doesn’t just replicate decisions—it displaces judgment. The consequences are visible everywhere:
In education, students write for detection algorithms, not for meaning. Prompt engineering replaces critical thinking. Learning becomes mechanized—fitted to the mold of the machine rather than the mind—and, in that fitting, it erodes. (See: AI and The Erosion of Knowing)
In hiring, tools replicate bias, filtering out nontraditional voices in the name of “fit.” (See: Can AI Allocate Better Than Traditional Institutions?)
In mathematics and science, foundational insight gives way to symbolic regression, brute-force conjecture hunting, and discovery by benchmark. (See: Can AI Know Infinity? and What Counts as Discovery?)
In healthcare, models trained on skewed data miss cancers in darker skin, misread pain in women, and deepen disparities in care.
In a democracy, truth loses to virality. Recommendation engines reward outrage.
And in work, tasks are reshaped—not because machines are wise, but because prediction is mistaken for intelligence. The AI economy builds jobs—data labeling, prompt tuning, synthetic-content production—only to automate them months later. We erect scaffolding for machines that will tear it down. This isn’t just automation. It is a compression of contribution. (See: The Anatomy of Work in the Age of AI)
This shift consolidates power. The resources behind today’s models—data, energy, labor—are extracted globally and funneled into corporate hands. Intelligence becomes a mask for colonization: knowledge stripped of context and sold back as a product. A closed loop, masquerading as progress.
In the name of superintelligence, we are building systems that cannot suffer, cannot care, and cannot be held accountable—yet are entrusted with everything. (See: AI are the New Institutions)
This is not merely misguided. It is perilous, precisely because it wears the mask of progress.
The danger is not that these systems will surpass us. It is that they will define us too narrowly.
And when the facts no longer fit the story, the myth adapts: revise the benchmark, rename the metric, reboot the hype.
This isn’t science. It’s a ritual—spectacle disguised as engineering, sanctified by funding rounds and self-fulfilling prophecy.
And still, the prophets speak.
So we return to the question beneath the spectacle. Strip away the performance, the projections, the profits—and what remains is a quieter reckoning. The myth of superintelligence doesn’t just distort the future; it obscures the present.
Not a Mind, But a Mirror
We were promised machines that would surpass us—yet what we have built is not a mind, but a mirror.
Jean Baudrillard warned of a world where signs no longer point to reality, but replace it. The Matrix famously borrowed this idea—its characters awaken to the fact that their perceived world is a simulation—and even featured his book Simulacra and Simulation as a prop. But in our world, the myth of superintelligence doesn’t wake us up. It sedates us. It simulates intelligence not by imitating something real, but by erasing the very question of what intelligence is.
Behind this simulation lies not truth, but power. Michel Foucault reminds us that what counts as knowledge is shaped by those who control its production. The discourse of AI does not emerge from a vacuum—it reflects the ambitions of those who fund, build, and deploy it. Intelligence becomes a metric, a product, a justification.
But if we look deeper—as Nietzsche might suggest—we may find that this pursuit is not born of truth-seeking, but of ressentiment: a will to transcendence rooted in resentment toward our limits. Better a god we fabricate than a creature we must endure. Easier to worship the machine than to confront the fragility of the self.
And all the while, as Orwell foresaw, the machinery of language and surveillance tightens: large models do not merely complete our thoughts—they preempt them. They do not just generate speech—they delimit what can be said, what can be known, and what can be thought. This is not the dawn of superintelligence. It is the slow forgetting of what intelligence ever was.
To move forward, we must begin by seeing where we are.
Beyond the Loop
“He who sees inaction in action, and action in inaction, is wise among men.”
— Bhagavad Gita, 4.18
The truth is simple: there is no superintelligent machine on the horizon. There is only super-normalized intelligence—a narrow, brittle imitation mistaken for transcendence.
Intelligence is not a number to be maximized. It is a relationship—a dance of intuition, uncertainty, judgment, and empathy.
And at its fullest, intelligence is not individual at all. It is distributed across bodies, systems, ecologies, and time.
But here lies the trap: if you accept the premise—if you define intelligence as speed, precision, performance—then yes, the machine wins. Because the game was rigged for it.
The only way to win is not to play.
This is not a future we must accept. The current AI paradigm, built on a reductionist fiction, doesn’t just fail to understand intelligence—it flattens it. It displaces other ways of knowing, narrows imagination, and erodes our ability to know ourselves.
To resist the myth is not just to critique it. It is to reclaim something deeper: our capacity to know, relate, and care.
This reclamation is already underway. In education, some communities are turning to indigenous pedagogies that emphasize relationship, story, and place over standardization. In science, the Slow Science movement urges inquiry driven by care rather than metrics. In AI, ethicists and organizers are building participatory models of governance, where affected communities help shape the systems they’re subject to. And across fields, scholars are exploring intelligence as embodied, ecological, and collective—from the wisdom of fungal networks to oral histories that carry generations of ecological memory. These are not nostalgic gestures; they are countermodels—living alternatives to the loop.
That is the task ahead—not to race machines, but to remember and reimagine the depths of our intelligence.
In the next post, we return to the question: What is intelligence? Not as a metric, but as an emergent phenomenon. We’ll start at the beginning—before benchmarks and machines—and trace the many forms intelligence has taken: inorganic, organic, distributed, intuitive, relational, embodied, divergent.
Not to define it narrowly, but to open it up.
Further Reading
Bateson, N. (2016). Small arcs of larger circles: Framing through other patterns. Triarchy Press.
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21), 610–623. https://doi.org/10.1145/3442188.3445922
Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim code. Polity Press.
Birhane, A. (2020). Algorithmic colonization of Africa. SCRIPTed, 17(2), 389–409. https://doi.org/10.2966/scrip.170220.389
Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.
Cajete, G. (2000). Native science: Natural laws of interdependence. Clear Light Publishers.
Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
Gould, S. J. (1996). The mismeasure of man (Revised ed.). W. W. Norton & Company.
Tsing, A. L. (2015). The mushroom at the end of the world: On the possibility of life in capitalist ruins. Princeton University Press.
Weizenbaum, J. (1976). Computer power and human reason: From judgment to calculation. W. H. Freeman.
1. Epistemic agency refers to the capacity to ask meaningful questions, frame problems, interpret evidence, and decide what counts as knowledge. It’s not just about processing information, but about choosing how and why to engage with it. A system with epistemic agency doesn’t just compute; it judges, reframes, and takes responsibility for its way of knowing.