
For years, Silicon Valley has sold the public a story in which superintelligent machines are just around the corner, poised to eclipse human minds and maybe even threaten our survival. A growing group of critics now argues that this narrative is not just premature but deeply misleading, and one of the sharpest is a British philosopher who insists that the tech giants have misunderstood what intelligence actually is. Her challenge lands at a moment when investors, regulators, and ordinary users are all trying to work out whether today’s AI boom is a revolution or a mirage.
At the center of this debate is Dr Victoria Trumbull, a philosopher of mind whose work cuts directly against the confident predictions of AI bosses. Rather than treating intelligence as a single scale that machines can climb, she argues that human thinking is woven from consciousness, memory, and social meaning in ways that current systems do not touch. Her critique does not deny that AI is powerful, but it does question whether the industry’s favorite thought experiment, a runaway “superintelligence,” has much to do with the systems we are actually building.
Meet the philosopher puncturing Silicon Valley’s favorite myth
Dr Victoria Trumbull has become a rare kind of public philosopher, one willing to confront the biggest names in technology on their own terrain. In recent interviews she has been introduced, bluntly, as the thinker who says big tech has got it wrong on superintelligence, a label she has accepted because it captures her central claim: the industry’s story about machine minds is conceptually confused. Trumbull’s training in philosophy of mind and language gives her a different starting point from engineers who see intelligence as pattern recognition at scale, and she has pressed that advantage by asking what it would really mean for a machine to “surpass” human intelligence in the first place, rather than simply to perform narrow tasks faster.
Her growing profile reflects how hungry the public is for voices that can decode AI rhetoric without being captured by it. One widely shared profile invited readers to “meet Dr Victoria Trumbull” as a counterweight to the breathless forecasts coming from the likes of Sam Altman and Elon Musk, two of the most visible evangelists for a future dominated by artificial minds. The piece stressed that Trumbull is not a reflexive Luddite but a careful analyst who believes the current conversation has been framed on terms that favor the companies building these systems. It also highlighted how her questions about what counts as intelligence cut against the way Altman and Musk talk about inevitable machine supremacy, positioning her as a necessary skeptic in a debate skewed toward hype.
Why she says “superintelligence” is a philosophical mistake
Trumbull’s core argument is that the word “superintelligence” smuggles in assumptions that do not survive basic philosophical scrutiny. She points out that talk of a single scale of intelligence, on which humans sit at one point and hypothetical AIs can simply climb higher, ignores the messy reality of how cognition works in biological creatures. In a detailed discussion of her work, she is described as a researcher on consciousness, AI, memory, and more who reminds readers to be cautious about claims that machines will straightforwardly challenge human intelligence. Those claims, on her account, rest on a thin and often undefined notion of what intelligence even is, one that strips away embodiment, emotion, and culture in favor of test scores and benchmarks.
That same coverage emphasizes that Trumbull is interested in the boundary between science and philosophy, and in how definitions of consciousness are built. She argues that when engineers talk about “emergent” superintelligence, they are often stepping over that boundary without realizing it, treating speculative metaphysics as if it were settled engineering. A separate account of her work frames this as part of a longer tradition that asks where science ends and philosophy begins, a question that has preoccupied great thinkers since the Age of Reason and that remains live whenever researchers claim to have captured the essence of the mind in code. By insisting that these conceptual foundations matter, Trumbull is effectively saying that the superintelligence story is not just technically uncertain but philosophically undercooked.
Engineers who quietly agree the hype is off
Trumbull is not alone in thinking that the current wave of AI rhetoric has outrun the evidence. Some of the sharpest criticism now comes from technologists who work with these systems every day and see their limitations up close. Stephen Klein, founder and CEO of Curiouser.AI, who also teaches at Berkeley, has argued in a widely circulated essay that we are not on the verge of superintelligence at all, and that the hype obscures how brittle today’s models remain. Klein points to the difficulty current systems have with even modest reasoning tasks, and he stresses that scaling up data and compute has not magically produced robust understanding, a view that dovetails with Trumbull’s insistence that raw processing power is not the same thing as genuine thought.
Even some of the field’s most celebrated pioneers have begun to push back against the idea that current machine learning is on a straight path to godlike minds. Yann LeCun, a leading figure in deep learning, has said in an interview that while machine learning is great, the notion that simply scaling up existing architectures will suddenly produce human-level intelligence or beyond is, in his words, “absolutely not” how intelligence works. LeCun’s skepticism about scaling alone as a route to artificial general intelligence aligns with Trumbull’s more philosophical critique: both suggest that something important is missing from the models that dominate today’s AI landscape, whether you call that missing piece common sense, world models, or a richer account of consciousness.
Why philosophers are being pulled into AI’s biggest arguments
One reason Trumbull’s voice carries weight is a broader recognition that the deepest questions raised by AI are not purely technical. When Geoffrey Hinton, the deep learning pioneer who left his post at Google to speak more freely about AI, talks about existential threats, he distinguishes between immediate harms and more speculative long-term risks, and that distinction has prompted some commentators to argue that philosophers are better equipped than AI experts to handle the latter. A recent analysis of these debates explicitly calls for philosophers, not just engineers, to grapple with the biggest existential questions, invoking Hinton’s own reflections to show how quickly discussions of AI risk slide into territory that looks more like ethics and metaphysics than computer science.
That call for philosophical engagement is not just about abstract puzzles; it is also about power. Moral philosopher Émile Torres has argued that many Silicon Valley elites, including figures like Musk, Peter Thiel, and Altman, are guided by a worldview sometimes labeled “TESCREALism” that treats the long-term future of digital minds as more important than the lives of present-day humans. In a detailed critique of this ideology, Torres suggests that these tech capitalists do not care about humans in any ordinary moral sense, particularly when they talk about AI and human extinction, because their focus is on preserving a speculative cosmic destiny rather than addressing concrete injustices. Trumbull’s insistence that we interrogate the concepts of intelligence and consciousness before accepting superintelligence narratives fits neatly into this broader push to challenge the moral and political assumptions baked into Silicon Valley’s grand stories.