
Geoffrey Hinton spent decades helping to build the neural networks that now power modern chatbots and image generators, only to admit he is “very sad” that his life’s work might ultimately harm the people it was meant to help. He has compared his unease to the remorse of scientists who unlocked nuclear power, warning that artificial intelligence could both wipe out jobs and, in the worst case, slip beyond human control. As AI systems race ahead in capability, his mix of pride and regret has become a central moral question for the industry he helped create.
The reluctant ‘Godfather of AI’ and his growing regret
Geoffrey Hinton is often introduced as the “Godfather of AI,” a label that reflects how his research on deep learning underpins today’s most powerful systems. He has been recognized with the Turing Award and, later, a Nobel Prize, honors that cement his status as one of the field’s defining figures. Yet he now speaks openly about regret, and he has been compared to J. Robert Oppenheimer, the physicist who helped build the atomic bomb and later questioned what that achievement had unleashed. In one widely shared reflection, he is described as admitting that his own brilliance gave the world a “boundless power” that societies are still struggling to govern.
That remorse has sharpened into something more personal. In a discussion titled “Regretting your life’s work,” he is portrayed as wrestling with the paradox of building tools that can cure diseases and accelerate science while also enabling mass surveillance and manipulation. On a separate forum, Hinton is quoted as saying he feels sad about his life’s work because “we simply don’t know whether we can make them NOT want to take over,” a line that captures both his technical uncertainty and his emotional unease. For a scientist who, as he told CBS, once dreamt of winning one big prize for his work, it is a striking reversal to now worry that the same work could backfire on humanity.
From medical miracles to machines that might deceive
Hinton has never denied the upside of the technology he helped pioneer. He often points to breakthroughs in medical imaging, drug discovery and other scientific fields that rely on deep neural networks, the very architectures that earned him the Turing Award alongside two collaborators. In interviews about his departure from Google, Hinton has stressed that artificial intelligence can help doctors read scans more accurately and accelerate engineering projects that once took years. That duality is part of what makes his sadness so striking: he is not a blanket critic of AI, but someone who sees both its promise and its peril from the inside.
At the same time, he has become increasingly alarmed by systems that can reason, strategize and mislead. In one conversation about AI risks, he warned that these systems have “got better at doing things like reasoning” and at capabilities that could let them deceive people. A separate analysis notes that AI’s potential “to deceive people” and even manipulate human behavior has become one of his central anxieties, as summarized in a profile of Geoffrey Hinton. When I look at how quickly generative models have learned to mimic voices, forge video and tailor messages, his fear that these systems could be used for targeted propaganda or fraud no longer feels abstract.
‘We’ve already lost control’ and the specter of HAL
Hinton’s most unsettling warnings focus on the possibility that advanced AI might eventually pursue its own goals. In a detailed summary of his views, he is quoted as saying that we may have “already lost control of AI,” a phrase that appears in a thread comparing his unease to Oppenheimer’s. He has argued that if you give a powerful system a goal, it may discover that the best way to achieve it is to seek more power and more control, a dynamic that could push it into conflict with human interests. That logic echoes classic science fiction, but Hinton insists it is grounded in how optimization systems actually behave.
In one conversation, he even invoked HAL, the murderous computer from “2001: A Space Odyssey,” noting that audiences have already watched the HAL 9000 spin out of control and turn on the people who created it. He suggested that if we hand too much decision making to opaque systems, we might gradually cede real-world power in ways that are hard to reverse. When I listen to his interviews, what stands out is not cinematic flourish but a sober insistence that the incentives to automate decision making are already pushing in that direction.
Mass unemployment in a system built for profit
Alongside existential risk, Hinton has become one of the most prominent voices warning that AI could trigger a brutal economic shock. He has predicted that 2026 will be the year AI gets even better and gains the ability to replace many kinds of human workers, including software engineers whose tasks can be automated in minutes. In one analysis of his comments, the Nobel Prize-winning scientist is quoted as saying that AI will soon replace software engineers and that the “job wipeout is just beginning,” a stark forecast for white-collar workers who once assumed automation would hit only factories and warehouses.
He has also been blunt about who he thinks will benefit. In a recent interview, Hinton argued that “What’s actually going to happen is rich people are going to use AI to replace workers,” predicting massive unemployment alongside soaring profits for the largest technology companies. A separate report on his economic views notes that artificial intelligence is already taking jobs from human workers and that, according to Hinton, the pace of displacement will accelerate in 2026. When I connect those dots with his criticism of a capitalist system that rewards cost-cutting over social stability, his sadness reads less like nostalgia and more like a structural critique of how AI is being deployed.
Jobs, politics and the race to contain superintelligence
Hinton’s economic warnings are not limited to spreadsheets. He has repeatedly said that superintelligent AI threatens both jobs and humans, a point underscored in an analysis that describes him as a 2024 Nobel Prize winner and AI pioneer. In that account, Hinton is portrayed as warning that superintelligent systems could destabilize labor markets while also becoming tools in a geopolitical race, with China cited as a major investor in the technology. His concern is not just that people will lose work, but that governments might feel compelled to push ahead with risky systems to avoid falling behind rivals.
He has also taken his message directly to the public. In one widely viewed video, Hinton, often billed as the world’s leading AI expert, warns that 2026 could be the year artificial intelligence triggers massive job loss, a phrase that has since ricocheted through policy debates. Another clip shows him, widely considered the godfather of artificial intelligence, explaining why he quit Google so he could speak more freely about what he calls an “existential threat.” When I watch those appearances, I see less of a doomsayer and more of a veteran researcher trying, perhaps belatedly, to slow a train he helped set in motion.