Warren Buffett stood before tens of thousands of Berkshire Hathaway shareholders in Omaha in May 2025 and offered a blunt warning about artificial intelligence: the technology’s capacity for harm, he said, rivals that of nuclear weapons, and the “genie is out of the bottle.”
The 94-year-old investor grounded the comparison in something personal. He told the crowd he had recently watched a deepfake video of himself so realistic it could have fooled his own family. “If I was interested in investing in scamming, it’s going to be the growth industry of all time,” Buffett said, according to reporting by the Associated Press. The remark drew nervous laughter, but Buffett was not joking. He framed AI-powered deception not as a fringe concern but as a systemic financial threat, one that could equip criminals more effectively than any tool that came before.
The nuclear parallel Buffett chose deliberately
To convey the scale of what he sees coming, Buffett reached for the most consequential technology analogy of the 20th century. He compared the current moment in AI to the period just before the Manhattan Project, when a small group of physicists realized that nuclear chain reactions could produce weapons of unprecedented destructive power. Once that knowledge existed, no government could contain it.
The historical episode is well documented. On August 2, 1939, Albert Einstein and physicist Leo Szilard sent a joint letter to President Franklin D. Roosevelt warning that recent breakthroughs in nuclear physics made it feasible to build extraordinarily powerful bombs. The letter, preserved in U.S. Department of Energy archives, urged Roosevelt to accelerate American research before Nazi Germany could act first. It set in motion the chain of decisions that produced the atomic age.
Buffett’s point was direct: AI represents a similar inflection point, a technology whose benefits and dangers arrived together with no clear mechanism for separating the two.
One detail adds texture to the analogy. Though the 1939 letter is often attributed to Einstein alone, historical accounts credit Szilard with drafting much of the text; Einstein contributed his signature and global reputation. The collaboration between a theorist and a practical scientist sounding an alarm about a technology they helped create has a modern echo: some of the researchers who built today’s most powerful AI systems are now among the loudest voices warning about misuse.
Why Buffett focused on fraud, not killer robots
Notably, Buffett did not dwell on the science-fiction scenarios that dominate many AI debates, such as autonomous weapons or superintelligent machines turning on their creators. His concern was more immediate and more financial: the ability of generative AI to fabricate trust at scale. A convincing deepfake video, a cloned voice, a perfectly forged email from a CEO to a bank. These are tools that already exist, and Buffett argued they will only improve.
The warning carries particular weight given Buffett’s decades-long reputation as one of the sharpest risk assessors in global finance. He has historically been slow to comment on new technologies, famously avoiding tech stocks for years because he said he did not understand them well enough. That he chose to speak so forcefully about AI suggests he views the threat as falling squarely within his area of expertise: the ways trust and deception move money.
Buffett is not alone in drawing the nuclear comparison. Geoffrey Hinton, the computer scientist widely regarded as a pioneer of deep learning, resigned from Google in 2023 specifically to speak freely about AI’s existential risks, telling The New York Times he feared the technology could pose dangers on par with nuclear war. Yoshua Bengio, another foundational AI researcher, has made similar public statements and advised governments on containment strategies. But where Hinton and Bengio focus on longer-term existential scenarios, Buffett zeroed in on the near-term, practical damage AI can do to ordinary people and markets through fraud.
What Buffett did not address
The nuclear analogy, while vivid, has limits that Buffett did not explore in his reported remarks. The development of atomic weapons was concentrated in a handful of state-run programs requiring enormous resources, vast industrial infrastructure, and tightly controlled supply chains of fissile material. AI development looks nothing like that. It is distributed across thousands of private companies, open-source communities, and university labs worldwide. A talented graduate student with a laptop and cloud computing credits can fine-tune a large language model in ways that would have required a corporate research lab five years ago.
That decentralization makes the “genie out of the bottle” framing both more apt and more complicated. The genie is not in one bottle held by one government. It is in millions of bottles, and many of them are already open. Whether the policy tools that (imperfectly) managed nuclear proliferation, such as treaties, export controls, and international inspections, can translate to AI governance is a question that technologists and policymakers are actively debating. Buffett offered no prescription, only the diagnosis.
No full transcript or official video of Buffett’s AI remarks has been released publicly as of May 2026. This account relies on contemporaneous reporting from journalists present at the meeting. That reporting captures specific quotes and descriptions, but the complete scope of what Buffett said, including any qualifications or caveats, is not available for independent review.
Where the AI fraud threat stands now
Buffett’s prediction that AI-driven scamming will become a major industry is forward-looking, not a statement of current measured fact. He did not cite specific dollar figures, and no regulatory body has yet published an authoritative estimate of the total cost of AI-enabled fraud. The FBI’s Internet Crime Complaint Center reported that Americans lost more than $12.5 billion to online fraud in 2023, but that figure predates the widespread availability of the most capable generative AI tools and does not isolate AI-assisted schemes from traditional ones.
What is clear is that the building blocks Buffett described, such as convincing deepfake video, voice cloning, and AI-generated phishing, are no longer theoretical. They are commercially available, increasingly cheap, and improving rapidly. The question is not whether criminals will use them but how quickly the scale of damage will grow, and whether financial institutions, regulators, and law enforcement can adapt fast enough to contain it.
Buffett, who announced at the same May 2025 meeting that he would step down as Berkshire Hathaway CEO at year’s end, framed the AI warning as part of his broader farewell message to shareholders. It was, in effect, a parting piece of advice from a man who spent six decades evaluating risk for a living: the biggest new risk he sees is a technology that can make anyone believe anything.
*This article was researched with the help of AI, with human editors creating the final content.