Elon Musk told a federal jury in Oakland in April 2026 that artificial intelligence could wipe out humanity, delivering the stark warning from the witness stand during his high-profile lawsuit against OpenAI and its CEO, Sam Altman. Musk, who contributed roughly $50 million to help launch OpenAI in 2015, accused the company’s leadership of betraying the nonprofit mission he said he was promised when he wrote those checks.
“You can’t just steal a charity,” Musk told jurors, according to Washington Post reporting from inside the courtroom. The line captured the tone of his entire testimony: less corporate grievance, more moral indictment.
What Musk told the jury
Testifying before U.S. District Judge Yvonne Gonzalez Rogers, Musk walked jurors through OpenAI’s founding story. He described the organization as a nonprofit counterweight to the growing concentration of AI power at companies like Google, and said he provided early funding and lent his public reputation on the explicit understanding that OpenAI’s work would remain open and mission-driven.
That understanding, Musk argued, was shattered when OpenAI created a capped-profit subsidiary in 2019 and later moved toward a full for-profit conversion. The structural shift is the core of his complaint, filed in the U.S. District Court for the Northern District of California under case number 4:24-cv-04722-YGR. Musk alleges that OpenAI’s leaders broke written and verbal commitments about the organization’s structure and purpose.
His testimony also veered into broader territory. Musk warned jurors that poorly controlled AI development poses an existential threat to the human species, a claim he has made for years in public forums but one that carries different weight when delivered under oath in a federal proceeding. Whether that warning resonates with jurors as relevant context or strikes them as theatrical overreach could shape how they evaluate his credibility on the narrower legal questions.
What OpenAI has at stake
OpenAI is no longer the scrappy research lab Musk helped bankroll. Backed by billions in investment from Microsoft and others, the company has grown into one of the most valuable private companies in the world. Its flagship product, ChatGPT, has hundreds of millions of users, and its technology underpins tools across industries.
The company’s legal team has not yet presented its full defense in open court, but OpenAI has previously argued that its structural evolution was necessary to attract the capital required for cutting-edge AI research. The company has also suggested that Musk was aware of and even supportive of changes to the organization’s governance before he departed the board in 2018.
How OpenAI explains the gap between its original nonprofit charter and its current commercial ambitions will be central to the trial’s outcome. If the defense can produce governance documents or communications showing Musk endorsed the pivot, his “stolen charity” narrative could unravel on cross-examination.
The conflict of interest question
Musk is not a disinterested party in the AI industry. He founded xAI, a direct competitor to OpenAI, and has been building its Grok chatbot as a rival to ChatGPT. That business interest creates an obvious tension: is Musk suing to restore a broken promise, or to hobble a competitor that surpassed him?
Neither side has fully litigated that question yet, but it looms over the proceedings. Jurors will eventually have to decide whether Musk’s motivations matter, or whether the legal merits of his contract and fiduciary claims stand on their own regardless of what he might gain commercially from an OpenAI loss.
What the trial has not yet resolved
Several significant unknowns remain. No official transcript of Musk’s testimony has been released through the federal PACER system, so the public record of his exact words still depends on journalists’ notes from the courtroom. The specific remedies Musk is seeking (an injunction blocking the for-profit conversion, a return of his donations, monetary damages, or some combination) have not been fully detailed in public reporting from the trial.
Sam Altman has not yet testified, and his account of OpenAI’s founding commitments could differ sharply from Musk’s. The jury will ultimately weigh competing versions of conversations and agreements from nearly a decade ago, a task complicated by the fact that startup founders often operate on handshake understandings that look different in hindsight.
Judge Gonzalez Rogers will also play a gatekeeping role in determining how much of Musk’s broader AI doomsday testimony the jury can consider during deliberations. His warnings about humanity’s extinction speak to why he cares about OpenAI’s mission, but the legal questions turn on contracts, fiduciary duties, and corporate governance, not on whether AI will actually destroy civilization.
Why this case reaches beyond the courtroom
Whatever the verdict, the trial is already forcing a public reckoning with questions the tech industry has avoided answering clearly. Can a nonprofit pivot to a for-profit model worth hundreds of billions of dollars without accountability to its original donors and stated mission? How strictly should courts police that kind of mission drift?
Musk’s decision to make his AI safety warnings under oath, in a federal courtroom, stakes his personal credibility on claims he has previously made in more casual settings. If the jury sides with him, the ruling could force structural changes at OpenAI and set a precedent that chills similar nonprofit-to-profit conversions across the tech sector. If he loses, critics will have ammunition to cast his safety advocacy as a competitive weapon rather than a principled stand.
The trial continues in Oakland, with additional witnesses expected in the coming weeks. The outcome will not settle the scientific debate over whether AI truly threatens human survival, but it will test whether the legal system can hold powerful institutions to the promises they made when the stakes were still theoretical.
This article was researched with the help of AI, with human editors creating the final content.