Elon Musk, under cross-examination in an Oakland federal courtroom, acknowledged that his AI company xAI used OpenAI’s models during the development of its chatbot Grok. The testimony, delivered during Musk’s own lawsuit accusing OpenAI of abandoning its nonprofit mission, handed opposing counsel a potent line of attack: the man suing OpenAI for betraying open-science ideals may have quietly relied on that same company’s technology to build a competitor.
The admission, reported by the Associated Press from inside the courtroom during proceedings in late May 2026, has injected new volatility into a trial already freighted with questions about who controls the future of artificial intelligence and on what terms.
Inside the courtroom clash
The case, Musk v. Altman et al., No. 4:24-cv-04722-YGR, is being heard by Judge Yvonne Gonzalez Rogers in the U.S. District Court for the Northern District of California. Musk filed the suit alleging that OpenAI CEO Sam Altman and other leaders steered the organization away from its founding charter as a nonprofit dedicated to developing AI “for the benefit of humanity” and toward a for-profit structure that enriched insiders. OpenAI has countered that the restructuring was necessary to raise the billions of dollars required to remain competitive in frontier AI research.
During cross-examination, OpenAI’s attorneys pressed Musk on xAI’s own development practices. According to courtroom reporting from The Guardian, Musk confirmed that xAI had trained Grok using OpenAI’s models. The precise wording of his testimony has not yet appeared in a publicly available transcript, and court transcripts typically become accessible through PACER after a delay. But reporters present in the courtroom described the exchange as a significant moment in the proceedings, one that visibly shifted the dynamic between Musk and the opposing legal team.
Judge Gonzalez Rogers also intervened at another point to cut short Musk’s testimony when he began warning about existential risks posed by artificial intelligence. She directed the attorneys and the witness back to the specific agreements and governance arrangements that defined OpenAI’s early years, signaling that the trial would stay focused on contractual and corporate governance questions rather than broader philosophical debates about AI safety.
Why the Grok revelation matters
The practice Musk appears to have described is sometimes called “model distillation.” In simple terms, it means feeding one AI system’s outputs into another system as training data, allowing the second model to absorb patterns, reasoning styles, or knowledge from the first. It is a known technique in the AI industry, but it is also one that OpenAI’s terms of use explicitly prohibit when the purpose is developing a competing model.
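In schematic terms, a distillation pipeline looks something like the sketch below, written in Python purely for illustration. The function names, canned responses, and example prompts are hypothetical; nothing here describes xAI’s or OpenAI’s actual systems.

```python
# Minimal sketch of API-output distillation, for illustration only.
# The helpers and data below are hypothetical stand-ins; they do not
# reflect xAI's or OpenAI's actual pipelines.

def query_teacher(prompt: str) -> str:
    """Stand-in for a call to a proprietary model's public API.
    A real pipeline would send `prompt` over HTTPS and return the
    model's completion; here we return a canned string."""
    return f"[teacher completion for: {prompt}]"

def build_distillation_dataset(prompts: list[str]) -> list[dict]:
    """Pair each prompt with the teacher's output. The resulting
    (prompt, completion) pairs become supervised training data
    for the student model."""
    return [{"prompt": p, "completion": query_teacher(p)} for p in prompts]

if __name__ == "__main__":
    prompts = ["Explain photosynthesis.", "Summarize the French Revolution."]
    dataset = build_distillation_dataset(prompts)
    for example in dataset:
        print(example)
    # A student model fine-tuned on `dataset` would absorb the teacher's
    # style and knowledge -- the use that many providers' terms of
    # service forbid when the goal is building a competing model.
```

The key point is how ordinary the mechanics are: the pipeline needs nothing more than API access and a fine-tuning loop, which is exactly why providers police the practice through contract terms rather than technical barriers.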
The legal consequences hinge on how xAI accessed OpenAI’s technology. If xAI engineers used OpenAI’s publicly available API and fed the outputs into Grok’s training pipeline, that would likely constitute a breach of contract, a serious but relatively contained legal problem. If xAI somehow accessed proprietary model weights or internal data without authorization, the exposure would be far greater, potentially triggering trade-secret claims under federal and state law or even liability under the Computer Fraud and Abuse Act.
This distinction is not purely hypothetical. The Wall Street Journal reported in late 2024 that xAI engineers had used OpenAI’s outputs during Grok’s early development, a detail that predates the current testimony and suggests the practice was not a one-off experiment. Neither xAI nor OpenAI has issued a public statement responding directly to Musk’s courtroom remarks about Grok’s training methods.
The credibility problem for Musk
Even if the Grok training issue does not directly undermine Musk’s legal claims about OpenAI’s nonprofit conversion, it creates a credibility problem that is hard to ignore. Musk built this lawsuit around the argument that OpenAI sold out its ideals. If his own company benefited from OpenAI’s work while he was publicly attacking the organization, a judge or jury weighing his motives may view his claims with greater skepticism.
OpenAI’s attorneys appear to recognize this leverage. The cross-examination’s pivot toward xAI’s development practices was not accidental; it was designed to paint Musk as someone willing to exploit the very technology he accuses OpenAI of hoarding. Whether that framing holds up will depend on the documentary evidence, including emails, texts, and internal notes from OpenAI’s early years that were discussed during the proceedings but have not been released publicly in full.
Some of those materials may eventually be filed in redacted form on the public docket. Others could remain sealed if the judge determines they contain sensitive business information. Any ruling that compels xAI to open its internal technical files to discovery could itself set an important precedent for future disputes between AI companies.
What the technical record does and does not show
No independent technical analysis has confirmed whether Grok’s outputs bear measurable statistical traces of training on OpenAI models’ outputs. Researchers have developed forensic techniques, such as analyzing output distributions and detecting statistical fingerprints, that can sometimes reveal when one model has been trained on another’s outputs. But no such study has been published in connection with this case, and without that kind of empirical work, outside observers are left relying on legal filings and testimony rather than direct comparisons of the systems.
That gap matters. Courtroom testimony about training methods is filtered through legal strategy; both sides have incentives to characterize the facts in ways that serve their arguments. A technical audit, whether court-ordered or conducted independently, would provide a more objective basis for evaluating the claims. Whether Judge Gonzalez Rogers orders such an analysis, or whether either party commissions one, remains to be seen.
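For a sense of what such fingerprinting involves, consider a deliberately simplified sketch: comparing the token-frequency distributions of two models’ outputs with a divergence measure. The sample strings below are invented for illustration, and real audits use far larger corpora and more sophisticated statistics, but the underlying logic is the same.

```python
# Hedged sketch of one forensic idea: a model trained on another's
# outputs tends to reproduce its characteristic phrasings, pulling its
# token-frequency distribution closer to the reference model's.
# Smaller divergence suggests (but does not prove) shared lineage.
# All sample outputs here are invented.

import math
from collections import Counter

def token_distribution(texts: list[str]) -> dict[str, float]:
    """Normalized word-frequency distribution over a set of outputs."""
    counts = Counter(tok for t in texts for tok in t.lower().split())
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

def kl_divergence(p: dict[str, float], q: dict[str, float],
                  eps: float = 1e-9) -> float:
    """Approximate KL(P || Q) over the union vocabulary, with light
    smoothing so unseen tokens do not divide by zero."""
    vocab = set(p) | set(q)
    return sum(p.get(t, eps) * math.log(p.get(t, eps) / q.get(t, eps))
               for t in vocab)

if __name__ == "__main__":
    reference = ["as an ai language model i cannot",
                 "i cannot assist with that request"]
    candidate = ["as an ai language model i cannot help",
                 "i cannot assist with that"]
    unrelated = ["the quarterly earnings beat expectations",
                 "shares rose in early trading"]
    ref_dist = token_distribution(reference)
    # The candidate's distribution sits far closer to the reference
    # than the unrelated corpus does, yielding a much smaller KL value.
    print("candidate:", kl_divergence(token_distribution(candidate), ref_dist))
    print("unrelated:", kl_divergence(token_distribution(unrelated), ref_dist))
```

Even in this toy form, the limits are visible: distributional overlap is suggestive rather than conclusive, which is why a court-ordered audit would need access to training data and internal records, not just the models’ outputs.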
What this trial could reshape
The boundary between legitimate competitive research and improper use of a rival’s technology has been a gray area in the AI industry for years. Companies routinely benchmark their models against competitors, study published research, and in some cases test rivals’ products to understand their capabilities. But training directly on a competitor’s outputs crosses a line that most major AI labs have drawn in their terms of service, even if enforcement has been inconsistent.
Musk v. Altman is now positioned to test that boundary in a federal courtroom. How Judge Gonzalez Rogers handles discovery disputes, evidentiary objections, and any eventual ruling on liability will influence how aggressively AI companies police each other’s conduct going forward. A ruling that treats API-output distillation as a clear contractual violation could prompt every major AI lab to tighten its monitoring and enforcement. A ruling that treats it as a minor infraction, or finds that the terms were too vague to enforce, could effectively greenlight the practice across the industry.
The verified facts so far establish a high-stakes conflict over OpenAI’s transformation and Musk’s role in its founding. The unresolved details around Grok’s training remain, for now, allegations and courtroom exchanges rather than settled findings. But the trial has already surfaced a question that will outlast any verdict: in an industry built on shared research and open publications, where does inspiration end and misappropriation begin?
This article was researched with the help of AI, with human editors creating the final content.