Image Credit: Steve Jurvetson - CC BY 2.0/Wiki Commons

OpenAI is facing a fresh internal revolt, this time from inside its own economics shop. A departing staffer has accused the company of steering research away from neutral analysis and toward messaging that bolsters its commercial agenda, sharpening long‑running questions about how the lab balances science, safety, and self‑interest. The dispute lands on top of a broader wave of exits by safety and policy specialists who say the company is drifting from its original mission.

At stake is not just one researcher’s job, but the credibility of the economic case for generative AI at a moment when governments, unions, and regulators are hungry for independent evidence. If OpenAI’s own insiders believe that work is being bent into advocacy, the company’s influence over debates about jobs, inequality, and regulation could start to look less like expertise and more like lobbying in a lab coat.

The staffer who said the numbers were being massaged

The core allegation is stark: a member of OpenAI’s economics team resigned after concluding that internal studies of AI’s impact on work were being nudged away from uncomfortable findings and toward narratives that favor rapid deployment. According to reporting on the departure, the staffer argued that what had been framed as neutral economic research was increasingly expected to support the company’s preferred story about productivity and growth, rather than follow the data wherever it led. Four sources close to the situation described a pattern in which critical or ambiguous results were softened, sidelined, or delayed, while upbeat conclusions about job creation and efficiency were fast‑tracked.

Those concerns were amplified by claims that OpenAI had become more guarded about publishing research that highlighted the potentially negative effects its systems could have on employment and wages. The departing economist reportedly felt that the team's mandate had shifted from understanding how AI might reshape labor markets to defending the idea that the technology would be a net positive, even when internal models suggested more mixed outcomes. That tension, and the sense that economic research was drifting into AI advocacy, ultimately led the staffer to walk away, a move later echoed in community discussions where critics said the company had lost its appetite for uncomfortable truths.

Inside the economic research pivot toward advocacy

From the outside, economic modeling might sound like a dry, technical function, but inside a company like OpenAI it is central to how executives argue that their products deserve trust, investment, and light‑touch regulation. The departing staffer’s account suggests that this function was gradually repurposed, with researchers encouraged to emphasize scenarios in which generative models complement workers rather than replace them, and to foreground long‑term gains over short‑term dislocation. In practice, that meant framing job losses as temporary “reallocations,” downplaying sectors where automation risks were acute, and highlighting case studies where AI tools boosted output without cutting headcount.

According to people familiar with the team's work, the shift was not announced in a memo but emerged through subtle pressures: which drafts received detailed feedback from leadership, which charts were selected for slide decks, and which caveats were trimmed in the name of clarity. Over time, the staffer came to believe that the economics group was being used to generate talking points for policymakers and corporate partners, rather than to produce dispassionate analysis of AI's impact on jobs and inequality. That perception was reinforced by claims that OpenAI had become more cautious about releasing studies that highlighted risks, with critics on internal forums and in outside discussions arguing that the company is now more focused on promoting solutions than on airing uncomfortable evidence.

A pattern of exits from OpenAI’s safety and research ranks

The economist’s departure did not occur in isolation. Over the past two years, OpenAI has seen a steady stream of senior researchers and safety experts leave, often with public criticism of the company’s direction. Earlier this year, Jan Leike, a prominent figure in AI alignment, left after arguing that major labs were taking a deeply risky approach to the race for advanced systems. Leike openly blamed the company’s waning focus on safety for his departure, saying that in recent years safeguards and careful evaluation had been overshadowed by the drive to ship ever more capable models, a critique of the lack of institutional caution that resonated far beyond the research community.

Leike’s exit followed other high‑profile departures that chipped away at OpenAI’s image as a haven for long‑term safety work. The New York Times carried a major piece on Suchir Balaji, an AI researcher who spent nearly four years at OpenAI before leaving in August, detailing his concerns about how safety and copyright questions were being handled as the company pushed products like ChatGPT into the mainstream. Balaji’s story, and the broader coverage of his time at the lab, underscored how people who had joined to work on careful, principled AI development were increasingly uneasy with the pace and style of deployment, a theme that echoed through reporting on Balaji and others who had once been central to the company’s safety narrative.

Top technical leaders who said OpenAI lost its balance

The unease has not been limited to economists and policy staff. Some of OpenAI’s most senior technical leaders have also walked away, often with carefully worded but pointed parting messages. When Ilya Sutskever, one of the company’s co‑founders and its former chief scientist, resigned, he framed his departure as a vote of confidence in the long‑term project while still hinting at internal disagreements. In his post announcing his departure, Sutskever wrote that he was “confident that OpenAI will build AGI that is both safe and beneficial,” but the timing and context of his exit, after months of internal turmoil, raised questions about how aligned he remained with the company’s day‑to‑day choices around deployment and governance.

Other senior figures have been more blunt in private than in public, describing a culture where product launches and partnerships increasingly set the agenda, with safety and interpretability work scrambling to keep up. The cumulative effect is a picture of a lab that once prided itself on internal dissent and long‑term thinking, but that now struggles to retain people whose primary focus is on guardrails rather than growth. For staffers in adjacent areas, such as economic modeling or policy analysis, watching top technical leaders depart can reinforce the sense that critical voices are losing influence, and that the company’s center of gravity has shifted toward marketing and deal‑making.

Leadership churn after the attempted ouster of Sam Altman

The backdrop to these individual departures is a period of intense leadership churn that began after the dramatic attempt to remove chief executive Sam Altman in 2023. That episode, which briefly saw Altman out of the company before he returned with a reconstituted board, signaled deep disagreements over how fast and how boldly OpenAI should move. In the months that followed, several top leaders left, suggesting that the internal compromise that brought Altman back did not fully resolve the underlying tensions about risk, governance, and the company’s relationship with its largest investor.

On Wednesday, the company’s Chief Technology Officer, Mira Murati, announced her resignation, joining a growing list of senior figures who have stepped away since the boardroom crisis. Her exit, alongside other leaders such as Bob McGrew and Barret Zoph, underscored how fragile the post‑crisis equilibrium remained, and how difficult it was to keep a unified leadership team in place while navigating pressure from investors, regulators, and employees. For rank‑and‑file researchers, watching Murati and other long‑time colleagues depart has been a reminder that the company’s internal compass is still being contested, a point driven home by the coverage of her joining the exodus.

Safety teams “jumping ship” and what that signals

Beyond the C‑suite, the most worrying trend for outside observers has been the pace at which safety and governance staff are leaving. Reports have described how world‑renowned AI researcher Andrej Karpathy and former OpenAI staffers Daniel Kokotajlo and Cullen O’Keefe no longer work at the company, despite having been central to its early safety and policy efforts. These departures, along with those of others who signed internal letters warning about existential risks and then left, have fueled the perception that the people tasked with making sure AI does not go rogue are “jumping ship” faster than they can be replaced.

For a company that has built its brand on the promise of responsible AI, losing figures like Karpathy, Kokotajlo, and O’Keefe is more than a human‑resources problem. It raises questions about whether internal critics feel heard, and whether the structures meant to slow or reshape risky decisions actually have teeth. When safety specialists conclude that they can have more impact from the outside, or at rival labs, it suggests that internal checks are being outpaced by commercial imperatives, a dynamic that was captured starkly in coverage of how Karpathy, Kokotajlo, O’Keefe, and others chose to leave after raising alarms.

Altman’s brief ouster and the safety debate it exposed

The attempted removal of Sam Altman did more than reshuffle the board; it exposed a deep rift over how OpenAI should weigh safety against speed. Altman’s temporary exit from OpenAI in November raised eyebrows because it was tied to concerns about AI safety and the company’s governance, with some insiders arguing that the board had moved against him because it feared the company was racing ahead without adequate guardrails. His rapid reinstatement, backed by employees and investors, signaled that a critical mass inside and outside the company prioritized continuity and momentum over the more cautious approach favored by some directors.

For staffers in research and safety roles, that episode was a clarifying moment. It showed that when safety and governance concerns collide with the company’s growth trajectory, the latter can win decisively, at least in the short term. The economist who recently resigned would have watched that drama unfold, and may have drawn the conclusion that internal checks on how AI is marketed and deployed are ultimately subordinate to leadership’s strategic goals. The broader debate about whether OpenAI can both lead the race to advanced systems and act as a responsible steward was sharpened by coverage that noted how Altman’s temporary exit crystallized long‑simmering worries about the company’s role in setting norms for the entire industry.

From “Open” AI to a guarded, Microsoft‑aligned powerhouse

Part of the sting in the latest resignation comes from how far OpenAI has traveled from its founding ideals. One early backer recalled that “OpenAI was created as an open source (which is why I named it ‘Open’ AI), non‑profit company to serve as a counterweight to Google,” a vision that emphasized transparency, shared research, and a mission that was not beholden to any single corporate patron. That origin story has become harder to square with the company’s current structure, in which a capped‑profit entity sits atop a powerful commercial operation and a multibillion‑dollar partnership with a single tech giant.

Critics argue that this evolution has made it more difficult for OpenAI to publish research that cuts against its own business interests, whether on safety, copyright, or labor markets. When a company is effectively controlled by a major investor and depends on enterprise contracts for revenue, the incentives to present AI as an unalloyed good are strong, and the space for internal teams to publish inconvenient findings can shrink. The economist who resigned over advocacy concerns was, in effect, challenging whether the “Open” in the company’s name still reflects its practices, a question that has been raised repeatedly in discussions where people point back to the original claim that OpenAI was meant to be a counterweight to Google rather than a conventional tech platform.

Why economic research has become a political battleground

The fight over how OpenAI frames its economic research matters because those studies are increasingly used to shape public policy. Governments weighing rules on automation, unions negotiating contracts that reference AI tools, and regulators assessing competition all look to quantitative forecasts about job displacement and productivity. If those forecasts are produced by teams that feel pressured to tell a particular story, the risk is that entire policy regimes are built on partial or optimistic assumptions, leaving workers and communities exposed when reality diverges from the slide deck.

Inside OpenAI, the economist’s complaint can be read as a warning that the line between research and lobbying is blurring. When internal models show that certain sectors, such as customer support or routine programming, face significant automation risk, but public‑facing reports emphasize only the potential for “augmentation,” the company is not just managing its image, it is shaping how lawmakers and the public understand the trade‑offs. In that context, the staffer’s decision to leave rather than continue to produce work they saw as advocacy is a reminder that the integrity of economic analysis is itself a form of safety infrastructure, one that can either illuminate the costs of rapid deployment or help to obscure them.

What the exodus means for OpenAI’s credibility

Viewed together, the economist’s resignation, the exits of safety‑minded researchers like Jan Leike and Suchir Balaji, the departure of Ilya Sutskever, and the churn among executives such as Mira Murati paint a picture of a company in flux. OpenAI remains a technical powerhouse, but its claim to be uniquely thoughtful about the societal impact of its work is harder to sustain when so many of the people hired to embody that thoughtfulness are leaving. Each departure chips away at the narrative that the lab is both leading the race to advanced systems and setting the standard for responsibility.

For now, OpenAI still commands enormous influence over how AI is discussed in boardrooms and parliaments, and its economic reports and safety frameworks continue to be cited as benchmarks. Yet the growing chorus of former insiders who say the company is drifting toward advocacy and away from open, critical inquiry suggests that its moral authority is under strain. If more staffers follow the latest economist out the door, and if external researchers and policymakers begin to treat the company’s analysis as self‑interested rather than neutral, OpenAI may find that the hardest problem in AI is not alignment of models, but alignment between its public promises and its internal practices.