Morning Overview

MAGA and left-wing groups join to challenge AI, drawing Musk scrutiny

A coalition of former OpenAI employees and nonprofit leaders has petitioned California and Delaware attorneys general to block the company’s shift to a for-profit structure, just as Governor Gavin Newsom signed a new AI transparency law. The simultaneous actions reflect a growing, ideologically mixed pushback against the concentration of AI power in corporate hands, and they have drawn attention from Elon Musk, whose own AI venture competes directly with OpenAI.

California Signs AI Transparency Into Law

Governor Newsom signed SB 53, the Transparency in Frontier Artificial Intelligence Act, on September 29, 2025. The law requires developers of the most powerful AI models to disclose safety testing results to state regulators. Because California hosts the headquarters of nearly every major AI company, the measure effectively sets a national floor for disclosure, even though it is a state law.

The timing matters. SB 53 arrives while federal AI legislation remains stalled in Congress, leaving states to fill the gap. California’s decision to act alone gives regulators a concrete tool to demand accountability from companies that have, until now, self-reported safety benchmarks on a voluntary basis. For ordinary users of tools like ChatGPT, Gemini, or Claude, the practical change is that the companies behind those products will face mandatory scrutiny rather than relying on their own assurances.

The law also creates a new registration and reporting infrastructure. State agencies will need to build systems capable of receiving and evaluating complex technical disclosures from AI developers, and will likely need to recruit staff with the expertise to interpret them. How quickly that infrastructure takes shape will determine whether SB 53 functions as a real enforcement mechanism or a symbolic gesture.

Supporters of SB 53 argue that transparency requirements are a minimal first step for systems that could affect everything from financial markets to critical infrastructure. They point out that other high-risk industries, such as aviation and pharmaceuticals, already operate under rigorous testing and reporting regimes. Critics, including some industry lobbyists, warn that state-level rules could create a patchwork of obligations that slow innovation or push companies to relocate. For now, California is betting that its status as a tech hub gives it enough leverage to set de facto national standards.

Former OpenAI Staff Push Back on For-Profit Conversion

Separately, a coalition of former OpenAI employees and nonprofit representatives petitioned the attorneys general of California and Delaware to block OpenAI’s planned conversion from a nonprofit to a for-profit entity. The petitioners argue that the restructuring would betray the company’s founding commitment to developing artificial general intelligence for the benefit of all people, not shareholders.

OpenAI was originally incorporated as a nonprofit, with its commercial arm structured as a “capped-profit” subsidiary. The proposed conversion would remove that cap and transform the organization into a conventional corporation. For the petitioners, this is not an abstract governance question. They contend that a profit-driven OpenAI would face pressure to prioritize revenue over safety research, accelerating deployment timelines for systems that have not been adequately tested. The petition asks state officials to use their authority over nonprofit assets to intervene before the conversion is finalized.

The petition’s legal theory rests on a specific point: nonprofit assets belong to the public interest, and converting them to private ownership requires attorney general approval. Under this view, OpenAI’s accumulated research, models, and brand value were built under a charitable mandate and cannot simply be transferred into a profit-maximizing structure without public oversight. If California or Delaware acts on the request, the resulting legal proceedings could delay or reshape OpenAI’s corporate plans at a moment when the company is seeking large new investments and racing rivals to deploy more capable systems.

OpenAI’s current leadership has defended the shift toward a more conventional corporate model as necessary to fund expensive AI development. Training and operating cutting-edge models requires vast computing resources, and executives argue that only a fully for-profit structure can attract the capital needed to compete globally. The petitioners counter that this logic effectively concedes that whoever can raise the most money will control the trajectory of AI, undermining the original promise of a nonprofit stewarding the technology for humanity as a whole.

An Unlikely Alliance Takes Shape

What makes this moment unusual is the ideological range of the people raising alarms. MAGA-aligned critics of Big Tech and left-wing advocacy groups have arrived at overlapping conclusions about AI risk, even if their motivations differ. Conservatives suspicious of Silicon Valley’s cultural influence and progressives worried about labor displacement and surveillance both see unchecked AI development as a threat. That convergence has produced joint pressure on state regulators that neither faction could generate alone.

The coalition behind the OpenAI petition illustrates this dynamic. Former employees who left the company over safety disagreements are joined by nonprofit representatives whose concerns range from corporate governance to the broader social effects of AI. Their shared demand, that attorneys general exercise oversight, cuts across traditional partisan lines. The result is a lobbying effort that state officials will find harder to dismiss as the complaint of a single interest group.

This cross-ideological alignment also complicates the usual political framing of AI regulation. In Washington, tech regulation has often split along party lines, with Democrats favoring consumer protection rules and Republicans warning about stifling innovation. The emergence of a coalition that includes voices from both camps suggests that the debate is shifting. AI governance is becoming less a question of left versus right and more a question of public accountability versus corporate autonomy.

At the same time, the alliance is fragile. Some right-leaning critics focus on perceived political bias in AI outputs, while many left-leaning advocates emphasize economic inequality and civil rights. Their shared skepticism of concentrated corporate power may hold for now, but disagreements over specific remedies (such as bans on certain applications, unionization in tech, or data privacy rules) could quickly reopen partisan divides. For regulators, the current window of overlapping concern may be an opportunity to act before consensus splinters.

Musk’s Competing Interests Add Tension

Elon Musk occupies a peculiar position in this story. He co-founded OpenAI, departed acrimoniously, and now runs xAI, a direct competitor. His public criticism of OpenAI’s leadership and direction has been sharp and sustained. He has also filed lawsuits challenging the company’s nonprofit-to-profit transition, making him both a market rival and a legal adversary.

Musk’s involvement draws scrutiny to the coalition’s motives. Critics point out that blocking OpenAI’s for-profit conversion would directly benefit xAI by limiting a competitor’s access to capital. Supporters counter that Musk’s commercial interests do not invalidate the underlying legal and ethical arguments about nonprofit asset protection. Both readings can be true at the same time, and the tension between them is precisely what makes this fight so charged.

His broader stance on AI regulation has been inconsistent. Musk has called for federal oversight of AI development while simultaneously pushing xAI products to market at speed. That contradiction has not gone unnoticed by either the MAGA-aligned groups that view him as an ally or the progressive organizations that see him as a self-interested actor. His visibility ensures that any move against OpenAI is interpreted not only as a question of law and ethics, but also as a maneuver in a high-stakes commercial rivalry.

What State Action Could Mean for AI’s Future

If California or Delaware were to intervene in OpenAI’s restructuring, the impact would reach far beyond a single company. A decision to slow or condition the conversion could establish a precedent that large tech nonprofits cannot easily move public-benefit assets into private ownership without scrutiny. Other organizations experimenting with hybrid structures would have to reassess how they promise public benefit while courting investors.

Combined with SB 53, such action would signal that states are willing to use both corporate and regulatory law to shape the AI landscape. Transparency rules would force companies to reveal more about how they test and deploy powerful systems, while nonprofit oversight could limit how quickly mission-driven entities pivot toward shareholder control. Together, these tools amount to a nascent model of democratic oversight over technologies that have so far been governed mostly by internal company policies.

Industry leaders warn that aggressive state action could push AI development to more permissive jurisdictions. Yet California’s size and centrality to the tech economy mean that many firms will find it difficult to abandon the state entirely. Instead, they may adapt by building stronger internal compliance teams, engaging more directly with regulators, and formalizing safety processes that were previously ad hoc.

For the public, the stakes are straightforward even if the legal questions are not. AI systems are moving rapidly into workplaces, classrooms, hospitals, and government agencies. Whether they are governed by transparency mandates and public-interest fiduciary duties, or primarily by the demands of capital markets, will shape who benefits from the technology and who bears the risks. The outcome of the OpenAI petition and the rollout of California’s new law will offer an early indication of how much control democratic institutions can exert over the next wave of AI development.


*This article was researched with the help of AI, with human editors creating the final content.