Image Credit: Gage Skidmore from Peoria, AZ, United States of America - CC BY-SA 2.0/Wiki Commons

Joseph Gordon-Levitt has spent the past two years turning a vague cultural anxiety about artificial intelligence into a pointed accusation: the most powerful AI companies are operating in what he sees as a law-free zone. Rather than treating AI as a neutral tool, he is arguing that the way it is built, trained, and deployed is already redistributing power and money, and that lawmakers are letting it happen without basic rules.

From Hollywood stages to tech conferences and political campaigns, Gordon-Levitt has been building a case that AI should be bound by the same expectations that govern other industries: pay for what you use, respect democratic oversight, and do not hide behind self-regulation. I see his campaign less as a celebrity side project and more as an early test of whether creative workers and voters can force guardrails onto a technology that is racing ahead of the law.

From residuals to rights: how Gordon-Levitt framed AI as an economic fight

Gordon-Levitt’s critique of AI did not start with existential risk; it started with money. As a working actor and filmmaker, he has argued that if artificial intelligence systems ingest a performer’s voice, a writer’s script, or a musician’s catalog to generate new content, those systems should pay the people whose work they rely on. In his view, the logic is the same as the residuals that television actors receive when a show is rerun or streamed, only now the rerun is a model training pass that quietly absorbs decades of creative labor.

He has pushed the idea of a modern residuals program that would track when creative work is used to train or power AI tools, then route payments back to the original artists, coders, and trainers. That proposal, which he has described as technically complex but morally straightforward, treats AI not as a magical author but as a remix engine built on human effort, and it underpins his argument that current practices amount to uncompensated extraction of value from other people’s work, a stance he laid out in detail in an op-ed arguing that if artificial intelligence uses your work, it should pay you.

“Follow any laws”: the Fortune stage where he sharpened the attack

By the time Gordon-Levitt appeared at a high-profile tech conference hosted by Fortune, his message had hardened into a blunt accusation: AI companies, in his words, do not have to “follow any laws.” He was not claiming that these firms literally exist outside the legal system, but that the core practices that make their products possible, from scraping data to deploying generative models at scale, are happening in a regulatory vacuum that would be unthinkable for industries like pharmaceuticals or aviation. To him, the contrast between the scrutiny on a new drug and the relative freedom to roll out a powerful chatbot is the definition of a double standard.

That critique landed directly on companies like Meta, which has poured resources into generative AI systems and integrated them into products that reach billions of people. When Gordon-Levitt raised alarms about how these tools are being built and governed, a Meta spokesperson, Andy Stone, publicly pushed back and pointed out that Gordon-Levitt’s wife had previously served on the board of OpenAI, an attempt to frame his criticism as self-interested even as he continued to argue that the sector is on a “pretty dystopian road” without binding rules, a clash captured in reporting on his comments and Stone’s response on Meta’s behalf.

Calling the current trajectory “dystopian”

On that same Fortune stage, Gordon-Levitt went beyond complaints about unfairness and described the current AI trajectory as “pretty dystopian.” He was not talking about science fiction robots, but about a world where a handful of companies control the most advanced models, harvest data at industrial scale, and then use those systems to optimize advertising and engagement rather than public benefit. In his telling, the dystopia is not a far-off nightmare; it is the quiet normalization of surveillance, manipulation, and creative displacement wrapped in friendly product launches.

He tied that warning to specific failures of self-regulation, pointing to incidents where AI tools have amplified misinformation, generated harmful content, or been deployed without adequate safety checks. For Gordon-Levitt, these episodes are proof that voluntary ethics boards and internal guidelines are not enough, and that leaving AI governance to the same executives who profit from rapid deployment is a recipe for abuse, a concern he pressed when he said the industry is heading down a “pretty dystopian road” in a conversation recounted in coverage of how Gordon-Levitt framed the stakes.

Why he keeps singling out Meta and Big Tech politics

Gordon-Levitt has not limited himself to abstract critiques of “the industry.” He has repeatedly named Meta as a symbol of what he sees as AI’s worst impulses, from aggressive data collection to the use of generative tools to keep users scrolling. In his view, the company’s history with social media harms makes it a particularly troubling steward of powerful AI models, especially when those models are integrated into products used by teenagers and children. That is why he has focused so much of his ire on the company’s leadership and its political influence.

He has argued that Meta and other Big Tech firms are not just building AI; they are also shaping the rules that will govern it through lobbying and campaign spending. According to Gordon-Levitt, a network of Big Tech super PACs is working to water down or block regulations that would limit how these companies can use data or deploy AI systems, and he has urged voters to “let our lawmakers know” that they should not be taking their cues from those corporate interests, a warning he delivered while explaining why he is going after Meta and the Big Tech super PACs.

Clashing with Gavin Newsom over a vetoed AI bill

Gordon-Levitt’s frustration with what he calls a law-free AI zone has not been reserved for tech executives. He has also gone after elected officials who, in his view, have failed to stand up to the industry. When California governor Gavin Newsom vetoed an AI regulation bill that would have imposed new guardrails on powerful models, Gordon-Levitt publicly accused him of being “too scared to sign it.” For an actor who has spent much of his career in California, it was a pointed rebuke of a governor who often brands the state as a global leader in tech policy.

He framed the veto as a missed chance for California to set a standard for transparency, accountability, and safety testing before AI systems are deployed at scale. By rejecting the bill, Gordon-Levitt argued, Newsom sided with companies that want to move fast and avoid liability rather than with residents who will live with the consequences of flawed or biased systems, a criticism he leveled explicitly when he called out Newsom for vetoing the AI regulation bill and said the governor was “too scared” to sign it.

Escalating the pressure on California’s political class

The clash with Newsom was not a one-off outburst. Gordon-Levitt has used it to illustrate a broader pattern in which state leaders talk about responsible innovation but balk at concrete rules once industry lobbyists push back. He has argued that California, home to many of the world’s most powerful AI labs, has a special responsibility to show that innovation and regulation can coexist, and that vetoing even modest oversight sends the opposite message. In his telling, the state is at risk of becoming a haven for companies that want the benefits of its talent and infrastructure without the burden of strong consumer protections.

By publicly shaming a high-profile Democrat over AI, Gordon-Levitt has also signaled that he sees this as a cross-partisan accountability issue rather than a simple left-right fight. He has suggested that voters should scrutinize any politician, regardless of party, who echoes industry talking points about stifling innovation while ignoring the concrete harms that unregulated AI can cause, a stance he reinforced when he doubled down on his criticism of the California governor in follow-up remarks slamming the veto.

From Hollywood to Capitol Hill: urging Congress to act on superintelligence

Gordon-Levitt’s activism has also expanded into the realm of national security and long-term risk. Alongside other public figures and technologists, he has backed calls for Congress to consider strict limits on the development of so-called superintelligent AI systems that could, in theory, surpass human capabilities across a wide range of tasks. He has endorsed language that warns of “unprecedented health and prosperity” on one hand and “even potential human extinction” on the other, arguing that such stakes demand more than voluntary pledges from corporate labs.

In that context, he has urged lawmakers to treat advanced AI research more like nuclear technology or high-risk biotech, with licensing regimes, mandatory safety evaluations, and clear lines of democratic control. The goal, as he describes it, is not to freeze progress but to ensure that decisions about systems with civilization-scale impact are not left solely to a small group of executives and investors, a position reflected in a letter he backed that described innovative AI tools as capable of delivering enormous benefits but warned that many leading AI companies are pursuing capabilities that could lead to catastrophic outcomes.

Taking on Mark Zuckerberg’s AI vision

If Meta is Gordon-Levitt’s corporate foil, Mark Zuckerberg is the personification of the AI path he rejects. He has said it is “hard to describe how angry” Meta’s AI strategy makes him, particularly the push to embed generative assistants across products used by young people. For Gordon-Levitt, the problem is not only what the models can generate, but the business model behind them, which he sees as optimized for engagement and ad revenue rather than for truth, mental health, or civic health.

He has warned that allowing a company with Meta’s track record on misinformation and content moderation to dominate consumer AI risks locking in a future where the same incentives that warped social media now shape the next computing platform. That is why he has supported efforts in Congress to block federal laws that would preempt states from regulating AI and to craft specific protections for children interacting with AI chatbots, concerns he has voiced repeatedly in his public criticism of Mark Zuckerberg and Meta’s AI strategy.

Warning from the Utah AI Summit: data, democracy, and the “law-free” feeling

When Gordon-Levitt took the stage at the 2025 Utah AI Summit at the Salt Palace Convention Center, he brought his Hollywood credibility into a room filled with policymakers, researchers, and local leaders. There, he described his fears about how AI systems are trained on vast troves of personal and creative data without meaningful consent or compensation, and how that dynamic feeds the sense that AI companies are operating in a space where traditional legal and ethical norms do not apply. For him, the issue is not just privacy in the abstract, but the erosion of individual control over one’s own work and information.

He also connected AI governance to broader democratic concerns, arguing that if a small number of companies can shape what billions of people see, hear, and interact with through AI-driven feeds and assistants, then they effectively wield a kind of private regulatory power over public discourse. That is why he has praised efforts to strengthen data protection initiatives and to involve communities in decisions about how AI is deployed in schools, workplaces, and public services, themes he emphasized when Utah AI Summit attendees at the Salt Palace Convention Center heard him lay out his concerns about data protection and democratic oversight.

Amplifying the “no laws” critique across social platforms

Gordon-Levitt’s “why don’t they have to follow any laws” line has resonated in part because it is so easy to clip and share. A short video of him making that point at Fortune’s Brainstorm AI event has circulated widely on social platforms, where he reiterates that he is “not against the technology” but deeply skeptical of the power structures around it. In that clip, he argues that if AI were “set up” differently, with clear rules and shared benefits, it could propel a great wave of progress rather than entrenching existing inequalities.

By leaning into that nuance, he has tried to distinguish his position from blanket techno-pessimism, framing himself instead as a critic of unaccountable power. The viral nature of his remarks has helped push his core message, that AI companies should not be allowed to operate as if they are above the law, into feeds that might otherwise be dominated by product demos and hype, a dynamic captured in a widely shared reel in which he stresses that he is not opposed to AI itself.

From viral clips to policy pressure: can celebrity outrage move the needle?

As Gordon-Levitt’s comments have bounced from conference stages to LinkedIn posts and news write-ups, they have helped crystallize a simple narrative: AI companies are racing ahead while lawmakers lag behind. A widely shared post highlighting his question about why AI firms do not have to “follow any laws” has turned that phrase into a shorthand for broader unease about regulatory capture and technological exceptionalism. In a political environment where attention is scarce, that kind of memorable framing can matter as much as white papers and hearings.

The open question is whether this blend of celebrity outrage and policy detail can translate into concrete rules. Gordon-Levitt is betting that by repeatedly calling out specific companies, governors, and members of Congress, and by tying AI to pocketbook issues like creative pay and child safety, he can help build a constituency for stronger oversight. His challenge to the “law-free zone” of AI is now part of a larger public conversation, one that was amplified when Fortune shared his remarks and a conference sound bite became a rallying cry for people who feel that the rules of the digital future are being written without them.
