
The race to build artificial intelligence is not just a story about clever code or faster chips. It is a struggle over who will own the predictive engines that allocate jobs, credit, security, and political attention. A growing group of economists and technologists argue that the richest players are deliberately steering AI to entrench their power, and that the strategy is already reshaping the economy and the social contract.
From Wall Street to Washington, the same pattern keeps surfacing: concentrated wealth, concentrated data, and concentrated compute, all converging in the hands of a narrow elite. The result is an AI boom that promises efficiency and innovation, but is also quietly shifting control over work, profits, and even regulation itself away from the public and toward a small, well-capitalized class.
How inequality set the stage for AI capture
Before AI became the latest gold rush, the ground had already been prepared by decades of widening inequality. According to the Peter G. Peterson Foundation, the wealthiest 20% of American households have seen their income rise 165% since the late 1970s, while income for the bottom 20% has barely moved. That imbalance means the people with the most to invest in AI are the same people who have already benefited most from the last era of technological and financial change, and they are now positioned to buy up the infrastructure, talent, and political influence that AI requires.
Professor Scott Galloway has argued that in America, “income inequality is out of control” and that the country’s tax policy has “gone full oligarch,” warning that “we risk revolution” if the current trajectory continues. In his view, the United States has “no excuse” for allowing such a skewed distribution of gains, and he has called for aggressive redistribution to save the middle class from a system that increasingly serves the top one percent. When that same skewed system meets a general-purpose technology like AI, the risk is that the benefits will be captured by those already at the top, while the costs, from job losses to higher energy prices, are pushed down the income ladder.
AI as a new engine of oligarchic power
Galloway has described the current U.S. economy as one where “America isn’t working” for most people, with political power and regulation tilted toward entrenched interests. He argues that the combination of concentrated corporate clout and weak antitrust enforcement has produced markets where a handful of firms dominate entire sectors, from tech to finance, and that AI is arriving as a force multiplier for this imbalance. In his analysis, he links this structural tilt to a political system that is increasingly responsive to donors and corporate lobbyists rather than to the broad electorate.
AI fits neatly into this pattern because it rewards scale: the more data, compute, and capital a company can marshal, the better its models and the wider its moat. Galloway has warned that when 40% of the S&P 500 is riding on just 10 companies, “if they get cut in half, nobody gets out alive,” underscoring how much market power is already concentrated in a small cluster of tech giants. As those same firms pour billions into AI, they are not just building products, they are building a new layer of infrastructure that other businesses, governments, and citizens will depend on, deepening their leverage over the rest of the economy.
Prediction machines and the question of control
Economist Maximilian Kasy argues that AI is, at its core, a system for making predictions about people and the world, and that the central political question is not how smart those systems become but who directs them. In his work on the political economy of algorithms, he frames AI as a tool that can be used either to empower workers and citizens or to surveil, discipline, and extract value from them, depending on who owns the models and the data they ingest. As he puts it, the key question for AI is, “Who controls it?”, a point he develops in detail in his discussion of prediction and power.
Kasy’s framing helps explain why AI has become such an attractive asset for the wealthy. Whoever controls the predictive engines that decide who gets a mortgage, who is flagged for extra police scrutiny, whose resume is surfaced to a recruiter, or which political ad is shown to which voter, effectively controls a quiet but pervasive layer of social coordination. If those engines are owned by a small group of corporations and investors, then AI becomes a mechanism for centralizing decision making in private hands, even as it appears to operate through neutral code. That is the deeper sense in which AI can be said to be “seizing control” on behalf of those who already hold it.
Profits, unemployment, and the “capitalist system”
On the technical side, some of the people who helped build modern AI are now warning that its economic logic is tilted toward capital rather than labor. Geoffrey Hinton, often described as the “Godfather of AI,” has said that the technology will create massive unemployment and send profits soaring, adding that “that is the capitalist system.” In his view, the same breakthroughs that allow machines to perform tasks once reserved for humans will, if left to market forces alone, primarily enrich the owners of those machines while eroding opportunities at the entry level, a dynamic he has outlined in detail when discussing how AI could reshape jobs and profits.
Hinton has also argued that tech giants cannot fully profit from their AI investments unless human labor is replaced, a blunt assessment of the incentives facing the largest firms. In his conversations about the business models behind large-scale AI, Geoffrey Hinton has stressed that the return on massive capital expenditures in data centers and chips depends on substituting algorithms for people, not just augmenting them. That logic aligns neatly with the interests of large shareholders, but it raises obvious questions about what happens to workers whose bargaining power is already weakened by decades of wage stagnation and declining union density.
Tax policy, UBI, and who pays for the AI boom
As AI threatens to displace jobs, some technologists and investors have floated universal basic income as a kind of social shock absorber, a way to keep consumption going even if traditional employment contracts fray. In a recent Office Hours segment, a viewer opened with “Hi Scott” and asked Galloway about eventual UBI “when AI takes all the jobs,” prompting him to push back on the idea that a simple cash transfer can fix deeper structural problems. In that discussion, he questioned why the conversation so often jumps to UBI instead of focusing on who owns the AI and how its gains are taxed.
Galloway has suggested that corporations benefiting from AI should face an alternative minimum tax of “30 or 40%” so that the public shares in the upside of automation. In a short video on the energy and capital demands of AI, he argued that companies drawing heavily on public infrastructure and subsidies should not be able to zero out their tax bills, calling for a floor that would ensure a meaningful contribution to the broader society that makes their profits possible, a point he underscored when discussing who pays for the AI power boom. The underlying argument is that without such guardrails, AI will accelerate a tax system that already favors capital over labor, deepening the sense that the game is rigged.
Energy, infrastructure, and the public’s hidden subsidy
Behind the sleek interfaces of chatbots and image generators lies a sprawling physical footprint of data centers, transmission lines, and power plants, much of it financed or facilitated by public policy. Galloway has warned that the AI boom is driving a surge in electricity demand that will require massive new investment, and that the costs of this buildout are likely to be socialized even as the profits are privatized. When he talks about the need for an alternative minimum tax on corporations, he is not just focused on fairness in the abstract, but on the concrete reality that taxpayers are effectively underwriting the infrastructure that makes large-scale AI possible.
Other analysts have made a similar point in different language, arguing that if AI is going to rely on public grids, public land, and in some cases public subsidies, then the returns should not be captured solely by shareholders. One proposal gaining traction is to channel a portion of AI-related profits into a social wealth fund that would pay dividends to citizens, treating AI as a kind of shared asset rather than a purely private one. As one detailed argument for such a fund puts it, we may be on the verge of an artificial intelligence revolution that promises to simultaneously boost productivity and concentrate power, and everyone should benefit from AI through mechanisms like a social wealth fund. The core idea is that if the public is effectively co-investing in AI’s infrastructure, it should also be a co-owner of the returns.
Regulation, conflicts of interest, and the Sacks controversy
The politics of AI are not playing out in the abstract; they are unfolding in real time in Washington, where President Donald Trump’s administration is grappling with how to regulate a technology that many of its allies are heavily invested in. One of the most vivid examples is David Sacks, a prominent venture capitalist who serves as a tech adviser to Trump and has made vast AI investments. In public remarks, Sacks has said that “Federal AI legislation is essential” and that “There’s no bigger issue for Little Tech, the builders who create the future, for whom AI is an existential threat, than Federal AI legislation, and yet we haven’t seen it,” a statement reported in coverage of how his dual role as investor and adviser is raising conflict-of-interest questions.
Within the president’s own coalition, MAGA factions are clashing over how aggressively to regulate AI, with some pushing for strict rules to protect workers and smaller firms, and others aligning with industry executives who want a lighter touch. Reporting on these internal battles has highlighted how Sacks recently secured a major victory that large tech executives had wanted for months, even as critics warn that his financial stake in AI could skew policy in favor of big investors. The controversy has become a flashpoint in the broader debate over whether AI rules will be written to serve the public interest or to lock in the advantages of those who already dominate the market, a tension captured in accounts of how MAGA factions are fighting over AI regulation.
The looming AI bubble and bailout fears
Behind the regulatory wrangling lies a more basic financial worry: that the AI boom could turn into a bubble, and that the same investors now pushing for rapid expansion will later demand public support if it bursts. Concerns about a potential bailout have been amplified by Sacks’ history, including his role in organizing efforts to secure government help during previous financial shocks. In recent reporting, critics have warned of a bailout if the AI bubble bursts, noting that if heavily leveraged AI bets go bad, the same voices that championed market discipline may suddenly argue that the sector is too important to fail, a scenario raised explicitly in coverage of how fear of a bailout is shaping the debate.
Galloway’s warning that markets now have “nowhere to hide” if a handful of giant firms stumble underscores how fragile this setup can be. When 40% of the S&P 500 is tied to just 10 companies, and those companies are also the primary drivers of AI investment, any sharp correction in AI valuations could ripple through retirement accounts, municipal budgets, and the broader economy; a 50% drop in stocks that make up 40% of the index would, on its own, erase roughly 20% of its value. In that context, the question is not just whether AI is overvalued, but who will bear the downside risk if the current exuberance proves misplaced: the investors who reaped the gains, or the public that may be asked to clean up the mess, a dilemma Galloway highlighted when he stressed that concentration in the index means “if they get cut in half, nobody gets out alive.”
Redistribution, resistance, and what comes next
Faced with these overlapping risks, Galloway has been blunt that “Redistribution is needed to save the middle class,” arguing that without a deliberate rebalancing of who gets what, the combination of AI and existing inequality could push America toward a breaking point. In his broader critique of how the United States has handled globalization, technology, and tax policy, he insists that the country “has no excuse” for allowing such extreme concentration of wealth and power, and that the arrival of AI makes the need for reform more urgent, not less. His call for higher corporate taxes, stronger antitrust enforcement, and investments in education and healthcare is rooted in a belief that the gains from AI should be shared rather than hoarded by the top one percent, a theme he has returned to repeatedly when explaining why income inequality threatens social stability.
Economists like Kasy, technologists like Hinton, and policy thinkers advocating social wealth funds are, in different ways, converging on a similar message: AI will not automatically democratize opportunity. Left to current market and political dynamics, it is more likely to deepen an already oligarchic order, giving those at the top new tools to monitor, predict, and shape the behavior of everyone else. Whether that trajectory can be altered will depend less on the next breakthrough in model architecture than on choices about ownership, taxation, and regulation that are being made right now, often behind closed doors, by the very people who stand to gain the most from keeping AI’s new levers of control in their own hands.