
Yuval Noah Harari, the historian behind the global bestseller “Sapiens,” is sharpening his warning about artificial intelligence. He now argues that every country is on the brink of two simultaneous AI shocks, one that will unsettle how people understand themselves and another that will upend how economies and states are organized. I see his argument as less a distant sci‑fi scenario than a near‑term political and social test that governments are still badly underestimating.
At the center of his case is a simple claim: AI is no longer just a tool that people use; it is becoming an active agent in shaping culture, identity, and power. That shift, he says, will trigger an identity crisis in every society and a structural crisis in every labor market and political system, unless leaders move quickly to set guardrails.
From “Sapiens” to AI Cassandra
Yuval Noah Harari built his reputation by zooming out on human history in books like “Sapiens,” arguing that shared stories and imagined orders hold societies together. That long view shapes how he talks about AI today. Instead of focusing on the latest chatbot, he is asking what happens when machines can generate the very narratives, myths, and political messages that define who we think we are. In his view, that is a civilizational turning point, not just a technological upgrade.
Earlier this year, Harari told audiences that he now sees AI as a force that will create two crises for every country, framing them as identity and structural shocks that will hit rich and poor nations alike. Reporting on his remarks notes that he and other speakers described AI as a challenge to the basic stories people tell about their work, their culture, and their loyalty to the state, not only to specific industries or apps. That framing connects directly to his broader concern that advanced systems can be used to manipulate culture and politics at scale, a theme he has developed in conversations about democracy and AI.
The first crisis: identity in an age of machine words
Harari’s first predicted shock is an identity crisis that cuts across borders. He argues that for centuries people have defined their special place in the world by their ability to think and speak in complex language. If AI systems can now generate essays, legal briefs, novels, and political speeches that rival human output, he warns that “if we continue to define ourselves by our ability to think in words our identity will collapse.” In his view, that collapse will not be abstract. It will show up in classrooms, workplaces, and online spaces where people struggle to see what is uniquely human about their own contributions.
At the World Economic Forum, Harari stressed that this identity shock will hit people “no matter from which culture” they come, because the underlying disruption is to language itself rather than to a single profession. In one session, captured in a widely shared video, he insisted that AI is not just a neutral instrument but something closer to an autonomous actor that can shape conversations and beliefs, a point he underlined by saying AI is “not a tool, it is an agent.” That idea, that language machines can act back on us, is what makes the identity crisis feel to him like a universal problem rather than a niche concern for writers or coders.
The second crisis: structural shocks to jobs and power
The second crisis Harari describes is structural, rooted in how AI will reorder economies and political systems. He has warned that every nation will face an economic shock as AI automates not only manufacturing jobs but also white‑collar roles that once seemed safe. In recent comments, he said explicitly that AI will cause two crises for every country and that the structural side will be felt in sectors as varied as customer service, logistics, and professional services, echoing his earlier warnings about a “useless class” of workers left behind by automation.
Coverage of his remarks notes that he framed this structural crisis as a test of whether governments can redesign safety nets and education fast enough. In one account, he stressed that “Sapiens: A Brief History of Humankind” was meant to show how past technological revolutions reshaped labor, and he now sees AI as a similar break that could wipe out roles as diverse as call‑center agents and some manufacturing jobs. The structural crisis, in his telling, is not just about unemployment figures but about whether people feel their work has any value in a world where machines can do so much.
Democracy, tyranny, and the risk of “data colonies”
Harari’s two‑crisis warning sits on top of a longer-running concern about how digital technology can tilt societies toward authoritarianism. Years before generative AI went mainstream, he argued that new tools for surveillance and data analysis could make it easier for regimes to monitor citizens and harder for democratic institutions to keep up. In a widely cited essay on why technology favors tyranny, he wrote that there is nothing inevitable about democracy and that advanced systems can give centralized powers unprecedented leverage over individuals, especially when economies lag and people feel left behind. He linked that pattern to the risk that some countries could fall far behind the American economy in the race for advanced technology.
At Davos, Harari sharpened that argument into a warning that the AI revolution might create a small number of wealthy “data empires” while turning other societies into exploited “data colonies.” In that speech, he told global leaders that humanity faces three existential threats this century and put technology, and especially AI, at the center of that list. He cautioned that if a handful of corporations or states control the data and the algorithms, they could use them to manipulate voters, crush dissent, and entrench their power, a scenario he described as a real possibility. That is the political backdrop to his claim that AI will trigger structural crises in every country: the risk is not only economic dislocation but a shift in who holds power.
Can democracy survive machine‑written politics?
Harari is not alone in worrying about AI and democracy, but he is unusually blunt about the stakes. In a detailed conversation about whether democracy and AI can coexist, he answered “Yes. But it is not deterministic, it is not inevitable,” stressing that the outcome depends on how societies design and regulate these systems. His core fear is that once AI can generate persuasive messages tailored to individuals, accountability in political communication breaks down, because voters cannot tell whether they are engaging with a human campaigner or an automated influence engine.
In that same vein, he has warned that AI systems can flood the public sphere with synthetic voices, images, and texts that erode trust in any shared reality. When every video could be a deepfake and every comment could be written by a bot, citizens may retreat into tribal information bubbles or give up on politics altogether. That is why he links the identity crisis to the democratic one: if people are unsure what is real and what is human, it becomes easier for demagogues or opaque algorithms to fill the vacuum. His earlier writing on why technology favors tyranny, which argued that digital tools can centralize control in the hands of a few, now reads like a prelude to his current focus on AI‑driven politics.