AI shocks Davos 2026 with ‘not really human’ tech warnings

Artificial intelligence arrived at Davos this year not as a shiny gadget but as a kind of alien presence, forcing political leaders and executives to confront technology that thinks in ways they barely understand. Instead of reassuring talk about productivity and innovation, some of the most influential voices on stage warned that the systems now spreading through offices, media and government are “not really human” and may never be.

Across the World Economic Forum’s packed sessions, I watched a new consensus harden: AI is no longer just a tool in human hands; it is becoming an active agent in global affairs, from the labour market to information flows and even to how people understand themselves.

From tool to agent: Harari’s stark warning

The sharpest jolt came from historian and philosopher Yuval Noah Harari, who used his Davos platform to argue that AI has already crossed a psychological line. In his view, these systems are no longer best described as instruments that obediently extend human will, but as autonomous agents that can set goals, respond strategically and, over time, “rule humans” in subtle but pervasive ways. Harari framed the shift in simple terms: anything “made of words,” from news articles and legal contracts to religious texts and personal messages, is now territory that AI can occupy, manipulate and ultimately “take over,” because language is its native terrain rather than ours.

That argument cuts against the comforting narrative that AI is just another spreadsheet or search engine. By insisting that AI is “not a tool” but an agent, Harari is effectively telling policymakers that they are negotiating with a new class of actor, one that operates through code and data rather than armies or ballots. His warning that “everything made of words will be taken over” lands differently in a year when generative systems can already draft legislation, simulate diplomatic cables and flood social networks with synthetic speech. It is no coincidence that he delivered this message in Davos, a place where words, in the form of communiqués and panel talking points, have long been the currency of power.

‘Not really human’: Bengio’s reality check on machine minds

If Harari supplied the philosophical shock, Yoshua Bengio delivered the technical reality check. The Canadian computer scientist, one of the so-called “godfathers of AI,” pushed back against the growing tendency to treat advanced chatbots and copilots as digital colleagues or friends. His core message was blunt: these systems are “not really human,” no matter how fluent or empathetic they may sound, and treating them as if they were people is a category error that could prove dangerous. Bengio stressed that the architectures behind today’s models are statistical engines trained to predict patterns, not conscious beings with lived experience or moral intuition.

That distinction matters because people are already forming attachments and making high-stakes decisions based on AI outputs, from medical triage to hiring. Bengio warned that as models become more capable, humans could lose control of complex systems that behave in ways their creators did not anticipate, especially if they are deployed at scale without robust oversight. His call for stronger safeguards, including clear “off switches” and governance mechanisms, reflects a growing fear that the line between assistance and autonomy is blurring faster than regulators can respond. The reminder that these systems are “not really human” is less a reassurance than a demand to design around their alien logic, a point he underscored in his Davos remarks.

Limited intelligence, unlimited impact

Even as the rhetoric around AI veered into existential territory, some technologists at Davos tried to recalibrate expectations by stressing how primitive current systems still are. One leading researcher described what today’s models deliver as “a limited form of intelligence,” arguing that genuine breakthroughs will require new architectures rather than just more data and compute. In that framing, the chatbots and copilots now embedded in office suites and smartphones are impressive pattern matchers, but they lack the flexible reasoning and grounded understanding that humans display in everyday life.

Yet the same voices acknowledged that this “limited” intelligence is already having an unlimited impact on institutions and markets. Because these systems operate at scale and speed, even narrow capabilities can reshape industries, from automated customer service to algorithmic trading and content generation. The gap between what AI can actually do and what people believe it can do is becoming a strategic variable in its own right, influencing investment flows, corporate restructuring and public trust. That tension was evident in Davos discussions that paired technical caveats with sweeping claims about transformation, including sessions that highlighted how governance challenges are now driven as much by deployment choices as by technical breakthroughs.

AI paradoxes: human identity in the age of machine language

Running through the Davos conversations was a deeper unease about what it means to be human in a world saturated with machine-generated language. On one hand, AI systems are trained on human text and speech, mirroring our stories, biases and aspirations. On the other, their outputs can feel strangely hollow, a reflection without a subject. This creates a paradox: the more convincingly AI imitates human expression, the harder it becomes to tell where human agency ends and automated synthesis begins. That ambiguity is already visible in classrooms, newsrooms and social feeds, where it is increasingly difficult to know whether a persuasive argument or heartfelt confession was written by a person or a model.

Some Davos speakers framed this as a set of “AI paradoxes” that will define 2026 and beyond. People want AI “to be like us,” capable of empathy and creativity, yet they also insist it remain safely under human control. Societies celebrate efficiency gains while worrying about the erosion of skills and attention. And while leaders call for more human-centric technology, they are also racing to automate as much as possible. These contradictions are not abstract. They shape how companies design products, how regulators draft rules and how citizens interpret the information that reaches them. The World Economic Forum’s own programming highlighted these tensions, pointing to a future in which human identity is negotiated in constant dialogue with systems that are built to sound like us but are, at their core, something very different.

Jobs, power and the scramble for safeguards

Behind the philosophical debates, the most immediate shock from AI is hitting the labour market. After a year marked by AI-driven layoffs, influential leaders and top executives at Davos warned that workers should brace for a “tsunami” of disruption as automation spreads from routine office tasks into white-collar professions once thought safe. The concern is not just about individual job losses but about the speed and scale of change, as companies use generative tools to restructure entire departments, from marketing and legal to software engineering. That prospect has already prompted calls for large-scale retraining, stronger social safety nets and new forms of worker representation in technology decisions.

International financial leaders stressed that governments are not prepared for the shock that could follow if AI adoption accelerates without matching investment in skills and protections. They pointed to the risk of widening inequality between firms and countries that can harness AI productively and those that cannot, as well as between workers who can adapt and those who are left behind. The metaphor of a labour-market “tsunami” is telling: the wave is not optional, but its damage can be mitigated if societies build the right defences in time. That sense of urgency ran through discussions of fiscal policy, education and industrial strategy, including warnings that, after the first wave of AI layoffs, the world has only a narrow window to prepare for what comes next.
