Demis Hassabis, co-founder and CEO of Google DeepMind, has identified three specific areas where even the most advanced AI systems still fall short of human cognition. Delivered during a featured appearance at the India AI Impact Expo 2026, hosted by the Indian government’s IndiaAI platform, his remarks arrive at a moment when the gap between what AI can do and what it cannot is shaping research priorities and investment decisions worldwide.
Gold Medals and Blind Spots
DeepMind’s technical achievements have been striking. A recent paper authored by Google DeepMind and collaborators details how AlphaGeometry2 achieved gold-medalist performance on International Mathematical Olympiad geometry problems spanning 2000 to 2024. The system’s improvements came through upgrades in language model coverage, search algorithms, symbolic engine design, and synthetic data generation. By any competitive benchmark, the results are exceptional.
Yet Hassabis has consistently framed these wins as partial. Solving structured math problems, even at an elite level, is not the same as thinking the way humans do. The Olympiad results show that AI can master closed-domain reasoning when given well-defined rules and training data, but they also expose a pattern: the system’s strength depends on the boundaries of the problem being clearly drawn in advance. Outside those boundaries, the story changes.
Three Domains Where Brains Still Win
A peer-reviewed analysis published in the journal Neural Networks lays out the scientific case behind Hassabis’ argument. The paper, titled “Social impact and governance of AI and neurotechnologies,” identifies at least three domains in which present-day AI cannot compete with the human brain. One of those domains is energy efficiency, a gap that carries direct consequences for how AI scales in resource-constrained settings. The human brain runs on roughly 20 watts, about the power of a dim light bulb, while coordinating tens of billions of neurons in parallel. Modern AI training runs, by contrast, consume electricity at industrial scale, and inference costs continue to climb as models grow larger.
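As a rough sense of the scale involved, the back-of-envelope sketch below compares the brain’s commonly cited 20-watt draw with a hypothetical training cluster. The cluster size, per-chip power, and run length are illustrative assumptions, not reported figures for any specific model or lab.

```python
# Back-of-envelope comparison of brain vs. AI training energy use.
# All cluster figures are illustrative assumptions, not reported numbers
# for any particular model or data center.

BRAIN_POWER_W = 20        # commonly cited estimate for the human brain
CHIP_POWER_W = 700        # assumed draw per AI accelerator
NUM_CHIPS = 10_000        # assumed training-cluster size
TRAINING_DAYS = 90        # assumed length of a large training run

hours = 24 * TRAINING_DAYS
cluster_kwh = CHIP_POWER_W * NUM_CHIPS * hours / 1000   # kilowatt-hours
brain_kwh = BRAIN_POWER_W * hours / 1000                # one brain, same period

print(f"Cluster energy over {TRAINING_DAYS} days: {cluster_kwh:,.0f} kWh")
print(f"One brain over the same period:          {brain_kwh:,.1f} kWh")
print(f"Ratio: roughly {cluster_kwh / brain_kwh:,.0f} to 1")
```

Under these assumptions the ratio lands in the hundreds of thousands; the precise figure matters far less than the order of magnitude.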
The second gap involves adaptability in unfamiliar environments. Humans routinely transfer skills across radically different contexts with minimal exposure, a capacity machine learning researchers call few-shot generalization. Current AI systems, including those built by DeepMind, still require enormous volumes of curated data to handle new tasks, and they tend to fail unpredictably when conditions shift beyond their training distribution, a failure mode sketched in the toy example below. The third gap centers on what researchers describe as intuitive reasoning: the ability to make sound judgments under ambiguity, draw on embodied experience, and integrate emotional signals into decision-making. These are not fringe capabilities. They are central to how humans navigate daily life, from crossing a busy street to reading social cues in a negotiation.
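To make the distribution-shift point concrete, the toy sketch below fits a flexible model to data drawn from a narrow input range and then queries it well outside that range. It is a deliberately simple illustration of the failure mode, not a claim about any particular DeepMind system.

```python
import numpy as np

# Toy illustration of out-of-distribution failure: a flexible model fits
# its training range well, then degrades sharply outside it.

rng = np.random.default_rng(1)
x_train = rng.uniform(0, 1, 200)                      # training inputs in [0, 1]
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.05, 200)

coeffs = np.polyfit(x_train, y_train, deg=6)          # fit a degree-6 polynomial

def error_at(x):
    """Absolute gap between the fitted model and the true function at x."""
    return abs(np.polyval(coeffs, x) - np.sin(2 * np.pi * x))

print(f"Error inside the training range (x = 0.5):  {error_at(0.5):.3f}")
print(f"Error outside the training range (x = 2.0): {error_at(2.0):.3f}")
```

The fitted model looks accurate wherever it has seen data and becomes wildly wrong a short distance away, a small-scale version of the brittleness Hassabis describes.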
Why Energy Efficiency Shapes the AGI Timeline
Of the three gaps, energy efficiency may carry the most immediate practical weight. As governments and corporations push AI adoption into healthcare, agriculture, and education, the electricity demands of large-scale inference threaten to limit deployment in exactly the regions that stand to benefit most. India’s own AI strategy, visible in the scope of the IndiaAI program and the dedicated Impact Expo exhibitor space, depends on AI systems that can run affordably on local infrastructure. A model that matches human reasoning but requires a data center to do so is not a practical tool for a rural clinic or a district school.
This is where neuroscience-inspired hardware enters the conversation. Neuromorphic chip designs, which mimic the brain’s event-driven signaling rather than running continuous calculations, represent one plausible path toward closing the efficiency gap. Hassabis has repeatedly pointed to brain science as a roadmap for next-generation AI, and his appearance among expo speakers signals that DeepMind sees India’s growing AI infrastructure as a testing ground for these ideas. Whether neuromorphic approaches can deliver meaningful gains within a five-year window is an open question, but the direction of research spending suggests major labs are betting on it.
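To make the event-driven idea concrete, the sketch below contrasts a dense update, which touches every connection on every step, with an event-driven update that only propagates the small fraction of inputs that cross a firing threshold. The sizes and threshold are arbitrary, and the code is a toy rendering of the principle rather than a model of any real neuromorphic chip.

```python
import numpy as np

# Toy contrast between dense and event-driven ("spiking") computation.
# Only inputs that cross a firing threshold generate downstream work.

rng = np.random.default_rng(0)
n_inputs, n_outputs = 1_000, 100
weights = rng.normal(size=(n_inputs, n_outputs))
activity = rng.random(n_inputs)            # input activations in [0, 1)
threshold = 0.95                           # only ~5% of inputs "fire"

# Dense update: every input contributes on every step, however small.
dense_out = activity @ weights             # n_inputs * n_outputs multiply-adds

# Event-driven update: treat each spike as a unit-strength event and
# accumulate only the weights of inputs that fired.
spikes = np.nonzero(activity > threshold)[0]
event_out = weights[spikes].sum(axis=0)    # one add per (spike, output) pair

print(f"Dense multiply-adds:  {n_inputs * n_outputs:,}")
print(f"Event-driven adds:    {len(spikes) * n_outputs:,}")
```

Real neuromorphic hardware involves far more than this, including spike timing and local learning rules, but the accounting above captures why sparse, event-driven signaling can cut the arithmetic, and therefore the energy, by a large factor.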
Davos Context and the Stakes of the Claim
Hassabis set the stage for these remarks weeks earlier. In a Bloomberg interview at Davos during the World Economic Forum on January 20, 2026, he described an AI shift bigger than the Industrial Age. That framing is worth scrutinizing. If the shift is that large, the gaps he identified are not minor technical footnotes. They are structural limits that will determine whether AI augments human capability broadly or remains a powerful but narrow tool concentrated in well-resourced settings.
The tension between DeepMind’s competitive achievements and Hassabis’ candid assessment of AI’s shortcomings also challenges a common narrative in the industry. Many AI companies emphasize benchmark performance as proof of progress toward general intelligence. Hassabis is effectively arguing the opposite: that benchmark dominance, even at the gold-medal level, can obscure how far the technology remains from genuine cognitive flexibility. That distinction matters for policymakers deciding how to regulate AI, for investors pricing in AGI timelines, and for researchers choosing where to focus their efforts.
What the Gaps Mean for AI Research Priorities
Hassabis’ framing carries an implicit critique of the current scaling paradigm. The dominant approach in AI development over the past several years has been to make models bigger, feed them more data, and increase compute budgets. AlphaGeometry2’s methodology, as described in the team’s arXiv preprint, leans heavily on this pattern: larger language backbones, more powerful search, and vast synthetic problem sets. That recipe works impressively in constrained domains, but it does little to address the three gaps Hassabis highlighted. A system that requires orders of magnitude more energy than the brain, struggles outside carefully curated benchmarks, and cannot reason intuitively under uncertainty is still far from human-like intelligence, no matter how many Olympiad problems it solves.
Redirecting research priorities toward these gaps would mean rebalancing incentives across the AI ecosystem. Funding agencies and corporate labs could support more work at the intersection of neuroscience and machine learning, including biologically inspired representations and learning rules that promise better sample efficiency. Infrastructure providers might experiment with hardware-software co-design, pairing neuromorphic accelerators with models tuned for sparse, event-driven computation. Even the tools that support AI research, such as the preprint servers scientists use to circulate results within days, reflect an ecosystem optimized for rapid scaling rather than deliberate reflection on long-term constraints.
From Global Benchmarks to Local Impact
The India AI Impact Expo offers a concrete lens on how these abstract gaps translate into policy and deployment choices. Organizers have positioned the expo as a showcase for applications that can deliver social and economic value across sectors, from precision agriculture to digital public services. Exhibiting companies, registered through the dedicated expo portal, are expected to demonstrate not only technical sophistication but also feasibility in the Indian context, where intermittent connectivity, cost sensitivity, and linguistic diversity all shape real-world performance. In that environment, the energy and adaptability limitations of current AI systems are not academic; they determine whether pilots can move beyond flagship projects in major cities.
Hassabis’ emphasis on brain-inspired efficiency and flexible reasoning aligns with this local focus. A model that can run on modest hardware while handling noisy, multilingual inputs would be far more transformative for India than another leaderboard-topping system confined to cloud data centers. At the same time, building such models requires sustained investment in basic research and infrastructure. Community platforms that underpin scientific exchange, such as the open repositories where preprints are shared, help ensure that advances in neuroscience, cognitive science, and machine learning circulate quickly enough for labs worldwide, including those in India, to build on one another’s work.
For now, Hassabis’ three gaps function as both a warning and a roadmap. They underscore that headline-grabbing breakthroughs do not automatically translate into systems that match the versatility, frugality, and intuitive depth of the human brain. But they also point toward specific lines of inquiry (energy-efficient architectures, robust generalization, and richer models of human reasoning) that could narrow the distance. As governments, companies, and research institutions converge at events like the India AI Impact Expo, the question is less whether AI will be transformative and more whether its evolution will be guided by these constraints or continue to race ahead, gold medals in hand, while the most human-like capabilities remain out of reach.
*This article was researched with the help of AI, with human editors creating the final content.