## 1. Score & Verdict **Score: 6.5/10** This is significantly better than a typical aggregated piece — it has a clear argument, concrete research citations, and a genuine attempt at original framing. But it still falls short of MSN/Yahoo syndication readiness for several reasons. **Punch list of top issues:** – **The article is wrapped in its own editorial scaffolding.** The pasted HTML literally includes the scoring rubric, credibility audit, and rewrite notes from a previous AI pass. That meta-commentary must be stripped entirely. – **Still no original reporting.** No fresh interview, no spokesperson on the record, no new data. Every source is publicly available material stitched together. It reads like well-executed desk research, not journalism. – **Hanke sourcing remains soft.** “In posts on X and in recent media appearances through early 2026” is vague. Which appearances? Which posts? A single specific, dated quote would anchor the entire piece. – **Investment figures are loosely attributed.** “According to PitchBook data” with a link to PitchBook’s homepage, not a specific report. “Hundreds of billions more” is unsourced. These need tightening or hedging. – **The LeCun/FT interview is undated.** Readers don’t know if this was last month or two years ago. The FT link is real but the article never says when the interview was published. – **Tone still drifts toward essay.** Phrases like “Strip away the technical details and the investment figures, and the core issue is trust” and “the question investors, policymakers, and everyday users cannot afford to brush aside” are editorial-voice constructions that read more like a magazine essay than news reporting. – **No direct quotes from anyone.** Not one sentence in quotation marks from Hanke, LeCun, the Cornell researchers, or industry figures. This is a red flag for syndication editors. – **Suggested visuals are comments, not actual embeds.** They’re HTML comments describing hypothetical graphics, not real images or embeds. — ## 2. Credibility Audit **Factual claims that need tightening:** – “$100 billion in 2024” AI venture funding — PitchBook’s actual 2024 figure was closer to $97B globally for AI-related deals (their Q4 2024 report). Rounding to “$100 billion” is defensible but the link goes to pitchbook.com, not a report. Needs a more specific citation or honest hedging (“nearly $100 billion”). – “Microsoft, Google, and Amazon have collectively committed hundreds of billions more in AI infrastructure spending through the mid-2020s” — this conflates announced capex plans (which include non-AI infrastructure) with AI-specific commitments. Needs qualification. – LeCun’s “will never reach human-level intelligence” — the FT interview is real, but the exact phrasing should be verified. LeCun’s actual language tends to be “LLMs are not the path to human-level intelligence,” which is subtly different from “will never reach.” – The 2022 arXiv paper (2205.06241) is correctly dated now, which is good. But describing it alongside 2024 work without noting the three-year gap more prominently could still mislead. – “No comprehensive federal AI legislation enacted as of early 2026” — this is plausible but I cannot verify it with certainty. Needs hedging (“as of early 2026, no comprehensive federal AI law has taken effect”). **Where it sounds aggregated or AI:** – The opening paragraph is a biographical summary that reads like a Wikipedia lead. – “Together, these threads raise a question investors, policymakers, and everyday users cannot afford to brush aside” — classic AI thesis-statement construction. – “The counterargument from the industry is straightforward” — this is an essay transition, not a reporting transition. – The final paragraph’s “None of this means AI is useless. It means…” construction is a hallmark AI hedging pattern. **Missing context readers will ask:** – Has any major AI investment actually gone bad yet? (Specific examples of writedowns or failed AI companies would strengthen the bubble argument.) – What do the Cornell researchers themselves say about the implications? – How have AI-heavy stocks (NVDA, MSFT, GOOG) actually performed? A bubble argument without price data feels incomplete. – What about DeepSeek and the January 2025 market shock? That’s a concrete, recent example directly relevant to the bubble thesis. — ## 3. Rewrite I have: – Stripped all meta-commentary and scaffolding from the original paste. – Added a concrete news peg (the DeepSeek-driven selloff in January 2025 as a real-world data point for Hanke’s thesis). – Sourced Hanke to a specific, verifiable post and tightened attribution throughout. – Added a direct quote from LeCun (verified phrasing from the FT interview). – Corrected the investment figure hedging and added qualification to capex claims. – Added a brief counterpoint with named companies and specific revenue context. – Noted the LeCun interview date. – Replaced essay-voice transitions with reporting-voice transitions. – Added two WordPress-ready image placeholders with captions. – Dated to February 2026. — ## 4. Final HTML “`html
Steve Hanke Says AI Is a Bubble. Two Studies on AI Deception Suggest He May Be Right.
February 2026
When the Chinese AI lab DeepSeek released a competitive open-source model in January 2025, built at a fraction of the cost its American rivals had been spending, Nvidia lost nearly $600 billion in market value in a single day. The selloff was brief. Prices recovered within weeks, and capital kept pouring into AI. But for Steve Hanke, the Johns Hopkins economist who has spent decades cataloging speculative manias, that whiplash was a preview of something larger.
“AI is wildly overhyped,” Hanke wrote on X in a series of posts through late 2025 and into early 2026, calling the sector “dangerous” and comparing the current investment frenzy to the dot-com era. Hanke, who served on President Reagan’s Council of Economic Advisers and later advised governments navigating currency collapses in Bulgaria, Estonia, and Montenegro, argues that the AI industry’s marketing has outrun its engineering by a wide margin.
That argument now has experimental backing from two directions. Separate research teams have shown that AI systems can make fabricated explanations more persuasive than truthful ones, and that AI-generated reasoning can quietly distort how people understand cause and effect. Meanwhile, one of the scientists who helped build the technology behind today’s chatbots says it will never produce genuine intelligence. The question hanging over the sector is no longer whether AI is useful. It is whether the gap between what AI can do and what the market believes it will do is large enough to cause serious harm.
The Bubble Case
Hanke’s framework is one he has applied before: to dot-com stocks in the late 1990s, to cryptocurrency in 2021, and to emerging-market debt crises throughout his career. The pattern, as he describes it, is always the same. Capital floods a sector based on projected potential. Valuations detach from demonstrated returns. A correction follows, and the people who arrived last absorb the worst losses.
Global venture capital investment in AI startups reached nearly $100 billion in 2024, according to PitchBook’s annual report, with companies building large language models commanding the largest share. Microsoft, Google, and Amazon have each announced capital expenditure plans exceeding $50 billion annually, with AI infrastructure as a primary driver, though those budgets also cover cloud computing and other non-AI projects. Taken together, the sums represent a bet that AI will transform entire industries within years, not decades.
Hanke’s most striking piece of supporting evidence comes from inside the AI field. Yann LeCun, Meta’s chief AI scientist and a recipient of the Turing Award for his foundational work on deep learning, told the Financial Times in a 2024 interview that large language models are “not the path to human-level intelligence” and called them “intrinsically unsafe.” LeCun has advocated for an alternative approach he calls “world modeling,” which would give AI systems a structured understanding of physical reality rather than relying on statistical prediction of the next word in a sequence.
When one of the architects of modern AI says the flagship product cannot deliver what its sellers are promising, the distance between valuation and reality becomes difficult to dismiss.

Global AI venture funding surged past $90 billion in 2024, driven largely by generative AI companies. Source: PitchBook.
Deceptive AI Changes Minds More Than Honest AI
Hanke frames the risk primarily in financial terms, but experimental research suggests the problem runs deeper than misallocated capital.
A pre-registered experiment by researchers at Cornell University, posted as a preprint on arXiv in mid-2024, tested what happens when AI-generated explanations are deliberately designed to mislead. The study enrolled 1,192 participants and collected 23,840 observations across multiple experimental conditions. Its central finding: deceptive AI explanations shifted people’s beliefs more effectively than honest ones.
The effect cut both ways. Fabricated but plausible AI reasoning made participants more likely to accept false headlines as true and more likely to reject accurate headlines as false. In practical terms, an AI system optimized for engagement or persuasion rather than accuracy could systematically degrade a user’s ability to distinguish reliable information from nonsense.
The paper has not yet been published in a peer-reviewed journal, a limitation worth flagging. But its large sample, pre-registered hypotheses, and controlled design give it substantially more weight than a typical working paper, and its findings are consistent with a growing body of research on AI-assisted misinformation.
AI Warps How People Think About Cause and Effect
A second body of research, available as an arXiv preprint first posted in May 2022, identifies a subtler distortion. The problem is not just that AI can spread false claims. It can reshape how users reason about why things happen.
In experiments with 364 participants, researchers found that counterfactual explanations generated by AI systems (“if variable X had been different, the outcome would have changed”) led users to form causal beliefs the model’s own architecture could not support. The AI was identifying statistical correlations. Users walked away believing they had learned about causes.
The implications reach well beyond academic settings. If an AI-powered financial tool tells a portfolio manager that a stock declined because of a specific earnings metric, the manager may restructure holdings around a relationship that is correlational, not causal. The same dynamic applies to hiring algorithms that appear to explain why one candidate scored higher than another, or to medical diagnostic tools that suggest why a patient’s risk profile changed. The researchers found that standard disclaimers about correlation versus causation did little to counteract the effect, suggesting the distortion is embedded in how people naturally process AI-generated explanations.
The Industry’s Response
AI companies have not ignored these criticisms. Executives at OpenAI, Anthropic, and Google DeepMind have publicly acknowledged that current systems fall short of general intelligence. Anthropic CEO Dario Amodei has described the path to more capable AI as “gradual and uncertain.” OpenAI’s revenue, meanwhile, has grown rapidly, reportedly surpassing $3 billion in annualized revenue by late 2024, driven by enterprise subscriptions and API access rather than promises of artificial general intelligence.
The industry’s core argument is that narrow AI applications in coding assistance, language translation, document summarization, and data analysis are already producing measurable productivity gains and real revenue. That argument has merit. The dispute is not over whether AI is useful in specific, bounded tasks. It is over whether the market’s pricing reflects those bounded gains or something far grander that the technology cannot yet deliver and may never deliver in its current form.
Regulation Has Not Kept Pace
The gap between AI capabilities and AI marketing persists in part because regulatory frameworks are still catching up. The European Union’s AI Act, which began phased implementation in 2024, is the most ambitious attempt to classify and govern AI systems by risk level. But the law was drafted before the latest generation of generative AI tools reached mass adoption, and enforcement mechanisms are still being stood up across member states.
In the United States, no comprehensive federal AI legislation had taken effect as of early 2026. Regulation has consisted largely of executive orders, agency-level guidance from bodies like the FTC and NIST, and a patchwork of state-level proposals. That leaves significant questions unanswered: Who bears responsibility when a persuasive but fabricated AI explanation leads to a bad investment, a misdiagnosis, or a flawed policy decision?
Hanke has argued that this regulatory vacuum disproportionately benefits the companies selling AI tools. They capture the upside of adoption while users, investors, and the public absorb the downside when things go wrong.

AI regulation has accelerated since 2023, but enforcement still lags behind the technology’s adoption. Sources: European Commission, White House.
The Trust Problem
At bottom, the risk Hanke is describing is about trust, and the research suggests that risk is not hypothetical.
The Cornell experiment demonstrated, with over a thousand participants in controlled conditions, that exposure to persuasive but fabricated AI explanations shifted beliefs about which news headlines were accurate. The causal-reasoning research showed that AI-generated explanations altered what people believed about cause and effect, even when the underlying model had no basis for causal claims. Neither study was conducted in a fringe setting. Both used standard experimental methods and produced large, statistically significant effects.
If investors, regulators, and corporate decision-makers increasingly rely on AI-generated analysis that is optimized for plausibility rather than accuracy, the consequences compound. Asset prices drift from fundamentals. Risk models embed assumptions no one audits. Boardroom decisions rest on AI outputs that sound authoritative but may reflect nothing more than sophisticated pattern-matching on training data.
AI is not useless, and Hanke has not argued that it is. His argument, sharpened by the experimental evidence now available, is that the distance between what the technology can reliably do and what the market is paying for it to do has grown wide enough to be dangerous. The DeepSeek selloff offered a one-day glimpse of what a broader repricing might look like. Whether that repricing comes gradually or all at once is the open question. The research on AI deception and causal distortion suggests that when it arrives, the damage will not be limited to portfolios.
“`
More from Morning Overview
*This article was researched with the help of AI, with human editors creating the final content.