Morning Overview

Proposal calls for a “digital harm tax” as kids lean on ChatGPT

Nearly six in ten American teenagers have used ChatGPT, according to the most comprehensive national survey on the subject to date. That single data point, drawn from a December 2025 Pew Research Center report, has become the centerpiece of a growing policy argument: if generative AI is now woven into adolescent life, who should bear the cost when something goes wrong?

The idea of a “digital harm tax” on AI companies has surfaced in policy commentary during early 2026, loosely modeled on environmental levies like carbon pricing. Revenue would be earmarked for youth mental health services and digital literacy programs. No formal legislation exists yet, and no lawmaker has attached their name to a bill. The concept remains, as of May 2026, an advocacy-stage notion without a named author, a sponsoring organization on the public record, or a published white paper laying out its mechanics. It has appeared in secondary reporting and opinion pieces, but no primary source document has been identified. The proposal’s visibility owes less to any single champion than to a collision of hard usage data, federal regulatory action, and mounting parental anxiety about tools that can talk back to their kids.

What the data actually shows

Pew’s national survey found that 59% of U.S. teens ages 13 to 17 have used ChatGPT, making it the most widely adopted AI chatbot among the platforms measured. Older teens and boys reported the heaviest daily use. The numbers are striking not just for their size but for their speed: ChatGPT launched publicly in late 2022, meaning it reached majority teen penetration in roughly three years.

What the Pew data does not measure is harm. The survey tracks adoption rates with rigor, but it does not assess mental health outcomes, academic performance shifts, or emotional dependency. That distinction matters enormously for the tax debate, because the “harm” in “digital harm tax” remains defined more by concern than by clinical evidence.

Federal regulators are already asking questions

The U.S. Federal Trade Commission did not wait for a completed harm study before acting. In September 2025, the agency launched a formal inquiry into AI chatbots marketed as companions, issuing Section 6(b) orders to seven companies. The orders demanded detailed information about safety testing, age verification, and monitoring practices for products that simulate emotional relationships with users.

The FTC’s concern centered on potential harms to children and teens, ranging from emotional manipulation to developmental interference. But the inquiry specifically targeted companies offering AI companion products: firms like Character.AI that build chatbots designed to act as friends or romantic partners. OpenAI, the maker of ChatGPT, was not among the seven companies named. That creates an awkward gap in the policy conversation: the chatbot most used by teens is a general-purpose assistant, while the regulatory spotlight has focused on a different category of product.

“We are not going to wait until a child is harmed to act,” the commission said when the inquiry was announced, framing the orders as a precautionary step rather than a response to documented injury. (Lina Khan, to whom the remark has sometimes been attributed, had left the FTC chairmanship before the September 2025 orders were issued.) OpenAI has introduced age restrictions and parental controls for younger users, but critics argue those measures are largely self-policed and easy to circumvent. No public follow-up from the seven targeted companies has appeared in official records since the FTC orders were issued, leaving the inquiry’s findings an open question as of May 2026.

The carbon tax analogy and its limits

Advocates for a digital harm tax have pointed to the European Union’s Emissions Trading System as proof that pricing externalities can reshape corporate behavior. An April 2026 update from the European Commission confirmed that the EU’s cap-and-trade system has sustained a long-term downward trend in covered emissions. The logic is straightforward: force companies to pay for the damage they cause, and they will cause less of it.

The analogy is appealing but strained. Carbon dioxide can be metered at the smokestack. Digital harm to a teenager cannot. There is no agreed-upon unit of measurement, no equivalent of parts per million. Would the tax be assessed per user under 18? Per minute of interaction? Per reported incident? No published framework has proposed answers to these design questions, and without a measurement standard, the levy risks being either too blunt to change behavior or too complex to administer.

The missing pieces of the proposal

Beyond measurement, several fundamental questions remain unresolved. The first is scope. If the tax targets AI companion chatbots, it would miss ChatGPT entirely. If it covers all generative AI platforms, it would sweep in tools that millions of teens use productively for homework, language learning, and creative projects. No available proposal language specifies which companies would be covered.

The second is attribution. The risks associated with teen technology use arise from a sprawling ecosystem: device manufacturers, app stores, internet service providers, school policies, and parental oversight all play a role. Singling out AI developers requires a clear argument for why they bear unique responsibility. Proponents point to generative AI’s capacity for highly personalized responses and simulated empathy as qualitatively different from a search engine or social media feed. That argument is plausible but unquantified in peer-reviewed research.

The third is cost pass-through. A substantial levy on AI companies would likely be absorbed, at least partly, by paying customers or reflected in reduced investment in free-tier products. If the tax makes it harder for companies to offer free tools, it could narrow access for the lower-income students who benefit most from them. Supporters counter that earmarking revenue for digital literacy and youth mental health services would more than compensate. Critics say poorly designed taxes could restrict beneficial technology without measurably reducing risk.

It is also worth noting that the digital harm tax is not emerging in a legislative vacuum. The Kids Online Safety Act passed the U.S. Senate with broad bipartisan support in 2024, and multiple states have pursued their own youth online safety laws. A tax proposal would need to fit within, or distinguish itself from, this existing patchwork of regulatory efforts.

Why the “digital harm tax” debate is still mostly a question mark

The most honest summary of the current moment is this: teen adoption of AI chatbots is a confirmed, large-scale reality. Federal regulators consider the trend serious enough to investigate. And the policy response is still in its earliest, most speculative phase.

The digital harm tax, as it stands in May 2026, is an advocacy concept, not pending legislation. It has no named sponsor, no rate structure, and no measurement framework. That does not make it irrelevant. Policy ideas often circulate for years before gaining institutional backing, and the combination of 59% teen adoption and active FTC scrutiny creates fertile ground for proposals that might once have seemed far-fetched. The question is whether the debate will ultimately be shaped by evidence or by analogy.


*This article was researched with the help of AI, with human editors creating the final content.