Morning Overview

Australia and Europe tighten rules limiting kids’ social media use

When Australia’s social media age law took effect in late 2025, platforms like TikTok, Instagram, and Snapchat faced a hard legal deadline: prove you can keep children under 16 off your services, or face regulatory consequences. Now, as enforcement ramps up in the first half of 2026, the European Union is advancing its own parallel crackdown, layering new guidelines, enforcement signals, and a purpose-built age-verification app on top of the Digital Services Act. Together, these efforts represent the most aggressive government interventions yet aimed at restricting how young people access social media, and they are forcing the tech industry to navigate conflicting compliance demands across two continents.

Australia’s under-16 ban moves from law to enforcement

Australia’s Online Safety Amendment (Social Media Minimum Age) Act 2024 received Royal Assent on 10 December 2024. The law defines an “age-restricted user” as any Australian child under 16 and requires providers of “age-restricted social media platforms” to take “reasonable steps” to prevent those children from holding accounts. It amends the existing Online Safety Act 2021, replacing the loose, self-policed age gates that platforms had relied on for years with a nationally enforceable standard.

The Australian government has publicly identified Snapchat, TikTok, Facebook, Instagram, and X as examples of services in scope. Platforms were given 12 months from Royal Assent to develop and deploy compliance measures, a window that closed in December 2025. Australia’s eSafety Commissioner is now overseeing implementation and has signaled that regulatory guidance, new technical tools, and potential enforcement actions are on the table. In its public communications, the eSafety Commissioner’s office has stated that its approach involves “collaboration among government, industry, and young people,” framing the rollout as a shared responsibility rather than a top-down mandate.

The law’s explanatory materials lay out the policy rationale, citing concerns about children’s exposure to harmful content, grooming risks, and what the documents describe as the cumulative toll of algorithmically driven feeds. Notably, the government chose not to mandate a single age-verification technology. Instead, the “reasonable steps” standard is designed to let platforms innovate, whether through government-issued ID checks, AI-based age estimation, parental consent flows, or some combination. The explanatory materials argue that “a flexible standard” will “allow innovation and adaptation as age-assurance tools evolve.” That flexibility, however, also means the practical meaning of compliance is still being defined.

As of spring 2026, the specific benchmarks platforms must meet, such as acceptable error rates for age checks or standards for parental consent, have not been published in binding form. That gap leaves platforms, parents, and child-safety advocates watching closely for the first enforcement signals.

The EU builds a layered defense under the Digital Services Act

The European Union has taken a different path. Rather than setting a single age cutoff for social media access, EU regulators are layering obligations on platforms through the Digital Services Act, which already prohibits showing minors advertising based on profiling of their personal data.

The European Commission has adopted formal guidelines on protecting minors under Article 28(1) of the DSA, published in the Official Journal, giving them legal weight as an enforcement reference. These guidelines spell out expectations around risk mitigation, default privacy settings, and safety-by-design principles that platforms must follow when serving younger users.

A central piece of the EU’s strategy is a privacy-preserving “tokenised” age-assurance model. Under this approach, a trusted intermediary verifies a user’s age using passport or national ID data and issues a cryptographic token. Platforms can check the token to confirm a user’s age bracket without ever seeing the underlying identity documents. The Commission has announced that a European age-verification app built on this model is ready for deployment. According to the Commission, the app passes along “only a confirmation of age or age band” rather than full identity details. No binding date has been set, however, for when member states or platforms must integrate it.
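
To make the mechanics concrete, here is a minimal sketch of how such a token exchange could work, written in Python using the open-source cryptography package for Ed25519 signatures. The payload fields, token format, and function names are illustrative assumptions; the Commission has not published a technical specification for its app.

```python
# Illustrative sketch of a tokenised age-assurance exchange. This is NOT
# the Commission's actual protocol; payload fields, token format, and all
# names here are assumptions for demonstration only.
# Requires the third-party package: pip install cryptography
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- Trusted intermediary (the party that sees the passport or ID) ---
issuer_key = Ed25519PrivateKey.generate()
issuer_public_key = issuer_key.public_key()  # shared with platforms

def issue_age_token(age_band: str) -> bytes:
    """After checking an ID document, sign a payload that carries only
    the age band and a timestamp -- no name or document details."""
    payload = json.dumps({"age_band": age_band, "iat": int(time.time())}).encode()
    return payload + b"." + issuer_key.sign(payload).hex().encode()

# --- Platform side (never sees the underlying identity documents) ---
def check_age_token(token: bytes) -> str | None:
    """Return the attested age band if the issuer's signature checks out."""
    payload, _, sig_hex = token.rpartition(b".")
    try:
        issuer_public_key.verify(bytes.fromhex(sig_hex.decode()), payload)
    except InvalidSignature:
        return None  # tampered or forged token
    return json.loads(payload)["age_band"]

token = issue_age_token("18+")
print(check_age_token(token))  # -> "18+"
```

A production system would add token expiry, binding to a specific platform, and replay protection, and would likely build on the EU digital identity wallet standards, but the core privacy property is the one the Commission describes: the platform learns an age band and nothing else.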

The European Board for Digital Services and the Commission have established a dedicated workstream focused on minors’ protection, and officials have indicated that investigatory and enforcement steps will follow for platforms that fall short. Meanwhile, the European Parliament’s Internal Market and Consumer Protection (IMCO) committee has added political pressure by adopting report A10-0213/2025, which calls for even stricter measures, including discussion of an EU-wide “digital minimum age.” That report details concerns about screen-time harms, attention-capturing design patterns, and the role of algorithms in amplifying risky content. It calls for “evidence-based guidelines, expert panels, and collaboration with the World Health Organization” on assessing long-term impacts of social media use on children and adolescents.

Two models, one shared problem

A structural tension runs through both regimes. Australia drew a bright line: under 16, no account, full stop. The EU has so far avoided a single binding age threshold, opting instead for a web of platform obligations around advertising, design, and age assurance. The IMCO report’s push for a digital minimum age signals that some European lawmakers want to move closer to Australia’s approach, but that debate has not yet produced enacted legislation.

Both jurisdictions also face the same practical challenge: making age gates actually work. In Australia, the question is what the eSafety Commissioner will accept as “reasonable steps.” A strict interpretation could require high-confidence age checks for every new and existing user, introducing friction for adults and raising privacy concerns. A more flexible reading might focus enforcement on the features most likely to harm minors, such as direct messaging with strangers or algorithmic recommendations.

In Europe, fragmented implementation is the risk. National regulators may move at different speeds, and platforms could adopt varying age-assurance methods that technically meet DSA standards but produce inconsistent user experiences across borders. Until the Commission brings enforcement actions tied specifically to minors’ protections, and until those cases produce public penalties or compliance orders, platforms are operating in a zone of legal uncertainty.

Then there is the question that neither government has fully answered: Do these measures actually protect kids? Neither Australia nor the EU has published independent longitudinal research confirming that age-gating alone reduces the mental health harms associated with social media use among young people. The specific studies underpinning these new laws are not detailed in the available public documents. Critics also point out that determined teenagers have long found ways around age restrictions, from borrowing a parent’s ID to using VPNs, and that pushing young users onto less regulated platforms could create new risks.

What comes next will be measured in data, not declarations

For all the legislative momentum, the strongest evidence so far is about what these laws say and what regulators intend, not about what happens when rules meet real users, real platforms, and real workarounds. Australia’s compliance deadline has passed, and the EU is beginning to enforce the DSA’s minors-protection obligations in earnest. The missing pieces are the ones that matter most to parents and policymakers alike: how many underage accounts are actually blocked or removed, how often age checks fail, what unintended consequences emerge, and whether young people end up safer or simply displaced to corners of the internet with even less oversight.

No regulator on either continent has published test results for the age-assurance tools being promoted, including error rates, demographic biases, or rates of circumvention. There is also no consolidated public reporting on whether these interventions are changing patterns of social media use among minors. As Australia’s eSafety Commissioner begins acting on its mandate and the EU moves from guidelines to enforcement cases, the next phase of this global experiment will be defined not by the ambitions written into law but by the compliance data, enforcement outcomes, and real-world impacts that follow.


*This article was researched with the help of AI, with human editors creating the final content.