Australia’s under-16 social media ban enters its active enforcement phase on December 10, 2025, placing the burden squarely on platforms to block young users or face fines reaching $33 million. The policy, first announced in 2024 and endorsed by National Cabinet, now shifts from legislative promise to operational reality. How regulators plan to hold tech companies accountable, and whether the technical tools exist to do so without compromising privacy, are the central tensions shaping this next chapter.
What is verified so far
The legal foundation is the Online Safety Amendment (Social Media Minimum Age) Act 2024, which requires platforms to take “reasonable steps” to prevent children under 16 from holding accounts. The measure sits within a broader online safety framework that can be traced through the parliamentary record, including the bill digest and background material available via ParlInfo. Enforcement falls to the eSafety Commissioner, whose regulatory guidance outlines how compliance is expected to work in practice.
The law carries no penalties for children or parents. Instead, the financial risk sits entirely with platforms, which face civil penalties for non-compliance, according to the Australian Government’s policy hub. Those penalties can reach $33 million per violation, as reported by the Associated Press, underscoring that the regime is designed to reshape corporate behaviour rather than punish individual users.
The accompanying legislative instrument, the Online Safety (Age-Restricted Social Media Platforms) Rules 2025, spells out which services are covered and which are not. The formal rules document excludes standalone messaging apps, email, voice and video calling, online games, product and service review platforms, technical support services, professional networking tools, education and health support services, and certain school or healthcare communications functions. That carve-out matters because it means services like WhatsApp, iMessage, and multiplayer games fall outside the ban’s reach, even though younger users spend significant time on them.
The policy rationale, laid out in an explanatory statement, targets harms tied to addictive and manipulative design features rather than internet access broadly. The document emphasises features such as infinite scroll, algorithmic amplification, and engagement-driven notifications as particular risks for younger users. It also notes that the eSafety Commissioner received a formal request from the Minister on June 12, 2025, and returned written advice on June 19, 2025, a turnaround of just one week that suggests the regulatory groundwork had been prepared well in advance.
On the privacy side, the Office of the Australian Information Commissioner holds oversight over how personal data is handled during age checks. The OAIC has confirmed that platforms must take reasonable steps to enforce the minimum age from December 10, and that its remit covers privacy provisions in Part 4A and the Privacy Act. This dual-regulator structure, with eSafety handling compliance and the OAIC watching data practices, creates a two-front accountability system for platforms.
The political commitment underpinning the scheme was set out in a 2024 statement from the Prime Minister, which announced National Cabinet’s backing for a minimum age and flagged a phased approach to implementation. That announcement, available via the Prime Minister’s media release, framed the reform as a child-protection measure responding to concerns from parents, educators, and mental health experts. It also signalled that enforcement would be stepped up over time, culminating in the 2025-26 period now beginning.
What remains uncertain
The biggest open question is whether age assurance technology can reliably distinguish a 15-year-old from a 16-year-old without collecting sensitive biometric or identity data. The Australian Government commissioned an Age Assurance Technology Trial that assessed the feasibility, privacy, usability, and security of various approaches, including age verification, age estimation, and age inference. The multi-part report examined methods ranging from document checks to AI-based facial analysis and behavioural profiling. But no public evidence confirms which specific technology or combination of technologies platforms will be required or encouraged to adopt. The gap between trial findings and operational deployment remains wide.
For platforms, that ambiguity has practical consequences. Age estimation tools that rely on facial images raise obvious privacy and security concerns. Document-based verification risks excluding users without ready access to official ID and may encourage the sharing of sensitive information with third-party vendors. Inference models built from browsing or engagement patterns can be opaque and difficult to audit. Without a clear, endorsed pathway, companies must weigh legal risk against reputational damage if they are seen to over-collect children’s data.
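To see why privacy advocates keep returning to data minimisation, consider a simplified sketch of how an age check could be structured so that a platform never handles the underlying document, image, or date of birth, only an over-16 outcome and a confidence score from an external checker. This is purely illustrative: the class names, threshold, and decision rule below are assumptions for the example, not anything prescribed by the legislation, the 2025 Rules, or the Age Assurance Technology Trial.

```python
from dataclasses import dataclass

# Hypothetical sketch only: names and thresholds are assumptions,
# not drawn from the Act, the Rules, or the trial report.

@dataclass
class AgeSignal:
    method: str        # e.g. "document_check", "facial_estimation", "inference"
    over_16: bool      # the only age fact the platform ever receives
    confidence: float  # checker-reported confidence, 0.0 to 1.0

def account_permitted(signals: list[AgeSignal], threshold: float = 0.9) -> bool:
    """Allow the account if any independent signal asserts the user is 16 or
    older with sufficient confidence. The platform stores no images,
    documents, or dates of birth, only this boolean outcome."""
    return any(s.over_16 and s.confidence >= threshold for s in signals)

# Example: facial estimation is uncertain, but a document check clears the bar.
signals = [
    AgeSignal("facial_estimation", over_16=True, confidence=0.72),
    AgeSignal("document_check", over_16=True, confidence=0.98),
]
print(account_permitted(signals))  # True
```

Even in this stripped-down form, the trade-offs described above are visible: a higher confidence threshold pushes users toward document checks, while a lower one leans on estimation and inference methods that are harder to audit.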
Equally unclear is how the eSafety Commissioner will conduct early enforcement. Reporting from the Associated Press indicates that eSafety will send information demands and notices to platforms and require monthly reporting on closed under-16 accounts for a period after enforcement begins. Yet no primary government source has published the specific enforcement notices, audit schedules, or platform-by-platform compliance timelines that would give the public a clear picture of what December 10 actually looks like in practice. It is not yet evident whether regulators will prioritise the largest global platforms first or pursue a broader sweep across smaller services.
There is also no official government data on how many Australian children currently hold social media accounts, making it difficult to gauge the scale of the task or measure success. Without a reliable baseline, monthly reporting on closed accounts will show activity but not necessarily progress. A high number of removals could indicate effective enforcement, or simply reflect that underage users are cycling through new accounts and workarounds.
Platforms themselves have not publicly detailed their compliance strategies through any official regulatory filing or legislative submission available in the primary record. There is no consolidated public register of implementation plans, testing results, or independent audits. Instead, what exists are broad assurances about child safety and privacy, along with general concerns captured in secondary reporting about the trade-offs involved in stricter age checks.
A further gap exists in the post-June 2025 advisory record. After the eSafety Commissioner delivered written advice to the Minister on June 19, 2025, no subsequent public update has addressed real-time enforcement challenges, technical readiness, or revised timelines. The most recent primary documentation reflects the consultation and rulemaking phase, not the operational phase now only days away. Stakeholders are therefore relying on a combination of legislative text, explanatory material, and media reports to infer how the first months of enforcement will unfold.
How to read the evidence
The strongest evidence supporting this enforcement push comes from primary legislative and regulatory documents. The amending bill, the 2025 Rules, the explanatory statement, and guidance from both the eSafety Commissioner and the OAIC form a clear legal chain. Together, they confirm the obligations, the penalty structure, the excluded services, and the division of regulatory responsibility. For anyone trying to understand what the law requires of platforms, these sources are the most reliable reference points.
Where the evidence thins is on implementation mechanics. The Prime Minister’s announcement established the political commitment and National Cabinet endorsement, but functions more as a policy origin marker than an operational blueprint. The Age Assurance Technology Trial report provides technical context and highlights the strengths and weaknesses of different approaches, yet it stops short of prescribing a mandatory solution. And the Associated Press reporting, while credible and detailed, is currently the only publicly cited source for specific enforcement practices such as monthly account-closure reporting.
For parents, educators, and young people, this means that the broad contours of the ban are settled, but the day-to-day experience is still coming into focus. Some platforms may roll out intrusive age checks; others may rely on softer measures like prompts and voluntary declarations until regulators push harder. Workarounds, such as using excluded messaging apps or shared family accounts, are likely to test the boundaries of the regime.
The next phase will therefore hinge less on new legislation and more on how existing powers are exercised. Transparent enforcement updates, clear technical standards for age assurance, and privacy-preserving solutions will determine whether the under-16 ban is seen as a workable safeguard or an overreach. Until those details are published, the most dependable guide remains the primary legal record, read alongside cautious but incomplete signals from regulators and independent reporting.
This article was researched with the help of AI, with human editors creating the final content.