Morning Overview

US states push device age checks, but kids’ online risks stay unsolved

Utah and Texas have passed laws requiring app stores to verify users’ ages before downloads, joining a growing number of states betting that digital gatekeeping can protect minors online. Federal lawmakers are now drafting similar bills, and the FTC recently signaled it will ease enforcement pressure on companies that adopt age-checking tools. Yet the mental health toll of social media on young people, the core problem these measures claim to address, depends on design choices and algorithmic exposure patterns that age gates alone do not touch.

Utah and Texas Build the App Store Gate

Utah’s lawmakers have put the app store at the center of their strategy. Under S.B. 142, the App Store Accountability Act, platforms must ask users to self-select an age category and trigger verification when someone indicates they are under 16. The statute requires that any account for a minor be linked to a parent or guardian, whose consent is needed before a child can download apps that fall into restricted categories. Developers are also required to assign age ratings to their products, and app stores can face enforcement if they fail to check those ratings before allowing a download, effectively turning the store into a gatekeeper for youth access.

Texas adopted a nearly parallel model. Its own App Store Accountability Act, S.B. 2420, similarly creates age categories, places verification duties on app stores, and mandates parental consent and account tethering for minors. Texas goes further by imposing additional disclosure obligations on developers and setting out a penalty scheme that is scheduled to take effect on January 1, 2026. Together, Utah and Texas are testing how far states can go in using app distribution chokepoints to control what children can install on their phones—without directly regulating the design of the apps themselves.

Courts and Industry Push Back

The tech industry has begun to fight these mandates in court. In a challenge brought by an industry group, a federal judge blocked the Texas age-verification law on constitutional grounds, agreeing that its requirements could burden protected speech and impose intrusive obligations on platforms and users. The injunction clouds the future of the Texas statute even as its effective date approaches, and it signals that Utah’s nearly identical framework may face similar First Amendment and privacy challenges if it is fully implemented.

Beyond constitutional claims, critics point to the data-collection footprint that age checks require. To verify ages at scale, app stores may need to gather government IDs, biometric scans, or other sensitive identifiers from all users, not just children. Privacy advocates warn that such systems create new databases that could be vulnerable to breaches or misused for surveillance. For families, the result is a confusing landscape: laws promise stronger protections, but injunctions, appeals, and compliance delays mean that the practical safeguards can lag far behind the political headlines, leaving parents uncertain about what, if anything, has actually changed.

Federal Efforts Mirror the State Playbook

Members of Congress are now trying to nationalize the app store gatekeeping model. The proposed federal App Store Accountability Act, introduced as S. 1586 in the 119th Congress, would take the core elements of the Utah and Texas laws—age categories, verification duties, and parental consent—and apply them across the country. Supporters argue that a single federal standard would spare companies from navigating a patchwork of state rules, while critics worry it would lock in a narrow focus on age checks rather than broader safety reforms. Meanwhile, a bipartisan group of House lawmakers backed a companion measure, announcing a children’s online safety bill at a February 2026 press event that framed the effort as a landmark attempt to protect kids on social platforms.

Federal regulators are also nudging companies toward age checks. On February 25, 2026, the Federal Trade Commission issued a policy statement under COPPA declaring that it would not bring enforcement actions against firms that use age-verification technologies in good faith to determine users’ ages and protect children. The move effectively offers a safe harbor for companies willing to experiment with verification tools, signaling that Washington wants more age checks, not fewer. Yet the statement focuses on compliance incentives rather than outcomes, leaving open the question of whether the systems it encourages will actually reduce the harms that dominate public concern about youth and social media.

Age Gates Do Not Fix Algorithmic Harm

Public-health officials have emphasized that the biggest risks young people face online are tied to design, not just access. The U.S. Surgeon General’s advisory on social media compiles research showing that recommendation algorithms, endless scroll features, and constant notifications can fuel anxiety, depression, and sleep disruption in adolescents. The advisory highlights how exposure to self-harm content, unrealistic body standards, and cyberbullying clusters within certain feeds, and it stresses that the intensity and nature of engagement—how long teens stay on, what they see repeatedly, and how platforms nudge them to interact—are key drivers of harm.

Age-verification laws largely sidestep those issues. A gate at the app store might stop a 12-year-old from installing a social app, but it does nothing to change what a 15-year-old sees once they are inside. Recommendation systems will still learn from a teen’s clicks and watch time, potentially steering them toward more extreme or distressing material over time. The Surgeon General’s findings underscore that simply deciding who can enter a platform is not the same as governing what happens once they are there. Nonetheless, much of the legislative energy in Utah, Texas, and Washington is flowing toward identity checks at the point of download, rather than toward rules that would alter feed ranking, limit addictive features, or curb the amplification of harmful content aimed at minors.

A Patchwork That Leaves Families Guessing

The result of these overlapping efforts is a patchwork of rules that can be difficult for families to navigate. Parents in Utah and Texas may soon encounter new prompts when setting up phones or app store accounts for their children, while parents in other states might see different or no changes at all. Federal bills remain proposals, and the FTC’s safe harbor is a policy signal, not a binding design standard. At the same time, litigation like the Texas case shows that even enacted laws can be frozen or reshaped by courts, meaning that the protections parents hear about in news coverage may not exist in practice when their children open an app.

This uncertainty can obscure a more basic reality: age gates, even when fully implemented, are only one small piece of online safety. They do not replace conversations at home about what kids see online, nor do they guarantee that platforms will redesign their systems in ways that align with the Surgeon General’s warnings about mental health. As lawmakers continue to refine state and federal rules, the central policy question is whether future measures will move beyond verifying birthdays to address the underlying architectures of social media that shape what young people encounter and how it affects them.


This article was researched with the help of AI; human editors created the final content.