Morning Overview

Opinion: Big Tech design hooks kids and leaves them open to harm

New Mexico Attorney General Raúl Torrez filed a lawsuit against Meta Platforms and CEO Mark Zuckerberg, alleging the company designed Instagram and Facebook in ways that expose children to sexual abuse and human trafficking. The suit, framed as a consumer-protection action, argues that Meta’s platform architecture deliberately hooks young users through addictive features while failing to shield them from predatory contact. It is one of a growing number of state-level legal actions built on a common theory: Big Tech’s engagement-driven design does not just keep kids scrolling; it makes them vulnerable.

How Engagement Loops Create Exposure

The central tension in every lawsuit and federal investigation targeting social media companies is not whether children use these platforms. It is whether the platforms are built to keep children using them in ways that increase risk. The New Mexico complaint, filed under state consumer-protection and child-safety statutes, connects Meta’s algorithmic recommendation systems to the facilitation of sexual exploitation of minors. The state’s theory is that features designed to maximize time on the app, such as infinite scroll, push notifications, and content recommendations tuned to emotional engagement, also funnel young users toward harmful interactions with adults.

That theory did not originate in a courtroom. A November 2022 petition for FTC rulemaking argued that specific design features “serve the interests of platforms and advertisers, not children,” and called on regulators to prohibit unfair design practices that transform minors’ online experiences into extraction opportunities. Europe’s Better Internet for Kids initiative has separately cataloged persuasive techniques that exploit young users’ developmental vulnerabilities, often collecting more personal data than necessary in the process. The pattern across jurisdictions is consistent: features optimized for adult engagement metrics are applied to minors without meaningful guardrails.

In practice, engagement loops work by rewarding users for staying on the platform and returning frequently. For adolescents, whose brains are still developing impulse control and risk assessment, the combination of variable rewards, social comparison, and constant notifications can be especially powerful. When these loops intersect with recommendation systems that prioritize content likely to provoke a reaction, they can amplify not only entertaining material but also self-harm content, sexualized imagery, and unsolicited contact from adults who understand how to game the system.
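To make that mechanism concrete, here is a minimal, purely illustrative sketch in Python. It is not Meta’s code or any real ranking system; the item attributes, weights, and function names are hypothetical. It shows only the structural point critics make: a scorer that maximizes predicted engagement, with no penalty for harm, will rank reaction-provoking content above safer content, while adding a harm penalty, the kind of friction described below, changes the ordering.

```python
# Hypothetical illustration only: not any company's actual ranking code.
# Demonstrates how an engagement-only objective surfaces provocative items,
# while a safety-adjusted objective does not.

from dataclasses import dataclass


@dataclass
class Item:
    name: str
    predicted_reactions: float   # estimated chance the user reacts (like, comment, share)
    predicted_watch_time: float  # estimated attention, normalized 0-1
    harm_risk: float             # estimated risk the content is harmful, 0-1


def engagement_score(item: Item) -> float:
    # Pure engagement objective: more reactions and more time on app score higher.
    # Note that harm_risk is never consulted.
    return 0.6 * item.predicted_reactions + 0.4 * item.predicted_watch_time


def safety_adjusted_score(item: Item, harm_penalty: float = 2.0) -> float:
    # The same objective minus an explicit harm penalty: the kind of "friction"
    # that, by design, lowers raw engagement metrics.
    return engagement_score(item) - harm_penalty * item.harm_risk


feed = [
    Item("friend's vacation photos", predicted_reactions=0.2, predicted_watch_time=0.3, harm_risk=0.0),
    Item("provocative stranger contact", predicted_reactions=0.9, predicted_watch_time=0.8, harm_risk=0.7),
]

print(sorted(feed, key=engagement_score, reverse=True)[0].name)       # provocative item ranks first
print(sorted(feed, key=safety_adjusted_score, reverse=True)[0].name)  # safer item ranks first
```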

Critics argue that this is not an accidental byproduct of innovation but a foreseeable consequence of business models that prize attention above all else. Under such a model, any friction that slows a user down, whether it is a safety warning, a stricter default on private messaging, or robust age verification, risks reducing engagement and, with it, ad revenue. The New Mexico lawsuit seeks to translate that critique into legal liability, contending that Meta misrepresented the safety of its products to families while embedding risks into the core of its services.

What Federal Investigators Found

The FTC’s 6(b) study, titled “A Look Behind the Screens,” examined how major social media and video streaming companies monetize user data, including data belonging to children and teens. The resulting staff report found that large platforms had engaged in what the agency described as vast surveillance with inadequate safeguards for young users. The report documented privacy and safety gaps affecting children and tied those gaps directly to business models built on surveillance-based advertising and engagement optimization.

The FTC study matters because it moves the conversation beyond individual bad actors or isolated incidents. It describes a structural problem: platforms that profit by keeping users engaged have financial incentives to resist friction, including the friction of age verification, parental controls, or content moderation that might reduce time spent on the app. When those users are adolescents, the trade-off between engagement revenue and child safety becomes stark. The agency’s full report supports claims that design and optimization practices are not neutral engineering choices but profit-driven decisions with measurable consequences for minors.

Among the issues highlighted were opaque data-collection practices, limited transparency around how recommendation algorithms operate, and inconsistent application of safety tools meant to protect younger users. The report underscored that even when companies offer parental controls or teen-specific settings, those measures often sit atop a system still fundamentally oriented toward maximizing engagement. As a result, safeguards can feel like add-ons rather than core design principles.

For regulators and courts, these findings provide a factual backbone for arguments that certain design choices are unfair or deceptive when applied to children. If a platform markets itself as safe for young people while quietly building profiles on them and steering them toward content that increases time on site, enforcement agencies may view that as a misalignment between public promises and internal practices.

Congressional Pressure and Corporate Resistance

Lawmakers have tried to force accountability through public hearings, with mixed results. In October 2021, the Senate Commerce Subcommittee heard testimony from Frances Haugen, a former Facebook employee who disclosed internal research showing the company understood its products harmed teen mental health. Haugen’s whistleblower account detailed how internal studies linked addictive engagement features to declines in adolescent well-being, and how the company chose growth over safety when the two conflicted.

That hearing did not produce binding legislation, but it set the stage for a January 2024 Senate Judiciary Committee session that summoned CEOs from five companies: Meta, X, TikTok, Snap, and Discord. The committee framed the hearing explicitly around the companies’ record on child safety. When some executives initially resisted appearing, the committee issued subpoenas to Discord CEO Jason Citron, Snap CEO Evan Spiegel, and X CEO Linda Yaccarino to compel their attendance. The escalation from voluntary testimony to subpoena power signaled that congressional patience with self-regulation was running thin.

During these hearings, lawmakers pressed executives on whether they would support stronger federal privacy rules for minors, default safety settings, and clearer limits on data-driven targeting of children. Company leaders typically responded by highlighting existing safety teams and tools, while resisting specific mandates that could constrain product design or advertising models. The exchanges underscored a growing gap between political expectations and corporate willingness to alter underlying business incentives.

Meta, for its part, has pointed to new teen safety features and reported removing 635,000 accounts that sexualize children, according to reporting from the Associated Press. But those moves arrived against a backdrop of state lawsuits alleging addictive design and youth harms, raising a reasonable question: are these safety investments genuine reforms, or damage-control responses to legal and political pressure? The timing suggests the latter, and the scale of the problem, measured in hundreds of thousands of predatory accounts on a single company’s services, illustrates how reactive enforcement can struggle to keep pace with systemic risks.

From Individual Platforms to Systemic Reform

The convergence of state litigation, federal investigations, and congressional scrutiny is pushing the debate toward systemic solutions rather than platform-by-platform fixes. One emerging idea is to treat certain engagement-maximizing features as inherently high-risk when used on services likely to be accessed by minors. That could mean presumptive limits on autoplay, infinite scroll, or algorithmic amplification of content for underage users, unless companies can demonstrate robust protections.
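As a sketch of what presumptive limits could look like in practice, the following hypothetical configuration, written in Python for illustration, shows safer defaults applied automatically to accounts identified as belonging to minors. The field names are invented and do not describe any existing platform’s settings; the point is simply that defaults, not opt-in toggles, determine what most young users actually experience.

```python
# Hypothetical sketch of age-gated defaults; field names are invented for
# illustration and do not describe any real platform's configuration.

from dataclasses import dataclass


@dataclass(frozen=True)
class FeedSettings:
    autoplay: bool
    infinite_scroll: bool
    algorithmic_recommendations: bool
    dms_from_strangers: bool
    overnight_push_notifications: bool


ADULT_DEFAULTS = FeedSettings(
    autoplay=True,
    infinite_scroll=True,
    algorithmic_recommendations=True,
    dms_from_strangers=True,
    overnight_push_notifications=True,
)

MINOR_DEFAULTS = FeedSettings(
    autoplay=False,                      # presumptive limit on autoplay
    infinite_scroll=False,               # paginated feed instead of endless scroll
    algorithmic_recommendations=False,   # followed-only or chronological feed
    dms_from_strangers=False,            # no unsolicited contact from unknown adults
    overnight_push_notifications=False,
)


def defaults_for(age: int) -> FeedSettings:
    # Safer defaults apply unless the account holder is verified as an adult.
    return ADULT_DEFAULTS if age >= 18 else MINOR_DEFAULTS
```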

Another avenue is to strengthen transparency obligations. Regulators and lawmakers are increasingly interested in requiring companies to share more detailed information about how recommendation systems work, what data they collect on minors, and how they assess the impact of new features on child safety. The FTC’s findings on surveillance-based advertising make clear that without insight into these internal processes, it is difficult for parents, educators, or policymakers to evaluate corporate claims about safety.

At the same time, there is a growing recognition that families cannot shoulder the burden alone. While media literacy and parental controls are useful, they operate within environments whose basic rules are set by companies. When those rules reward attention above all else, even the most vigilant parents and digitally savvy teens face an uphill battle.

The New Mexico case against Meta will test whether courts are prepared to hold platforms legally accountable for design choices that allegedly expose children to severe harms. Whatever its outcome, the lawsuit reflects a broader shift: child safety online is no longer being treated as a matter of voluntary corporate responsibility, but as a question of legal duty and regulatory oversight. As more evidence emerges about how engagement-driven systems operate, the pressure to redesign those systems with children in mind is likely to intensify.


*This article was researched with the help of AI, with human editors creating the final content.*