
Opinion: Court rulings raise legal risk for Meta and Google social apps

On March 25, 2026, a Los Angeles jury found Meta and Google liable for harming children through addictive social media features, ordering the two companies to pay a combined $6 million in damages. The verdict lands weeks after a separate New Mexico state jury reached its own finding that Meta violates state law by endangering children’s mental health and safety. Together, these rulings signal that courts are increasingly willing to hold tech platforms financially accountable for the design choices built into their most popular apps, and the legal exposure is growing fast.

Los Angeles Verdict Hits Both Companies at Once

The Los Angeles Superior Court case served as a bellwether trial, meaning its outcome will shape how hundreds of similar claims proceed. The jury held Meta and Google liable for negligence over Instagram and YouTube in a social media addiction and harm case brought on behalf of minors. Bellwether trials carry outsized influence because they test the strength of evidence and legal theories before a broader wave of litigation moves forward.

A separate report from Reuters described the verdict against both Meta and Google, noting that jurors concluded the companies’ products caused measurable harm to the young plaintiffs. The $6 million damages figure may look modest relative to the revenue these platforms generate, but the real weight of the ruling lies in its precedent. If this verdict holds through appeals, it establishes a template that plaintiffs in other jurisdictions can follow, and it validates the core argument that app design itself, not just user-generated content, can be the source of legal liability.

What makes this case unusual is that it targets product architecture rather than specific posts or accounts. Plaintiffs argued that features like autoplay, infinite scroll, and algorithmically curated feeds were engineered to maximize engagement among young users at the expense of their wellbeing. The jury agreed, and that distinction matters: it shifts the legal question from “What did users post?” to “How did the platform keep children hooked?” That framing opens the door to lawsuits that focus on time-on-platform metrics, notification design, and other engagement levers that have historically been treated as neutral technical choices rather than potential safety hazards.

The Los Angeles proceedings also function as a stress test for the evidence plaintiffs have amassed in multidistrict litigation over social media addiction. According to a broader explainer on what comes next in these trials, hundreds of related cases are pending, and their settlement value will likely be benchmarked against this initial win. Even if future juries award different sums, the basic finding that design decisions can be negligent when applied to children is now on the record.

New Mexico Builds a Separate Case on Predator Access

The Los Angeles verdict did not emerge in isolation. New Mexico Attorney General Raúl Torrez filed a state lawsuit against Meta and Mark Zuckerberg alleging that the company’s products enabled child sexual abuse, solicitation, and human trafficking. The complaint also charged Meta with deceptive practices and addictive design, arguing that public assurances about safety and content moderation were inconsistent with internal knowledge about how predators exploit its platforms.

A New Mexico state-court jury later concluded that Meta harms children by undermining their mental health and safety, finding violations of state law. The trial featured testimony from whistleblowers and evidence drawn from investigative accounts that documented how easily adults could contact minors through Meta’s apps and how slowly the company responded to clear warning signs. Where the California trial focused on addiction and negligence in design, New Mexico’s case centered on whether Meta knowingly allowed its platforms to function as conduits for criminal exploitation of children.

The two rulings together create a pincer effect. Meta now faces liability findings on two separate grounds in two different states: one rooted in product-design negligence, the other in failure to prevent criminal conduct. Each verdict strengthens the other by demonstrating that independent juries, reviewing different evidence, reached the same bottom-line conclusion about the company’s responsibility to young users. That convergence is likely to feature prominently in settlement talks and in any future negotiations with state attorneys general or federal regulators.

Section 230 Protections Face New Pressure

Both Meta and Google have long relied on Section 230 of the Communications Decency Act, which generally shields internet platforms from liability for content posted by their users. According to coverage of the Los Angeles case, both companies argued that this liability shield should have blocked the lawsuits from reaching trial at all. Judges rejected that position, ruling that the cases could proceed because the claims targeted the platforms’ own design decisions rather than third-party speech.

This distinction is critical for the broader tech industry. If appellate courts uphold the reasoning that Section 230 does not protect companies from claims about how their algorithms and interfaces are built, then the statute’s protective scope narrows considerably. Every social app that uses recommendation engines, push notifications, streaks, or engagement-maximizing features could face similar suits. The legal question shifts from content moderation to product engineering, and that is territory Section 230 was never explicitly written to cover.

The relatively small damages award in Los Angeles may encourage plaintiffs’ lawyers to push for structural remedies rather than headline-grabbing payouts. Courts could, for example, order changes to age-verification systems, default privacy settings, or the visibility of certain engagement features for minors. For companies built on time-on-platform business models, mandated design changes could prove more disruptive than any single check written to a handful of plaintiffs.

FTC Antitrust Appeal Adds a Second Front

While child-safety litigation dominates the immediate headlines, Meta also faces continued antitrust scrutiny. A federal judge previously ruled in Meta’s favor in the Federal Trade Commission’s monopolization case challenging the company’s acquisitions of Instagram and WhatsApp, but the agency has not backed down. In January, the FTC announced it would appeal the ruling, signaling that regulators still see Meta’s past deals as central to concerns about competition and market power in social networking.

Reporting on the original district court decision noted that the judge rejected the FTC’s attempt to unwind the Instagram and WhatsApp purchases, a move that would have been one of the most aggressive remedies in modern antitrust enforcement. According to a detailed account in The Washington Post’s coverage of the case, the court found that the agency had not met its burden to show that the acquisitions unlawfully maintained Meta’s monopoly. The appeal now moves that fight to a higher court, where the legal standards for reversing such a ruling are demanding but not insurmountable.

The antitrust battle and the child-safety lawsuits are formally separate, but they intersect in practice. Both lines of attack question whether Meta’s scale and design choices serve users or primarily entrench the company’s dominance. A loss in the antitrust appeal could eventually force divestitures or behavioral remedies that reshape Meta’s product ecosystem, while sustained pressure from juries and state enforcers on safety issues could make it harder for the company to defend the status quo in any regulatory forum.

What Comes Next for Tech Platforms

For Meta and Google, the immediate next step is almost certain: appeals. The companies are expected to challenge the Los Angeles verdict on multiple grounds, including jury instructions, evidentiary rulings, and the interpretation of Section 230. Meta is likewise likely to continue contesting the New Mexico judgment, arguing that it cannot be held responsible for every criminal act that occurs on its platforms, particularly when it maintains reporting tools and content policies aimed at curbing abuse.

Yet even if some of these rulings are narrowed or overturned on appeal, the litigation itself is already reshaping expectations. Plaintiffs’ lawyers now have a tested playbook for presenting internal documents, expert testimony on product design, and personal stories from affected minors in ways that resonate with juries. State attorneys general have a clearer sense of how to frame cases that combine consumer protection, child safety, and, in some instances, trafficking statutes.

Other tech firms are watching closely. Platforms that rely heavily on recommendation algorithms and youth engagement, whether video apps, messaging services, or gaming platforms, face many of the same design questions that surfaced in Los Angeles and New Mexico. If courts continue to treat engagement-boosting features as potential sources of negligence when applied to children, companies may preemptively adjust their products to reduce risk, even before new laws or regulations require it.

The combination of jury verdicts, state enforcement actions, and renewed federal antitrust efforts suggests that large social media companies are entering a period where their core design and growth strategies will be scrutinized not just in the court of public opinion, but in courtrooms across the country. The outcomes of the pending appeals will determine how far that scrutiny ultimately extends, and how costly it becomes.


*This article was researched with the help of AI, with human editors creating the final content.