
New York is moving from warnings about cigarettes to warnings about social feeds. Governor Kathy Hochul has signed a first-in-the-nation law that will force major platforms to display mental health labels when young people use features the state deems “addictive,” putting social media in the same regulatory conversation as tobacco and alcohol. The move caps a multi‑year campaign in Albany to rein in algorithmic design that keeps children scrolling and to confront the mental health fallout of life lived online.
What Hochul actually signed, and why it matters
The new statute requires social media companies to attach clear health warnings to products that use engagement‑driven tools such as infinite scroll, autoplay and other mechanics that “encourage excessive use” among minors. Governor Hochul framed the measure as a public health intervention aimed at the mental well‑being of children, positioning the labels alongside the kind of blunt disclosures long required on tobacco and alcohol packaging. Legal analysts, including Brendan Hickey of Vermont Law & Graduate School, have noted that New York is explicitly borrowing that regulatory playbook, treating certain feeds as a consumer product with known risks rather than a neutral communication tool.
In practical terms, the law will apply to platforms that rely on algorithmic recommendation engines to keep users engaged, including services like TikTok, Instagram, YouTube, X and Snapchat, when they are used by children and teenagers. The state’s own description of the measure emphasizes that it is aimed at “harmful features of social media platforms that prolong use,” and that the goal is to create a “safer digital environment for kids” rather than to police individual posts or political content, according to the official announcement from Hochul’s office. By targeting design choices instead of speech, lawmakers are betting they can withstand constitutional challenges while still forcing the industry to confront how its products affect young brains.
How the warning labels will work for young users
State officials have sketched out a system in which labels appear at key moments in a young person’s interaction with a platform, rather than as a one‑time disclosure buried in terms of service. The labels are expected to surface when a child first signs up or logs into an app and then reappear periodically based on continued use, reminding them that prolonged engagement with certain features carries documented mental health risks. Reporting on the law notes that these alerts are meant to interrupt the frictionless flow of scrolling, much as a pop‑up on a gambling site forces a user to acknowledge how long they have been playing, and that they will be tailored to minors rather than adults, according to details described by regional coverage of the bill.
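To make that cadence concrete, here is a minimal sketch, in Python, of how a platform might decide when to resurface a label under the behavior described above. The class, the one‑week re‑display interval and the one‑hour session threshold are illustrative assumptions, not requirements drawn from the statute.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative cadence values; the statute does not prescribe specific numbers.
REDISPLAY_INTERVAL = timedelta(days=7)        # hypothetical gap between repeat labels
SESSION_REMINDER_AFTER = timedelta(hours=1)   # hypothetical "prolonged use" threshold

@dataclass
class LabelState:
    is_minor: bool                        # from the platform's age signal
    first_label_seen: bool = False        # has the sign-up / first-login label been shown?
    last_label_shown: datetime | None = None

def should_show_warning(state: LabelState, now: datetime,
                        session_length: timedelta) -> bool:
    """Decide whether to surface the mental health warning at this moment."""
    if not state.is_minor:
        return False                      # the labeling requirement targets young users
    if not state.first_label_seen or state.last_label_shown is None:
        return True                       # label at sign-up or first login
    if now - state.last_label_shown >= REDISPLAY_INTERVAL:
        return True                       # periodic reappearance based on continued use
    if session_length >= SESSION_REMINDER_AFTER:
        return True                       # interrupt a long, uninterrupted session
    return False
```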
The labels will not be generic boilerplate. New York intends them to spell out that extended exposure to algorithmically curated feeds has been linked to anxiety, depression and self‑harm among youth, and to do so in plain language that a 13‑year‑old can understand. Officials have compared the approach to the stark text and imagery that now appear on cigarette packs in many countries, a comparison echoed in international coverage that opened with the phrase “Like a cigarette pack” when explaining New York’s strategy. The state is betting that repeated, unavoidable reminders will nudge both kids and caregivers to treat endless feeds as a health decision, not just a way to kill time.
The broader crackdown on “addictive” feeds
The warning labels do not exist in a vacuum. Earlier in the regulatory cycle, New York enacted a separate statute, the Stop Addictive Feeds Exploitation (SAFE) for Kids Act, which restricts how social media platforms can deliver “addictive feeds” to users under 18 without parental consent. The SAFE for Kids Act prohibits companies from providing children with algorithmically curated content that is tailored to maximize engagement unless a parent explicitly opts in, and it also limits overnight notifications and other nudges that can disrupt sleep, according to a detailed legal summary of the act’s restrictions on addictive feeds. Together, the feed law and the new labeling requirement amount to a one‑two punch: one curbs the supply of attention‑grabbing design for minors, the other warns about the health costs when those designs are present.
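A rough way to picture how the feed law’s two main restrictions could operate in code, assuming a simple parental‑consent flag and an overnight quiet‑hours window (both of which are illustrative stand‑ins rather than statutory language):

```python
from datetime import time

# Hypothetical quiet-hours window for minors' notifications; the SAFE for Kids
# Act limits overnight notifications, but the exact window here is illustrative.
QUIET_START = time(0, 0)    # midnight
QUIET_END = time(6, 0)      # 6 a.m.

def choose_feed(is_minor: bool, parental_consent: bool) -> str:
    """Sketch of the feed restriction: no engagement-optimized ranking for
    minors unless a parent has opted in."""
    if is_minor and not parental_consent:
        return "chronological"
    return "algorithmic"

def may_send_notification(is_minor: bool, parental_consent: bool,
                          local_time: time) -> bool:
    """Sketch of the overnight-notification limit for minors."""
    if not is_minor or parental_consent:
        return True
    in_quiet_hours = QUIET_START <= local_time < QUIET_END
    return not in_quiet_hours
```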
Governor Hochul has framed this suite of measures as part of a broader campaign to treat social media algorithms the way previous generations treated lead paint or secondhand smoke. In June, she signed into law two bills that specifically target addictive social media apps and regulate the algorithms that power them, a milestone that public health advocates have highlighted as a turning point in how states think about digital risk. A hospital‑backed explainer on youth screen time notes that “In June, Governor Hochul signed into law two bills that target addictive social media apps and regulate social media algorithms,” underscoring that the warning labels are only the latest step in a multi‑bill strategy. For parents and platforms alike, the message is clear: New York is not just asking for better behavior, it is rewriting the rules of how feeds can be built and marketed to kids.
Child data, privacy and the mental health rationale
Behind the focus on labels and feeds is a deeper concern about how children’s data is harvested and weaponized to keep them online. The New York Child Data Protection Act, championed in Albany as a landmark state law, tightens restrictions on how online platforms can collect, share and monetize information about users under 18. A legislative release from the bill’s sponsor, Sen. Gounardes, explains that the measure is designed to address the link between data‑driven targeting and rising rates of anxiety, depression and self‑harm among youth, describing the act as a response to mounting evidence that algorithmic profiling can exacerbate mental health problems.
For years, the federal Children’s Online Privacy Protection Act (COPPA) has been the main guardrail for kids’ data, but it only covers children under 13 and focuses on parental consent rather than on the design of feeds themselves. Legal commentary on New York’s new regime points out that COPPA leaves a wide gap for teenagers, whose data can still be mined aggressively even as their mental health indicators worsen. By extending protections up to age 18 and pairing them with explicit warnings about the psychological risks of prolonged use, New York is effectively arguing that privacy and mental health are two sides of the same coin: the more precisely a platform can profile a teen, the more power it has to keep that teen hooked.
How the law defines “addictive” social media
One of the thorniest questions in any regulation of digital platforms is where to draw the line between a popular product and an addictive one. New York’s approach focuses less on the content of posts and more on the mechanics that keep users engaged, singling out features like autoplay video, infinite scroll, streaks and push notifications that are designed to pull users back in. The state’s own description of the warning‑label law notes that it applies to platforms with features that “encourage excessive use,” a phrase echoed in international coverage that described the statute as a mandate for mental health warnings on services that build “addictive” features into their design.
Officials have been explicit that they are not trying to police every corner of the internet. Static websites, simple messaging apps and services that do not rely on algorithmic feeds are unlikely to fall under the law’s definition of “addictive” design. Instead, the focus is on the engagement engines that power apps like TikTok’s For You page, Instagram Reels or YouTube Shorts, where content is endlessly refreshed and tailored to a user’s inferred interests. A broadcast segment explaining the law to a national audience noted that the state of New York will require platforms like X and TikTok to add warning labels about the mental health risks associated with prolonged use, underscoring that the law is aimed squarely at the most popular, algorithm‑heavy services rather than at niche forums or email.
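For readers trying to picture where that line falls, the sketch below encodes the distinction as a simple feature check. The feature names and the overlap test are assumptions made for illustration; they are not the statute’s actual legal test.

```python
# Engagement mechanics singled out in coverage of the law; the set and the
# simple overlap test below are illustrative, not the statute's actual test.
ENGAGEMENT_FEATURES = {"infinite_scroll", "autoplay", "streaks",
                       "push_notifications", "algorithmic_feed"}

def likely_in_scope(platform_features: set[str]) -> bool:
    """Loose illustration: a service built around engagement mechanics is
    likely covered; a static site or plain messaging app likely is not."""
    return bool(platform_features & ENGAGEMENT_FEATURES)

# An algorithm-heavy short-video app vs. a simple messaging app
print(likely_in_scope({"algorithmic_feed", "autoplay", "infinite_scroll"}))  # True
print(likely_in_scope({"direct_messages", "group_chats"}))                   # False
```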
Parents, schools and the push for “distraction‑free” spaces
New York’s crackdown on addictive design is not limited to what happens on a teenager’s phone at home. The state has also launched a “distraction‑free schools” initiative that encourages districts to restrict smartphone use during the school day, arguing that constant notifications and social media checks are undermining learning and fueling anxiety. A bulletin to families at one Staten Island high school notes that “More information about this initiative can be found at the official New York State website,” where Governor Hochul has promoted the state’s plan to become the largest in the nation with comprehensive school‑day phone limits. The warning labels law fits neatly into that agenda, giving educators another tool to point to when they argue that social media is not just a harmless distraction but a documented mental health risk.
Parents, too, are being cast as central enforcers of the new regime. The feed restrictions in the SAFE for Kids Act require parental consent before a child can receive algorithmically curated content, and the warning labels are designed to be visible to caregivers who may be glancing over a shoulder or setting up an account. Advocates who pushed for the law have argued that many parents underestimate how much time their children spend on apps like Instagram or Snapchat, and that a blunt on‑screen message about anxiety, depression or self‑harm can be a wake‑up call. A national TV segment on the law quoted Governor Hochul saying that “With the amount of information that can be shared online, it is essential that we prioritize mental health,” a line that neatly captures the state’s attempt to turn every login into a teachable moment for families.
The political coalition behind the law
Politically, the warning labels law reflects an unusual alliance of lawmakers, child‑safety advocates and digital rights groups who have been pressing Albany to act on youth mental health. Senator Andrew Gounardes has been a central figure in that coalition, sponsoring both the New York Child Data Protection Act and earlier measures targeting addictive feeds, and working closely with advocacy organizations that have documented a “flood” of child exploitation cases on platforms like Roblox. In one release, his office described new child‑safety legislation as building on New York’s SAFE for Kids Act and the state’s other social media regulations, and noted that additional protections were “pending on Governor Hochul’s desk right now.” The warning labels statute is one of those pending measures that has now become law, giving Gounardes and his allies a concrete win.
Outside government, groups like Common Sense Media have hailed the law as a breakthrough. In a statement titled “In Major Win for NY Kids and Families, Governor Hochul Signs Social Media Warning Labels Legislation,” the organization praised Hochul and legislative sponsors like Senator Gounardes and Assemblymember Nily Rozic for pushing through a measure that tech companies had quietly lobbied against. The group framed the law as a model for other states, arguing that clear, recurring warnings about mental health risks are a minimal step given the scale of youth distress linked to social media. That framing suggests the political fight is far from over: if New York’s experiment survives legal challenges, it is likely to be copied elsewhere, turning a state‑level skirmish into a national standard.
Industry pushback and looming legal fights
Tech companies have not embraced the new law, even if many have avoided direct public confrontation with Governor Hochul. Industry groups are expected to argue that mandatory warning labels amount to compelled speech that violates the First Amendment, and that New York is overstepping by dictating how national platforms communicate with their users. Legal scholars have already begun parsing whether the labels are more like factual disclosures, which courts often uphold, or like government‑scripted advocacy, which faces a higher constitutional bar. An early analysis from a legal commentator at a technology policy outlet covering the bill’s signing noted that the law’s focus on “addictive” features rather than on specific viewpoints may help it survive, but that courts have been skeptical of states trying to micromanage online platforms.
New York officials, for their part, appear ready for a court battle. They have repeatedly compared the labels to long‑standing requirements for pharmaceuticals, alcohol and cigarettes, arguing that the state has a compelling interest in warning families about products linked to serious health harms. A legal news report on the law, citing Brendan Hickey of Vermont Law & Graduate School, pointed out that the statute’s architects are leaning on the same logic that underpins warnings on tobacco and alcohol, and that they are prepared to defend the labels as neutral, evidence‑based disclosures rather than an attempt to shame or stigmatize users. Whether judges accept that analogy will determine how far other states can go in forcing tech companies to acknowledge the downsides of their own products.
What it means for kids, families and the next wave of regulation
For children and teenagers in New York, the most immediate change will be visual and psychological rather than technical. When they open TikTok, Instagram or similar apps, they will begin to see blunt messages about the mental health risks of prolonged use, especially when they engage with features that the state has labeled as “addictive.” A national news segment explaining the law to viewers emphasized that, according to the state’s own announcement, the labels are part of a broader effort to prioritize mental health “with the amount of information that can be shared online,” and that they will be paired with other safeguards for children younger than 16. For some teens, the labels may become background noise; for others, they may be a rare moment when an app acknowledges that it is not purely fun.
For regulators and lawmakers beyond New York, the law is likely to serve as a test case for how far states can go in reshaping the digital environment for minors. If the labels survive legal scrutiny and prove politically popular, other states may adopt similar measures or go further, perhaps requiring time‑use dashboards, default time limits or even more aggressive restrictions on algorithmic feeds. A concise summary of the law’s global significance noted that “The new state law mandates warnings for young users on platforms with features that encourage excessive use,” framing it as a potential template for other jurisdictions wrestling with the same questions. In that sense, Hochul’s signature is not just a New York story; it is an opening salvo in a broader debate over whether social media should carry the same kind of health warnings that now seem unremarkable on a pack of cigarettes.