Morning Overview

Iowa signs a law requiring AI chatbots to tell minors they’re not human — and to remind them every 3 hours

Starting later this year, every AI chatbot that talks to a kid in Iowa will have to say something most tech companies have been content to bury in fine print: I am not a human being.

Gov. Kim Reynolds signed Senate File 2417 on May 3, 2026, making Iowa the first state to require AI-powered conversational systems to explicitly identify themselves as non-human when interacting with minors and to repeat that disclosure at least once every three hours during an ongoing session. The bill passed the Iowa Senate 48-0 and the House 95-0, a level of unanimity almost unheard of for legislation touching artificial intelligence.

The law also bans AI systems from generating sexually explicit content for minors, borrowing its definitions directly from 18 U.S.C. Section 2256, the federal statute that underpins child sexual exploitation law. Enforcement authority sits with the Iowa Attorney General, who must draft administrative rules before the requirements become operational.

Why Iowa, and why now

The bill did not emerge in a vacuum. Over the past two years, a string of disturbing incidents involving minors and AI companions has pushed child safety to the front of state legislative agendas. Lawsuits filed against Character.AI in late 2024 alleged that the platform’s chatbots engaged in sexually charged and emotionally manipulative conversations with teenagers, including one case linked to a 14-year-old’s suicide in Florida. Those cases drew national headlines and galvanized parents’ groups across the country.

At the federal level, former Surgeon General Vivek Murthy issued repeated advisories in 2023 and 2024 warning that social media and AI-driven platforms pose serious risks to adolescent mental health. Several states responded with their own proposals. California, New York, and Utah have all advanced bills targeting AI interactions with minors, though none has enacted a recurring-disclosure mandate as specific as Iowa’s three-hour rule.

Iowa lawmakers appear to have deliberately kept the bill narrow. By focusing on disclosure and explicit-content prohibitions rather than attempting to regulate AI speech broadly, sponsors avoided the industry pushback and First Amendment objections that have stalled more ambitious proposals elsewhere. The unanimous vote totals suggest the strategy worked: neither party saw political risk in backing a measure framed squarely as child protection.

What the law actually requires

SF 2417 applies to AI chatbots and conversational systems that interact with users identified as minors, whether through text, audio, or other interactive formats where a reasonable person might assume they are communicating with a human. The law imposes two core obligations:

  • Upfront disclosure: Before or at the start of any interaction with a minor, the system must clearly state that it is artificial intelligence, not a person.
  • Recurring reminder: If the session continues beyond three hours, the system must repeat the disclosure at least once during each subsequent three-hour window.
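
The two obligations map naturally onto a per-session disclosure timer. As a minimal sketch only (the statute prescribes no implementation; the class, message text, and interval handling below are hypothetical), a platform might track the clock like this:

```python
import time

DISCLOSURE = "Reminder: you are talking to an AI system, not a human being."
INTERVAL = 3 * 60 * 60  # three hours, in seconds

class MinorSession:
    """Tracks when the non-human disclosure was last shown to a minor."""

    def __init__(self, now=time.time):
        self._now = now  # injectable clock, useful for testing
        self._last_disclosed = None  # None = no disclosure yet this session

    def messages_to_send(self, reply):
        """Prepend the disclosure when required, then the chatbot reply."""
        out = []
        t = self._now()
        if self._last_disclosed is None or t - self._last_disclosed >= INTERVAL:
            out.append(DISCLOSURE)  # upfront or recurring disclosure
            self._last_disclosed = t
        out.append(reply)
        return out
```

In this reading, the first message always carries the disclosure, and it recurs once three hours have elapsed since the last one; whether that matches the Attorney General's eventual rules is an open question.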

The prohibition on sexually explicit content for minors is separate from the disclosure mandate but packaged in the same bill. By anchoring its definitions to 18 U.S.C. Section 2256, the legislature tied enforcement to legal terms that federal courts have interpreted for decades, sidestepping the need to create and defend a new standard.

During the legislative process, SF 2417 was substituted for its House companion, HF 2507, a procedural step that merged both chambers' work into a single vehicle. Reynolds signed it alongside a batch of other legislation, according to the governor's office.

The hard part: making it work

Passing the law was the easy part. Implementing it will be far more complicated, and the statute leaves many of the toughest questions to the Attorney General’s rulemaking process under Iowa Code Chapter 17A.

Age verification is the most immediate challenge. The law does not prescribe a specific method for determining whether a user is a minor. If platforms rely on self-reported birthdates, the requirement becomes trivially easy for teenagers to bypass, just as age gates on social media have proven largely ineffective. Stricter identity checks, such as government ID uploads or biometric scans, raise their own problems: privacy advocates have long warned against collecting sensitive data from children, and the security of any stored credentials becomes a liability.

Scope and exemptions are similarly undefined. Educational platforms used in Iowa classrooms, customer-service bots on retail websites, and health-information chatbots all interact with minors. Whether a homework-help assistant embedded in a school’s learning management system faces the same obligations as a general-purpose companion chatbot is a question the statute does not answer. The Attorney General’s rules will need to draw those lines, and the choices will determine whether the law is straightforward to comply with or technically burdensome for a wide range of services.

Session mechanics also need clarification. A chatbot that answers a single factual question does not maintain a persistent session the way a social companion bot does. Whether the three-hour reminder clock resets with each new message or runs continuously from first contact will matter for how platforms design their interfaces. These distinctions will likely emerge only in the administrative rules or through enforcement actions.
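
The ambiguity can be made concrete. Under a "continuous" reading, reminders are due in each three-hour window measured from first contact; under a "reset" reading, each disclosure restarts the clock. The two hypothetical helpers below (neither is in the statute) show that the readings diverge for the same message timeline:

```python
THREE_HOURS = 3 * 60 * 60  # seconds

def reminders_continuous(message_times):
    """Clock runs from first contact: one reminder is due in each
    three-hour window that contains activity."""
    if not message_times:
        return []
    start = message_times[0]
    seen_windows = set()
    due = []
    for t in message_times:
        window = (t - start) // THREE_HOURS
        if window not in seen_windows:
            seen_windows.add(window)
            due.append(t)
    return due

def reminders_reset(message_times, gap=THREE_HOURS):
    """Clock resets at each disclosure: a reminder is due only once
    three hours have elapsed since the previous disclosure."""
    due = []
    last = None
    for t in message_times:
        if last is None or t - last >= gap:
            due.append(t)
            last = t
    return due
```

For messages at hour 0, hour 5:59, and hour 6:01, the continuous reading fires three reminders (windows 0, 1, and 2 each see activity) while the reset reading fires only two, since the third message comes moments after a disclosure. Platforms will need the rules to pick one.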

Jurisdiction presents a familiar internet-regulation puzzle. Many AI services are operated by companies based outside Iowa and accessed across state lines. How the state will assert authority over out-of-state providers, and whether it will seek to block or penalize services that decline to comply, is not addressed in the text. Companies could eventually challenge certain applications of the statute on interstate commerce or First Amendment grounds.

What the industry is watching

No major AI company has issued a public response to SF 2417 as of late May 2026. But the law’s practical impact will depend heavily on how the largest platforms react. OpenAI, Google, Meta, and Anthropic already maintain some safety systems for minor users, including content filters and, in some cases, age-gating. For those companies, adding a disclosure banner and a session timer may be incremental work.

Smaller developers and open-source projects face a steeper climb. Many lack dedicated compliance teams, and engineering age-verification, session-tracking, and content-filtering layers from scratch is not trivial. No public cost estimates have emerged, either from AI developers or from the Iowa agencies that oversee education and child welfare.

There is also a displacement risk that the legislative record does not address. When compliant platforms add friction through disclosure pop-ups and session reminders, some young users may migrate to unregulated or offshore chatbots that ignore Iowa law entirely. This pattern has played out repeatedly in internet regulation, from age-restricted social media to online gambling. A disclosure mandate works only if the platforms minors actually use are subject to it. If enforcement targets large, U.S.-based companies while leaving smaller or foreign operators untouched, the law could inadvertently push vulnerable users toward less transparent alternatives with weaker content safeguards.

What comes next

The Attorney General’s office must now publish proposed rules, accept public comment, and finalize regulations before the law’s requirements become enforceable. That process typically takes months and will likely draw input from industry groups, civil-liberties organizations, educators, and parents. Companies building AI products that interact with Iowa minors should monitor the rulemaking docket closely; the comment period will be their primary opportunity to shape the details.

Other states are watching, too. If Iowa’s approach survives legal challenges and proves workable in practice, it could become a template for legislatures that want to act on AI and child safety without wading into the broader, more contentious debate over regulating AI-generated speech. If it stumbles on enforcement or drives users to less regulated corners of the internet, it will serve as a cautionary example.

For now, the facts on the ground are clear: Iowa has enacted the nation’s most specific requirement that AI chatbots identify themselves to children and keep identifying themselves for as long as the conversation lasts. The unanimous votes suggest the political will is there. Whether the regulatory machinery and the technology can keep up is the question that matters most.


*This article was researched with the help of AI, with human editors creating the final content.