Morning Overview

Florida AG James Uthmeier opens investigation into OpenAI and ChatGPT

Florida Attorney General James Uthmeier has opened a formal investigation into OpenAI and its flagship product, ChatGPT, making the nation’s third-largest state one of the first to move beyond public warnings and into direct enforcement action against an artificial intelligence company.

The probe, reported by WUSF News, centers on potential harms to users, with a particular focus on children and other vulnerable groups who interact with AI chatbots. However, the publicly available WUSF item does not link to a detailed news article, and independent corroboration of the report’s specifics is limited. While Uthmeier’s office has not released subpoenas, formal complaints, or detailed legal filings, the investigation marks a significant escalation from the coalition letters his office previously joined.

Uthmeier has not publicly detailed the legal theories underpinning the probe. It remains unclear whether the investigation relies on the Florida Deceptive and Unfair Trade Practices Act, state data privacy provisions, or another regulatory framework. No direct statements from Uthmeier, his office, OpenAI, child-safety advocates, or legal experts are available in the public record reviewed for this article.

A bipartisan push that set the stage

Florida’s solo investigation did not emerge from nowhere. It follows two major multistate efforts that put OpenAI and other AI companies on notice.

New York Attorney General Letitia James led a bipartisan coalition that sent a formal letter to OpenAI and other major tech firms, urging them to address dangerous chatbot features that could expose users to harmful content. The letter identified specific risks: chatbots delivering self-harm instructions, normalizing abusive relationships, and providing inaccurate guidance on health, legal, and financial matters.

Separately, a coalition of 42 attorneys general, led by the Pennsylvania Attorney General’s office, sent its own letter to AI software companies demanding safeguards to protect vulnerable residents from harmful bot interactions. The linked Pennsylvania Attorney General page attributes leadership of that letter to “AG Sunday,” and the full name of the attorney general who led the coalition could not be independently verified from the sources available. Florida was listed as a signatory on that letter, according to the source, establishing that Uthmeier’s office had already flagged AI safety as a priority before launching its own inquiry.

Both coalition letters named OpenAI as a recipient and called on AI companies to implement stronger content moderation, age verification, and protections against emotionally manipulative outputs. The progression from co-signing a group letter to opening a standalone investigation suggests a deliberate escalation on Florida’s part, from collective pressure to individual enforcement.

What remains unknown

Key details about the Florida probe are still missing from the public record. No court filings, civil investigative demands, or consumer complaints tied to the investigation have surfaced. It is not clear whether the probe was triggered by specific incidents involving Florida residents, by broader policy concerns, or by patterns identified during the multistate coalition work.

OpenAI has not issued a public response to the Florida investigation. The company has previously pointed to its safety policies, including restrictions on generating certain types of harmful content and ongoing work on age-gating features, but whether it has engaged with Uthmeier’s office, produced documents, or contested the probe’s legitimacy is unknown.

It is also uncertain whether the investigation is civil, criminal, or still in a preliminary fact-gathering phase. Attorneys general routinely use investigative authority to collect information before deciding whether to pursue formal enforcement, and nothing in the public record clarifies where Florida’s effort falls on that spectrum.

No direct quotes from any party involved are available. Uthmeier’s office has not released public statements elaborating on the investigation’s scope. OpenAI has not commented. No child-safety advocates or legal experts have been quoted in the sourcing reviewed for this report.

Why it matters for AI regulation

The Florida probe arrives during a period of mounting state-level activity on AI oversight and a conspicuous lack of comprehensive federal legislation. Congress has held multiple hearings on AI safety but has not passed a binding regulatory framework. The Federal Trade Commission has signaled interest in AI enforcement under existing consumer protection authority, but no landmark federal case against a chatbot maker has materialized.

That vacuum has left state attorneys general as the most active regulators in the space. The 42-state coalition letter demonstrated broad bipartisan appetite for tougher oversight. Florida’s decision to move from letters to an active investigation could accelerate that trend.

If Uthmeier’s office follows through with formal legal action, such as a consumer protection lawsuit or a civil investigative demand, it could force OpenAI to disclose internal documents about content moderation practices, safety testing protocols, and how the company handles interactions with minors. That kind of discovery could produce findings that other states use to justify parallel investigations, raising compliance costs across the industry and increasing pressure for standardized safety practices.

For now, the clearest signal is that state attorneys general are not waiting for Washington to set the rules. Florida’s move, rooted in the same concerns voiced by officials in New York, Pennsylvania, and dozens of other states, suggests the next chapter of AI accountability may be written in state capitals, one investigation at a time.

*This article was researched with the help of AI, with human editors creating the final content.*