
California has taken a pioneering step in the regulation of AI technologies, becoming the first U.S. state to enact legislation specifically targeting AI companion chatbots. Governor Gavin Newsom has signed Senate Bill 53 (SB-53) into law, introducing transparency requirements for these increasingly prevalent technologies.[1][2]
Background on AI Companion Chatbots

AI companion chatbots, such as Replika and Character.AI, have gained significant popularity for their ability to provide emotional support and personalized interactions. However, these technologies have also raised concerns about user privacy and potential psychological harm.[1] Prior to California’s SB-53, AI regulation had been debated at both the federal and state levels: the Federal Trade Commission (FTC) has issued guidelines on AI transparency, but no other state had passed legislation targeting companion chatbots specifically.[2]
Key stakeholders involved in the push for regulation include consumer advocacy groups such as the Electronic Frontier Foundation, which testified in support of SB-53 during legislative hearings.[1] These groups have been instrumental in highlighting the need for transparency and accountability in the use of AI technologies.
Key Provisions of SB-53

SB-53 mandates transparency in the operation of AI companion chatbots. Providers are required to disclose how user data is collected, stored, and used in interactions.[2] Additionally, the law requires clear labeling of AI-generated responses in companion chatbots. This provision, which takes effect on January 1, 2026, is designed to prevent users from mistaking AI interactions for human ones.[1]
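The statute defines the obligation, not the implementation, but the clear-labeling provision could in practice be as simple as tagging every machine-generated reply before it reaches the user. The following sketch is purely illustrative; the function and label wording are hypothetical, not drawn from the law:

```python
# Hypothetical sketch of an AI-disclosure label on chatbot output.
# The label text and function names are illustrative assumptions,
# not language prescribed by SB-53.

AI_DISCLOSURE = "[AI-generated response] "

def label_response(text: str, is_ai_generated: bool = True) -> str:
    """Prepend a clear AI disclosure so users cannot mistake the
    reply for a human one; leave already-labeled text unchanged."""
    if is_ai_generated and not text.startswith(AI_DISCLOSURE):
        return AI_DISCLOSURE + text
    return text

print(label_response("I'm here if you want to talk."))
```

A real provider would pair such labeling with audit logging to demonstrate compliance if questioned by regulators.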
The law also includes enforcement mechanisms, with the California Attorney General’s office authorized to impose fines of up to $7,500 per violation.[2] This provision signals the seriousness with which the state views compliance with the new regulations.
Legislative Process and Support

SB-53 was introduced by State Senator Scott Wiener in the California State Legislature during the 2025 session. The bill was motivated by reports of emotional harm resulting from interactions with unregulated AI companions.[1] The legislation received bipartisan support, with endorsements from tech industry leaders like OpenAI and mental health organizations such as the American Psychological Association.[2]
During the signing event on October 10, 2025, Governor Newsom emphasized the bill’s role in protecting vulnerable users, including minors. This focus on user protection underscores the state’s commitment to ensuring that technological advancements do not come at the expense of individual rights and well-being.[1]
Implications for Industry and Users

The enactment of SB-53 presents potential compliance challenges for AI companies operating in California. For instance, Character.AI may need to retrofit existing chatbots to meet the law’s disclosure standards.[1] However, the law also brings benefits for users, including enhanced privacy protections that could reduce risks of data misuse in emotional AI interactions.[2]
California’s move could have broader national implications, as other states, including New York and Texas, consider similar bills. This could set a precedent for nationwide regulation of AI technologies, reflecting a growing recognition of the need for oversight in this rapidly evolving field.[1]