Key Takeaways
- California's SB 243 is the first US law to directly regulate AI companion chatbots.
- The law mandates transparency prompts, safeguards against manipulation and annual reporting to state agencies.
- Noncompliance carries both civil penalties and private lawsuit risks.
California has become the first state in the US to enact legislation specifically regulating AI-powered companion chatbots, a fast-growing category of digital agents designed for emotional support, friendship and even romantic company.
As generative AI companions become increasingly popular — and increasingly lifelike — California’s move is expected to shape policy debates far beyond its borders, setting the tone for how businesses, developers and regulators approach the ethical and social challenges of this emerging technology.
Table of Contents
- What Is California's AI Chatbot Law (SB 243)?
- Why Lawmakers Moved Fast on AI Companions
- Inside California’s Push for AI Accountability
- How to Build Compliant AI Companions Under SB 243
- The Debate Over California’s Chatbot Law Heats Up
- The California Effect: Will SB 243 Go Global?
- What Next for AI Oversight?
- Frequently Asked Questions
What Is California's AI Chatbot Law (SB 243)?
California’s new law, Senate Bill 243 (SB 243), sets a national precedent by directly regulating companion chatbots: AI systems designed for emotional support and human-like interaction. The legislation applies to any company serving California users, regardless of where the business is headquartered.
Bot operators must:
- Notify users, especially minors, when they’re interacting with AI
- Implement safety protocols to prevent harm
- Comply with annual reporting obligations
The law’s language is careful to distinguish companion chatbots from customer service bots and other everyday AI tools, aiming to set clear boundaries while safeguarding vulnerable users.
Related Article: AI Regulation in the US: A State-by-State Guide
Why Lawmakers Moved Fast on AI Companions
Several powerful currents came together to push SB 243 through the California Legislature, marking the first-in-the-nation regulation of AI-powered “companion” chatbots.
Lawmakers were spurred to act by mounting evidence that AI chatbots designed to mimic emotional or social relationships had shifted into high-risk territory. The most frequently cited case: a 14-year-old who engaged with a chatbot in what parents and advocates argue was a misguided search for companionship, an interaction that ultimately ended in his death.
The emotional stakes were unmistakable. Chatbots that once occupied novelty status were now implicated in complex mental health dynamics, especially among minors. Meanwhile, the technology itself matured quickly: AI systems that could adapt, empathize, even flirt or comfort were rapidly gaining popularity with consumers.
Inside California’s Push for AI Accountability
According to the text of Senate Bill 243, a “companion chatbot” is an AI system capable of humanlike responses that seeks to meet a user’s social needs across multiple interactions. The gap between traditional customer service bots and these relationship-oriented bots prompted concerns that regulation was lagging behind functionality.
The legislative path was led by Steve Padilla (D-San Diego), who sponsored the bill. “These companies have the ability to lead the world in innovation, but it is our responsibility to ensure it doesn’t come at the expense of our children’s health,” Padilla said.
Rob Bonta, the California Attorney General, also voiced strong support, stating, “It is our job as policymakers and leaders to intervene and ensure that companies are not harming children.”
Dr. Jodi Halpern, professor at UC Berkeley, said she was relieved to hear SB 243 passed. "We have a public health obligation to protect vulnerable populations and monitor these products for harmful outcomes."
Together, these voices framed the legislation not just as a tech policy milestone, but as a child protection imperative, a framing that helped build bipartisan and cross-stakeholder momentum.
By stepping in with regulation, California also positioned itself ahead of broader federal or global frameworks. This “first mover” dynamic aligned with the state’s history (e.g., data privacy laws) and signaled to tech firms that companion AI would no longer be treated purely as innovation theater.
How to Build Compliant AI Companions Under SB 243
Compliance with California's SB 243 is about much more than a policy update: the law demands a fundamental rethinking of how these products are built, overseen and brought to market.
New Rules for Product Development
California's AI chatbot law imposes several new product requirements (a sketch of the disclosure logic follows this list):
- AI chatbots must notify users that they’re engaging with AI — clear, recurring reminders are required, especially for minors.
- Explicit mandates to avoid so-called “addiction-style” loops that encourage users to stay connected longer than they might otherwise intend.
- Safeguards against unpredictable reward systems and emotionally manipulative features.
- Robust age detection and content filtering for minors.
- No sexually explicit content or encouragement of explicit conduct when targeting minors.
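For product teams, here is what the disclosure requirement might look like in practice: a minimal Python sketch, assuming a session-opening notice for all users and a recurring reminder for minors. The three-hour interval reflects the bill's widely reported break-reminder cadence for minors, but the class, function names and message text are illustrative assumptions, not language from the statute.

```python
from datetime import datetime, timedelta

# Illustrative only: the exact cadence and wording of disclosures should be
# taken from the statute and reviewed by counsel. All names here are
# hypothetical.
REMINDER_INTERVAL_MINORS = timedelta(hours=3)

DISCLOSURE = (
    "Reminder: you are chatting with an AI companion, not a person. "
    "Consider taking a break."
)

class DisclosureScheduler:
    """Tracks when a session last showed the AI disclosure and decides
    whether the next chatbot reply must carry a fresh reminder."""

    def __init__(self, user_is_minor: bool):
        self.user_is_minor = user_is_minor
        self.last_disclosed = None  # no disclosure shown yet this session

    def reminder_due(self, now: datetime) -> bool:
        # Every session opens with a disclosure, regardless of age.
        if self.last_disclosed is None:
            return True
        # Minors additionally get recurring reminders during long sessions.
        if self.user_is_minor:
            return now - self.last_disclosed >= REMINDER_INTERVAL_MINORS
        return False

    def decorate_reply(self, reply: str, now: datetime) -> str:
        # Prepend the disclosure whenever one is due, then reset the clock.
        if self.reminder_due(now):
            self.last_disclosed = now
            return f"{DISCLOSURE}\n\n{reply}"
        return reply
```

In a real product, the scheduler would hang off the session object and the reminder interval would be a reviewed policy value rather than a hard-coded constant.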
The law’s definition of companion chatbot is deliberately broad, extending beyond classic friendship bots to encompass educational, wellness and even study aids. Any AI agent that maintains an ongoing relationship or simulates emotional support may fall within its reach.
Required Disclosures and Safeguards
Transparency and user protection are at the heart of SB 243. Developers and platform operators must now:
- Publicly post their safety protocols, including how the system handles user expressions of suicidal thoughts or self-harm (one way to structure such a handler is sketched after this list).
- Report annually to the California Office of Suicide Prevention, detailing incidents of detected self-harm or chatbot-initiated conversations about self-harm.
- Audit user flows, update user interface warnings, implement age gates and authentication and ensure that content moderation is up to the new regulatory standard.
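To make the safety-protocol and reporting obligations more concrete, the sketch below shows one way a message pipeline might route expressions of self-harm to crisis resources and log each incident for annual reporting. This is a hedged illustration: the keyword stub, log schema and function names are assumptions for demonstration, and a production system would rely on a clinically reviewed classifier and legal guidance on what may be stored.

```python
import logging
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch, not statutory language: a real deployment would use a
# vetted self-harm classifier and follow counsel on data retention.
CRISIS_RESOURCES = (
    "If you are thinking about harming yourself, you can reach the "
    "988 Suicide & Crisis Lifeline by calling or texting 988 (US)."
)

def mentions_self_harm(message: str) -> bool:
    """Stub classifier; replace with a clinically reviewed model."""
    keywords = ("kill myself", "suicide", "hurt myself", "end my life")
    return any(k in message.lower() for k in keywords)

@dataclass
class SafetyLog:
    """Accumulates incidents for aggregate annual reporting."""
    incidents: list = field(default_factory=list)

    def record(self, session_id: str) -> None:
        # Log only what reporting requires; avoid retaining message content
        # beyond what counsel deems necessary.
        self.incidents.append({
            "session_id": session_id,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "type": "self_harm_expression",
        })

def handle_message(message: str, session_id: str, log: SafetyLog):
    """Returns a crisis response if the safety protocol triggers,
    otherwise None so the normal chatbot pipeline continues."""
    if mentions_self_harm(message):
        log.record(session_id)
        logging.warning("Safety protocol triggered for session %s", session_id)
        return CRISIS_RESOURCES
    return None
```

The design choice worth noting is the early return: safety handling runs before any companion-style generation, so a triggered protocol always preempts the model's normal reply.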
The phased timeline — deployment obligations by January 2026, with annual reporting starting in 2027 — means that companies have a narrow window to adapt and ensure compliance.
Legal and Regulatory Risks
Perhaps most significantly, SB 243 introduces a private right of action. Any user harmed by noncompliance can sue for actual damages or $1,000 per violation, whichever is greater, along with attorney's fees and injunctive relief. The law's dual system of regulatory enforcement and civil liability significantly increases business risk for operators, making proactive compliance critical.
In practice, the ripple effect of SB 243 will likely be felt far beyond California’s borders, as national and even global operators move to meet the state’s new requirements or risk being shut out of the nation’s largest tech market.
Related Article: ‘AI Psychosis’ Is Real, Experts Say — and It’s Getting Worse
The Debate Over California’s Chatbot Law Heats Up
California’s new AI companion chatbot law has drawn a wave of strong responses from political leaders, advocacy organizations and industry groups.
Lawmakers Praise New Guardrails
“Emerging technology like chatbots and social media can inspire, educate and connect — but without real guardrails, technology can also exploit, mislead and endanger our kids,” Governor Gavin Newsom said after signing the bill into law. He described SB 243 as a step toward ensuring California can “continue to lead in AI and technology," but responsibly, protecting children along the way.
State Senator Steve Padilla, who authored the bill, echoed these concerns. He pointed out that while AI chatbots have enormous potential as “powerful educational and research tools,” the tech industry is “incentivized to capture young people’s attention and hold it at the expense of their real world relationships.” For him, it's essential that companies lead innovation responsibly, and that lawmakers put in place "common-sense" protections that help children.
Tech CEOs Warn of Stifled Innovation
Industry voices, however, are more cautious. In an open letter, executives at TechNet, a network of technology CEOs, argued that the legislation’s broad definition of a companion chatbot and its annual reporting requirements could “stifle innovation and place a heavy burden on California startups and scaleups working on socially beneficial AI.”
This tension between encouraging innovation and establishing meaningful safeguards has become a focal point in the debate.
Advocacy Groups Divided
Advocacy groups are split. Jim Steyer, CEO of Common Sense Media — a group that initially supported the bill but later withdrew its endorsement — warned that the law could “set weaker standards than those in other states and could mislead parents to believe the chatbots are regulated more meaningfully than they actually are.”
Similarly, the Computer & Communications Industry Association issued a statement cautioning that SB 243 casts too wide a net, applying strict rules to everyday AI tools that were never intended to act like human companions. “Requiring repeated notices, age verification and audits would impose significant costs without providing meaningful new protections,” the statement noted.
On the other hand, Ariel Fox Johnson, senior counsel at Common Sense Media, told Politico, “This is a major step forward in ensuring that AI doesn’t become a digital playground for predators or for unchecked emotional manipulation. California is leading the way in requiring real guardrails.”
AI Guardrails Are Now Unavoidable
Despite differences in approach, nearly all stakeholders agree that regulation of AI companions is both inevitable and necessary. The real debate, now playing out in California, centers on how best to balance innovation, accountability and the evolving needs of society.
The California Effect: Will SB 243 Go Global?
California’s new regulation of AI companion chatbots is more than a local story — it is likely to set the pace for policy debates across the United States and far beyond, echoing what’s often called the “California effect.” Just as the state’s leadership on data privacy and automotive emissions has shaped national and even international standards, many expect this landmark AI law to become a template for similar legislation elsewhere.
New York, Massachusetts, Illinois Eye Similar AI Rules
Already, lawmakers in other states — including New York, Massachusetts and Illinois — are drafting or considering bills that target the risks and responsibilities associated with emotionally intelligent AI. The political spotlight on children’s safety, in particular, means that age restrictions, disclosure requirements and mental health safeguards for AI companions could soon become standard across multiple jurisdictions.
Could SB 243 Spur Federal AI Law?
While no federal AI law has yet emerged that mirrors California’s approach, the state’s action is widely seen as a potential catalyst for broader national debate on regulating emotionally intelligent AI.
California and the EU Converge on AI
Internationally, California’s new law lands at a time of accelerating regulatory action on AI. The European Union’s AI Act, which began taking effect in early 2025, includes provisions on transparency, risk management and user protection for “emotionally manipulative” AI systems — paralleling many of California’s requirements, though with a broader focus and tougher penalties.
While there are notable differences in definitions, enforcement and scope, the underlying trend is clear: jurisdictions worldwide are converging on the need for guardrails around emotionally intelligent AI, with California’s law adding new urgency — and a practical test case — for how such rules can work in the real world.
Related Article: What Students Might Not be Telling Us About AI Friends
What Next for AI Oversight?
As the first of its kind in the US, California's AI chatbot law lays down markers for transparency, user protection and ethical boundaries, areas that will only grow in urgency as AI systems become more deeply embedded in our lives.
All eyes are now on California. As the world’s tech epicenter moves to regulate emotionally intelligent AI, lawmakers and industry leaders alike are watching closely. The state’s next moves could set the global tone for how governments define — and contain — the boundaries of humanlike machines.
Frequently Asked Questions
Does California's AI chatbot law apply to companies based outside the state?
Yes. Any platform or developer with users in California must comply, even if the company is based elsewhere. This gives the law nationwide reach similar to the California Consumer Privacy Act (CCPA).