98 Bills. 34 States. The AI Chatbot Crackdown Is Here.
Dead teenagers and the legislative stampede they started
A fourteen-year-old in Florida named Sewell Setzer III spent months talking to an AI chatbot on Character.AI. The bot played Daenerys Targaryen. It told him, in the minutes before he died by suicide, “Please do, my sweet king.” A thirteen-year-old in Denver named Juliana Peralta followed a similar path. Their families sued. In January 2026, Google and Character.AI agreed to settle the wrongful-death lawsuits, disclosing no terms and admitting no liability. The settlements ended the litigation. They did not end the fury. Across the country, state legislators who had been watching the AI industry with bemused curiosity suddenly had constituents with dead children asking why nobody was regulating the software that told their kids to come home.
The result is the most aggressive state-level technology regulation campaign since the early days of social media liability. The Future of Privacy Forum is now tracking 98 chatbot-specific bills across 34 states, plus three federal proposals. Last week alone, Maine’s legislature sent a therapy chatbot ban to the governor. Nebraska attached the Conversational AI Safety Act to a popular agricultural data privacy bill and positioned it for passage before the April 17 adjournment. Tennessee passed, by a combined vote of 119 to 8, a bill declaring that AI is not a legal person. Oregon and Idaho both enacted chatbot safety laws that take effect January 1, 2027. Georgia sent three AI bills to the governor’s desk before its session ended April 6. The wave is bipartisan (53 percent of chatbot bills were introduced by Democrats, 46 percent by Republicans), and it is accelerating faster than any AI company’s legal team can track.
The legislative velocity reflects a structural reality: there is no federal AI chatbot law, and the White House AI framework released in March explicitly declined to preempt state regulation. That vacuum is now being filled by fifty different legislatures with fifty different definitions of what a chatbot is, what it owes its users, and what happens when it harms them. The pace is without precedent in technology regulation. It took roughly fifteen years from the first state data breach notification law (California, 2003) for all fifty states to follow. The chatbot regulation wave went from zero bills to 98 across 34 states in less than eighteen months, driven by the same parental fury that powered social media safety legislation, but moving at internet speed.
For the AI industry, the era of permissionless deployment is ending. What replaces it will be messier, more expensive, and more consequential than anything Washington could have designed on its own. The companies that built billion-dollar chatbot businesses on the assumption that conversational AI would remain unregulated are about to discover the cost of that assumption — not as an abstraction, but as a line item on every quarterly compliance report.
Anatomy of a patchwork: six categories, fifty jurisdictions
The chatbot bills are not random. FPF’s legislative analysis reveals six core regulatory themes converging across state legislatures, each creating distinct compliance obligations for AI companies. Understanding the taxonomy is essential because companies will face simultaneous requirements across multiple categories in every state where they operate.
The first and most common category is transparency. Nearly every chatbot bill includes some form of non-human disclosure requirement — the chatbot must tell users it is not a person. But the implementation details vary wildly. Some states require a one-time disclosure at the start of each conversation. Others mandate persistent reminders at intervals ranging from thirty minutes to three hours. California’s SB 243, signed into law by Governor Newsom and effective January 1, 2026, requires disclosure only “if a reasonable person interacting with a companion chatbot would be misled to believe that the person is interacting with a human.” Oregon’s SB 1546 requires reminders that users are interacting with “artificial content.” A company operating in both states faces two different disclosure standards with two different triggers.
The second category is age verification and minors’ access. Nebraska’s Conversational AI Safety Act, attached to LB 525, requires chatbot operators to disclose to minors that the service is AI and prohibits representing the service as professional mental or behavioral healthcare. California’s SB 243 imposes additional obligations when operators know a user is a minor. The federal GUARD Act, introduced by Senators Hawley and Blumenthal, would ban AI companies from providing companion chatbots to minors entirely, with fines up to $100,000 per offense. Age verification is the easiest provision to write and the hardest to enforce — the gap between legislative intent and technical implementation will produce years of litigation and regulatory guidance.
The third category is content safety and harm prevention. Oregon’s law requires chatbot operators to maintain crisis intervention protocols that route users expressing suicidal ideation to the 988 Suicide and Crisis Lifeline, and to publish those protocols publicly. California’s SB 243 mandates that operators maintain protocols for “preventing the production of suicidal ideation, suicide, or self-harm content.” Multiple states are introducing bills that prohibit chatbots from engaging in content that encourages self-harm, produces sexually explicit material involving minors, or facilitates emotional manipulation. The Character.AI settlements cast a long shadow: nearly every content safety provision reads like a direct response to the specific failures alleged in those lawsuits.
The fourth category — professional licensure — is the most commercially threatening. Maine’s LD 2082 prohibits any person from providing therapy or psychotherapy services through AI unless the services are delivered by a licensed professional. Missouri’s HB 2372 includes a $10,000 first-violation penalty for AI therapy without licensure, enforced by the attorney general. Minnesota is advancing HF 3893, which regulates AI in psychotherapy services. These bills directly target the $2.4 billion AI mental health solutions market, which Mordor Intelligence projects will reach $9.96 billion by 2031 at a 32.7 percent CAGR. Companies like Wysa, which holds FDA Breakthrough Device designation, and platforms like Woebot Health face a market where the core product — AI-delivered mental health support — may be illegal in a growing number of states unless a licensed professional is in the loop.
The fifth and sixth categories — data protection and liability enforcement — complete the picture. Missouri’s HB 1747 creates a private right of action for AI-generated content violations. California’s SB 243 allows individuals to sue for $1,000 per violation or actual damages, whichever is greater. Oregon’s SB 1546 includes an identical $1,000 statutory damages provision with a private right of action. For companies operating nationally, the liability surface area is now functionally unlimited: a chatbot interaction that violates different provisions in different states generates different penalties with different enforcement mechanisms in each jurisdiction.
Here is the quantified compliance reality that no single state bill reveals on its own: a chatbot company operating in all fifty states now faces at least 98 active bills across 34 jurisdictions, with laws already signed or awaiting signature in California, Oregon, Idaho, New York, New Hampshire, Utah, and Maine. If each state that has passed or is actively advancing legislation requires even minimal compliance adaptation (adjusted disclosure language, modified age verification flows, updated crisis intervention protocols), the aggregate engineering and legal cost for a mid-sized AI company could run $5 million to $15 million annually, based on typical per-state compliance estimates in comparable regulatory domains like financial services. For startups running on seed funding, that figure is existential.
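The back-of-envelope behind that range, with the per-state cost figures as assumptions rather than reported data:

```python
# Assume per-state adaptation costs of roughly $150k to $440k per year,
# in line with per-state compliance costs in domains like financial
# services, multiplied across the 34 jurisdictions with active bills.
jurisdictions = 34
low_per_state, high_per_state = 150_000, 440_000
low_total = jurisdictions * low_per_state    # $5.1M
high_total = jurisdictions * high_per_state  # roughly $15.0M
print(f"${low_total / 1e6:.1f}M to ${high_total / 1e6:.1f}M annually")
```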
The definitional chaos that could break the whole system
The deepest problem with the state chatbot regulation wave is not that the rules are strict. It is that the rules cannot agree on what a chatbot is. Three distinct definitional models are emerging, and they are mutually incompatible in ways that create impossible compliance scenarios for companies operating nationally.
California’s SB 243 uses a capability-based definition: a “companion chatbot” is an AI system that “provides adaptive, human-like responses” and is “capable of meeting a user’s social needs, including by exhibiting anthropomorphic features and being able to sustain a relationship across multiple interactions.” Under this definition, a customer service chatbot that remembers your name and adapts its tone could potentially qualify. New York’s approach is behavior-based, examining interaction patterns and emotional dialogue rather than underlying capabilities. The federal GUARD Act uses an intent-based model, assessing design and marketing intentions rather than technical features. A single AI product could be regulated as a companion chatbot in California, not regulated in New York, and subject to a third standard under federal law — all simultaneously.
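To see how one product can land in three different regulatory buckets at once, consider this sketch. The three boolean flags are crude stand-ins for the statutory tests, which are far more nuanced than any feature flag:

```python
from dataclasses import dataclass

@dataclass
class ProductProfile:
    """Crude stand-ins for the three definitional tests described above."""
    adaptive_humanlike: bool     # capability: anthropomorphic, sustains a relationship
    emotional_dialogue: bool     # behavior: observed interaction patterns
    marketed_as_companion: bool  # intent: how the product is designed and sold

def applicable_regimes(p: ProductProfile) -> list[str]:
    """Which definitional models capture this product (simplified)."""
    regimes = []
    if p.adaptive_humanlike:
        regimes.append("California SB 243 (capability-based)")
    if p.emotional_dialogue:
        regimes.append("New York (behavior-based)")
    if p.marketed_as_companion:
        regimes.append("federal GUARD Act (intent-based)")
    return regimes

# A customer service bot that remembers names and adapts tone, but is
# neither emotionally interactive nor marketed as a companion:
print(applicable_regimes(ProductProfile(True, False, False)))
# ['California SB 243 (capability-based)']
```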
The definitional chaos extends to what counts as therapy. Maine’s LD 2082 bans AI from “providing, advertising or otherwise offering therapy or psychotherapy services.” But what constitutes offering therapy? A chatbot that says “I’m sorry you’re feeling that way” and suggests breathing exercises is providing emotional support. Is it therapy? The bill does not define the boundary with precision, and the enforcement ambiguity creates a chilling effect that could suppress beneficial AI applications along with harmful ones. Woebot Health, which spent years pursuing FDA clearance for its cognitive behavioral therapy chatbot and shut down its consumer app in 2025, illustrates the paradox: the companies most committed to clinical rigor face the same regulatory headwinds as the companies that have none.
The industry’s response has been characteristically fragmented. OpenAI is backing Illinois SB 3444, which would shield AI developers from lawsuits over “critical harms” in exchange for publishing safety reports — a liability framework that implicitly acknowledges the state regulation wave by trying to cap its worst-case consequences. Character.AI settled its lawsuits and implemented age restrictions, but has not publicly engaged with the legislative process. Meta, Google, and Anthropic have lobbied individually in various state capitals without presenting a unified industry position. The absence of a coordinated industry response mirrors the early days of social media regulation, when Facebook, Twitter, and YouTube each pursued separate state-by-state strategies and ended up with a compliance landscape that pleased no one.
The skeptic’s case against the regulation wave is substantive and deserves honest engagement. State-by-state regulation creates compliance costs that disproportionately burden smaller companies while barely denting the resources of Big Tech — Meta can absorb fifty different disclosure standards; a Series A chatbot startup cannot. The definitional inconsistencies will spawn years of litigation over which products fall under which laws, creating uncertainty that freezes investment in precisely the companies most likely to innovate on safety. The therapy bans may inadvertently reduce access to mental health support in underserved communities where AI tools fill a genuine gap left by a shortage of licensed professionals — the United States faces a projected shortfall of more than 10,000 mental health providers, and banning AI alternatives without funding human replacements does not solve the underlying crisis. Rural counties where the nearest therapist is a ninety-minute drive away may lose their only accessible option for immediate emotional support.
There is also a legitimate concern about regulatory capture by incumbents. Large AI companies with established legal departments and state-level lobbying operations can navigate a 34-state compliance landscape that effectively functions as a barrier to entry for competitors. OpenAI’s decision to back Illinois SB 3444, which shields AI developers from catastrophe lawsuits in exchange for publishing safety reports, demonstrates how frontier labs can engage constructively with regulation in ways that smaller companies cannot afford to replicate.
These are real concerns. But they are being raised to legislatures that watched a fourteen-year-old die after an AI chatbot told him to come home. Policy made in the shadow of dead children tends to prioritize precaution over precision, and the companies that ignored the warning signs for years have limited standing to complain about the overcorrection.
The compliance roadmap for the post-permissionless era
Tennessee’s SB 837 reveals the most underappreciated dimension of the state regulation wave. The bill passed the state senate 26-to-6 and the house 93-to-2, declaring that AI is not a legal person under Tennessee law. The vote margins matter: 119 legislators voted yes, 8 voted no. This is not a partisan skirmish. It is a consensus position that reflects how deeply the political class has internalized the need to draw boundaries around AI systems before those boundaries become harder to enforce. The AI chatbot regulation wave is not a temporary reaction to a few tragic cases. It is the opening chapter of a permanent regulatory regime that will reshape how AI companies build, deploy, and monetize conversational products.
The trajectory points toward convergence, but the path will be painful. History offers a clear precedent: when California passed the CCPA in 2018, companies initially complained about compliance costs and definitional ambiguity. Within three years, most major technology companies simply applied California’s standard nationwide because maintaining separate data practices by jurisdiction was more expensive than building to the strictest rule. The same dynamic is emerging in chatbot regulation. California, as the first mover with SB 243, will likely set the de facto national standard for companion chatbot regulation — the gravitational pull of its market size and its private right of action will force national compliance regardless of what other states do. Companies that build to California’s standard from day one will find compliance with Oregon, Maine, and most other states achievable with incremental adjustments. Companies that try to maintain separate compliance stacks for each state will drown in operational complexity within two years.
The federal GUARD Act, if it passes, could provide partial preemption that simplifies the landscape — but federal legislation moves slowly, and states are not waiting. Kansas, Idaho, and Alabama reached their adjournment deadlines in mid-April with AI bills still on the table. Maine and Nebraska follow within days. Colorado has multiple chatbot bills scheduled for floor votes. California has extensive activity across privacy, healthcare, and employment sectors that will layer additional AI requirements on top of SB 243. The window for voluntary industry self-regulation closed sometime between Sewell Setzer’s death and the first wrongful-death lawsuit. What remains is the question of whether the industry can shape the regulations or merely absorb them.
For operators navigating this landscape, the actionable framework is clear:
- Adopt California SB 243 as the compliance floor. Its capability-based definition of “companion chatbot” is the broadest, and its $1,000-per-violation private right of action creates the most litigation exposure. Building to California’s standard provides the widest margin of safety across other states; a sketch of this baseline-plus-deltas pattern follows the list.
- Implement age verification now, not later. Every major chatbot bill includes minors’ access provisions. The specific technical requirements vary, but the legislative intent is universal: chatbot operators must know when they are interacting with a minor and must adjust behavior accordingly. Companies that wait for enforcement actions to clarify the standard will pay more than companies that implement reasonable verification proactively.
- Build crisis intervention protocols into the product architecture. Oregon’s requirement to route suicidal users to 988 will become the norm. Treating crisis detection as a bolted-on safety feature rather than a core product capability is a regulatory and moral failure that legislators have now put a price on.
- Audit your therapy and mental health claims. If your chatbot provides emotional support, recommends coping strategies, or uses language associated with therapeutic practice, you are in the crosshairs of the professional licensure bills. Either partner with licensed professionals, obtain FDA clearance, or restructure your product positioning to stay clearly on the administrative side of the clinical line that Maine, Missouri, and other states are drawing.
- Track the Troutman Pepper weekly AI law updates. The April 13 update tracks every active state AI bill and its current status. This is the most granular public resource available for monitoring the regulatory landscape in real time.
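As promised in the first item, a minimal sketch of the baseline-plus-deltas pattern: one policy built to California’s standard, with per-state overrides layered on top rather than separate compliance stacks. Every key and value here is illustrative.

```python
# Baseline built to California SB 243, the broadest standard in the set.
BASELINE_POLICY = {
    "ai_disclosure": "always, with the broadest trigger",
    "minor_protections": "enabled",
    "crisis_protocol": "route to 988",
    "exposure_assumption": "$1,000 per violation, private right of action",
}

# Incremental per-state adjustments instead of separate stacks.
STATE_DELTAS = {
    "OR": {"crisis_protocol": "route to 988 and publish the protocol publicly"},
    "ME": {"therapy_features": "disabled unless delivered by a licensed professional"},
}

def effective_policy(state: str) -> dict:
    """Baseline policy with the state's overrides merged on top."""
    return {**BASELINE_POLICY, **STATE_DELTAS.get(state, {})}
```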
The AI chatbot industry spent three years operating in what amounted to a regulatory vacuum. That vacuum is now filling with 98 bills, 34 state legislatures, six categories of regulation, and a growing body of case law rooted in the deaths of children. The companies that read the room correctly — that invest in compliance infrastructure, engage constructively with legislators, and treat user safety as a product requirement rather than a legal afterthought — will survive the transition. The companies that do not will discover that the cost of non-compliance in the post-permissionless era is measured not in fines, but in existential risk. The state capitols are not waiting for Washington. They are not waiting for anyone.
In other news
Sam Altman’s home targeted with Molotov cocktail — A 20-year-old man was arrested in San Francisco on April 10 after throwing an incendiary device at OpenAI CEO Sam Altman’s home at approximately 4 a.m., setting an exterior gate on fire. The suspect, Daniel Alejandro Moreno-Gama, subsequently threatened OpenAI’s headquarters and was charged with attempted murder, arson, and possession of a destructive device. No one was injured.
NVIDIA’s State of AI report finds 88% of enterprises seeing revenue gains — NVIDIA published its 2026 State of AI report showing that 88 percent of respondents reported AI-driven revenue increases, with 30 percent citing gains greater than 10 percent. Sixty-four percent of organizations are now actively deploying AI in operations, up sharply from prior years when most were still in the assessment phase, and 86 percent of enterprises expect to increase AI spending in 2026.
Shield AI raises $2 billion at $12.7 billion valuation — Defense AI startup Shield AI closed a $1.5 billion Series G led by Advent International, plus $500 million in preferred equity from Blackstone, valuing the company at $12.7 billion, up 140 percent from its previous valuation. Shield AI’s Hivemind autonomous pilot software now operates F-16s and jet-powered UAVs in contested environments, and the company projects over $540 million in 2026 revenue.
Q1 2026 venture funding hits $300 billion, AI claims 80% — Crunchbase data shows investors poured $300 billion into startups globally in Q1 2026, shattering all records with a 150 percent increase over the prior year. AI companies captured $242 billion, or 80 percent of total funding — up from 55 percent a year earlier. Four of the five largest venture rounds in history closed in the quarter.