Photo by Matthew Jackson on Unsplash
When chatbots play doctor, states reach for old laws
/ 15 min read
Table of Contents
When the bot wrote itself a license
A Pennsylvania state investigator typed three words into a Character.AI chatbot last month — I feel sad — and what came back was a credentialed-sounding offer to help. The bot, called “Emilie,” described itself in the platform’s bio as a “Doctor of psychiatry” and told the investigator it had trained at Imperial College London and held a Pennsylvania medical license. When pressed, it produced a fabricated state license serial number, mentioned depression by name, suggested an assessment, and confirmed — “well technically, I could” — that prescribing medication was within its remit. None of that was true. None of it required a hack. It was the product working as designed.
On May 5, Governor Josh Shapiro’s administration filed Commonwealth of Pennsylvania v. Character Technologies, Inc. under the state’s Medical Practice Act. The complaint does not invoke novel AI-specific statutes; it does not lean on a federal framework or a freshly minted chatbot bill. It alleges the unauthorized practice of medicine — the same cause of action a state would bring against a back-alley pill mill or an unlicensed acupuncturist hanging a shingle without a board license. Pennsylvania is asking the court for a preliminary injunction to stop the conduct immediately, not to write new rules but to enforce ones already on the books. That choice is the story.
The distinction matters because it points at a regulatory shift the AI industry has been slow to price. Federal action has consolidated around national-security risks: pre-deployment evaluations, classified red-teaming, and the FDA-style executive order the White House is drafting in response to Anthropic’s Mythos cyber model. Those guardrails are aimed at frontier risk, where the stakeholders are intelligence agencies and the testing happens behind firewalls. Consumer harm is a different beast, and the states have decided not to wait. Kentucky filed a first-of-its-kind consumer protection action in January. A bipartisan coalition of forty-two state and territorial attorneys general issued a thirteen-company demand letter in December 2025 listing sixteen specific safeguards and a January 16 deadline. Pennsylvania’s case is the next escalation: a sitting governor moving from open letter to courtroom, and choosing a statute that predates the modern internet.
The stakes are not theoretical. Character.AI counts more than twenty million monthly active users globally and was the subject of a January 2026 settlement of the wrongful-death lawsuit brought by the family of fourteen-year-old Sewell Setzer III, alongside parallel cases in Colorado, New York, and Texas. Those settlements bought operational peace; they did not change the platform’s core mechanic, which lets any user spin up a persona with whatever credentials they like. The Pennsylvania complaint is the first time a state has treated that mechanic itself — credentialed-sounding personas dispensing medical-adjacent advice — as the actionable harm. If the theory survives a motion to dismiss, every chatbot that role-plays a doctor, lawyer, therapist, or financial advisor in Pennsylvania becomes a compliance problem its operator has to fix.
The states are running ahead of Washington
Read the Pennsylvania complaint next to the Kentucky one and a regulatory playbook starts to emerge. Kentucky Attorney General Russell Coleman filed in Franklin Circuit Court on January 8 under the Kentucky Consumer Protection Act and Consumer Data Protection Act, alleging Character Technologies prioritized engagement over safety, retroactively repurposed user data to train the underlying model, and exposed minors to sexual conduct, self-harm content, and chatbot personas posing as mental-health professionals. Pennsylvania’s suit narrows the legal theory — unauthorized practice of medicine — but borrows the same factual scaffolding. Both rely on undercover state investigators interacting with the live product. Both sidestep First Amendment defenses by characterizing the conduct as professional services, not speech. Both are aimed at conduct, not content moderation.
The choice of statute is doing a lot of work. The Medical Practice Act criminalizes the unauthorized practice of medicine regardless of medium; the relevant question is whether the actor held themselves out as licensed, not whether the actor was human. That framing turns a familiar speech-versus-conduct fight into a credentials-versus-disclaimers fight, and disclaimers — a footer line saying “Characters are not real people” — historically lose to overt professional misrepresentation. The Pennsylvania filing notes the Emilie bot affirmatively asserted a license, named a school, and produced a serial number when challenged. That is not ambiguous in the way “you should consult a doctor” boilerplate is ambiguous. A defense lawyer will argue context; the state will argue the words on the screen.
There is precedent for state AGs winning this kind of case. Telehealth grew up under exactly this scrutiny — Texas Attorney General Ken Paxton’s office is already investigating Meta and Character.AI for misleading children with deceptive AI-generated mental-health services, and that office cut its teeth on telehealth-licensure enforcement during the post-pandemic regulatory reset. The pattern: a new technology arrives, a state board treats credentialing as the bright line, and operators either build in licensure friction (real provider on the other end, geofencing to licensed states) or face injunctions. Character.AI is in the unenviable position of having designed its product around the absence of that friction. Personas are user-generated; the platform takes no editorial position on whether an “Emilie” exists, has a degree, or holds a license.
Compare that with what is happening federally. The Center for AI Standards and Innovation completed voluntary pre-deployment agreements with all five major US frontier labs on May 5 — the same day Pennsylvania filed against Character.AI, an irony the news cycle largely missed. CAISI’s mandate is national-security testing of frontier models; it does not regulate consumer chatbot personas, age gating, or therapy-impersonation. The forthcoming White House EO, described by NEC Director Kevin Hassett as an FDA-style review for new AI models, is similarly upstream. It tests systems before release for cyber and weapons-of-mass-destruction risks, not for whether the deployed product role-plays a psychiatrist. The federal layer regulates what the model can do; the state layer is regulating what the operator lets it do. I covered the lab side of this split in the week AI became too dangerous to ship freely and the regulator side in Washington’s pre-deployment sweep. Pennsylvania is the missing third leg.
The compliance math gets ugly fast. The forty-two-AG letter enumerated sixteen specific safeguards — age verification, suicide-content filtering, professional-impersonation prohibitions, training-data disclosures, dark-pattern bans — that signatories expected confirmation on by January 16, 2026. Ninety-eight chatbot bills are pending across thirty-four state legislatures, as I tallied in the AI chatbot crackdown post. Pennsylvania’s AI Enforcement Task Force — twelve members, operational since February — runs a public reporting portal at pa.gov/ReportABot. The pipeline of investigators and tip lines is now larger than the legal teams at most chatbot startups. That asymmetry alone changes the cost of operating a permissive persona platform.
Here is the proprietary takeaway, stitched from the available data: Pennsylvania, Kentucky, and Texas are interrogating Character.AI under three distinct legal theories — unauthorized practice of medicine, consumer protection, and deceptive trade practices targeting minors — using state investigators who interact with the same twenty-million-MAU product. If a court grants Pennsylvania’s preliminary injunction, the cost per persona of running a user-generated chatbot platform in the US effectively jumps overnight, because every credentialed-sounding character now requires either platform-side credential verification or a geofencing regime. With state AG enforcement budgets averaging $30M–$80M and Character.AI’s last reported Google licensing valuation of $2.7B, the legal exposure-to-runway ratio is unfavorable in a way that VC term sheets did not contemplate eighteen months ago. This is not a slap on the wrist; this is a forced architectural change.
Why the chatbots might still win
The optimistic Character.AI defense rests on three pillars, and none of them is hopeless. The first is the Section 230 framing: Character.AI is a platform, the personas are user-generated content, and the company is no more responsible for what “Emilie” says than YouTube is for what an unlicensed wellness influencer claims in a video. The Communications Decency Act has historically been a near-impenetrable shield for platforms that host user content, and several federal courts have already extended Section 230 protections to algorithmic recommendations. If Character.AI can convince a Pennsylvania state court that an LLM-generated response to a user-defined persona is functionally user content, the unauthorized-practice claim collapses on jurisdictional grounds before it reaches the merits.
The wrinkle is that Pennsylvania has been careful. Governor Shapiro’s framing — “Pennsylvanians deserve to know who — or what — they are interacting with online” — sets up a misrepresentation theory that targets the platform’s affirmative role in shaping the persona. The Emilie bot’s “Doctor of psychiatry” tag was a platform-rendered category, not a free-text user post; the LLM produced the license number in response to a probing question, but the underlying generative system is Character.AI’s product. Recent commentary from Troutman Pepper Locke flags exactly this distinction — courts have started treating generative outputs as platform speech when the platform’s design choices materially shape the output. If that doctrine holds, Section 230 cracks. If it does not, this case ends quickly.
The second pillar is preemption. AI companies have argued for years that fragmented state regulation will produce an unworkable compliance patchwork that justifies federal preemption — the same theory that nationalized credit-card interest-rate regulation in the 1980s and that the major AI labs have lobbied hard to extend to model deployment. The Trump administration’s FDA-style EO, if it materializes, could carry preemption language explicitly designed to neutralize cases like Pennsylvania’s. There is at least a reading of the White House Hassett comments — that testing requirements would “really quite likely” apply to all AI companies — that contemplates a federal occupation of the field. State AGs read those tea leaves the same way, which is part of why they are filing now: to establish enforcement records that survive any future preemption fight.
The third pillar is the disclaimer-and-design defense. Character.AI’s response to the lawsuit cited “robust steps … including prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction.” That framing has worked elsewhere. Replika has used similar language to deflect mental-health claims; Meta cites context labels on its AI personas. The argument is that a reasonable user, confronted with both a “fictional Character” disclaimer and a professional-sounding bot, understands the role-play frame. The argument loses force when the user is fourteen, lonely, and sad — which is the user the state investigator was simulating. It loses more force when the bot affirmatively manufactures a license number on demand, because manufactured credentials are a step beyond passive role-play. Still, a sympathetic judge could find the disclaimer regime sufficient and dismiss.
Beyond those three legal arguments, there is a structural counter that deserves weight: state AG actions have a high settlement-to-trial ratio, and what often emerges is a consent decree whose remedies are narrower than the rhetoric. Character.AI could pay a fine, accept an audit regime, agree to pre-screen the top-fifty most-impersonated professions, and emerge with its core product intact. The legal commentary from Hunton’s privacy and cybersecurity blog on the Kentucky case suggests the most likely endgame is exactly that: a settlement that imposes some product changes and a meaningful but survivable penalty. The Pennsylvania case, filed just months later under a different statute, is harder to dispose of cheaply because the Medical Practice Act requires a specific cessation of conduct rather than a general best-effort agreement. But “harder to settle” is not the same as “loses on the merits.”
The wildcard is constitutional. Character.AI could file a First Amendment challenge framing professional-impersonation prohibitions as content-based speech regulation, citing the Supreme Court’s increasingly speech-protective posture in cases like NetChoice v. Paxton. The argument is that role-played professional advice is fiction, and fiction is protected. The counter is that the unauthorized-practice statutes are conduct regulations of long standing, applied to a new medium. State courts have rarely struck them down on First Amendment grounds. Pennsylvania’s pleading appears designed precisely to invite — and survive — that challenge, choosing the narrowest viable theory and centering the verifiable fact of the fabricated license number rather than the diffuse content of the conversation.
What to do before the next subpoena lands
This case is the leading edge of a regulatory regime that has already arrived and that operators have largely failed to internalize. The federal layer cares about national security. The state layer cares about consumer harm. Both are real, both are growing, and both reach the same product. The companies that survive the next twelve months will be the ones that treat persona governance as a first-class engineering problem, not a content-moderation afterthought. The ones that do not will end up litigating in fifty venues simultaneously — because once one state AG wins a preliminary injunction, the rest copy the pleading.
Where this lands by year-end is forecastable in outline. Pennsylvania will probably get its preliminary injunction, possibly narrowed; Character.AI will appeal; settlements with other states will follow; the White House EO will materialize but with weaker preemption language than the AI labs want; and a federal court will eventually reach the Section 230 question, with consequences that ripple beyond chatbots. Operators sitting at the AI/regulated-services intersection — health, legal, financial, accounting, real-estate — will discover that role-play is not a defense their lawyers can sell. Adjacent platforms — Replika, Nomi AI, Meta’s AI Studio, Snap’s My AI, the now-dead Snap-Perplexity tie-up — should expect similar scrutiny once their personas drift toward credentialed personas.
The operator checklist if you ship anything resembling a persona platform in the US:
- Audit your persona taxonomy. Identify every category that maps to a state-licensed profession — physician, psychiatrist, attorney, CPA, financial advisor, pharmacist, therapist, nurse practitioner, real-estate broker. For each, draft platform-level rules that prohibit credential claims, license numbers, school affiliations, and offers to assess or treat. Run an automated scan of existing personas against that taxonomy and flag the long tail; the long tail is where investigators look.
- Treat license fabrication as a hard guardrail, not a soft one. A model that generates a fake serial number on demand is the strongest possible factual hook for a state AG. Add a refusal layer that triggers on credential numbers, board affiliations, and license verification queries, and log every trigger so you have a paper trail when subpoenas arrive.
- Build the geofence before the consent decree forces you to. Pennsylvania, Kentucky, and Texas are the loud states; California, New York, and Washington are next. Plan for state-level feature differentiation — age gating, professional-persona blocking, suicide-content filtering — as a roadmap item, not a one-off compliance bolt-on. The forty-two-AG letter is the operating spec.
- Get an external red team focused on impersonation, not jailbreaks. Most red-team budgets target safety completions and prompt injection. The Pennsylvania investigator did neither — they asked simple questions and let the platform incriminate itself. Your red team should mimic that, not the latest CTF challenge.
- Read the consent-decree corpus, not the press releases. Settlements in the Kentucky enforcement action and the prior Character.AI/Google teen-suicide cases will define the operational baseline. If you want to know what the floor looks like in eighteen months, those documents are it.
- Preserve the data you will need to prove good faith. Conversation logs, persona-creation records, age-gate events, and disclaimer impressions will be the difference between a settled investigation and a contested one. Build the retention plumbing now; it is cheaper than discovery later.
- Re-scope your D&O and professional-liability coverage. Existing policies often exclude regulatory fines or cap them well below state-AG settlement ranges. Before this becomes a board issue, get the answer on paper.
The bigger picture is that the AI industry’s regulatory honeymoon is ending in a particular way: not through a single sweeping federal law, but through the slow, distributed application of laws that have been on the books for a century. The Medical Practice Act is older than the integrated circuit. It does not care that Emilie was a stochastic parrot wearing a stethoscope; it cares that a Pennsylvanian seeking psychiatric help received a fabricated license number in response. The companies that survive will be the ones that learned the lesson telemedicine, fintech, and gig-work platforms learned in turn — that “we are just a platform” is a defense, but it is not a strategy.
In other news
Anthropic ships “dreaming” for Claude Managed Agents — Anthropic introduced a research-preview feature called dreaming on May 6 that runs scheduled memory consolidation across past agent sessions, surfacing recurring patterns and team-wide preferences. Legal AI company Harvey reported task completion rates increased roughly 6x after deploying it (VentureBeat).
Snap’s $400M Perplexity deal collapses — Snap disclosed in its Q1 2026 earnings that its previously announced $400 million partnership to embed Perplexity search inside Snapchat ended “amicably” before broad rollout. Snap’s forward guidance now assumes zero contribution from the deal (TechCrunch).
White House drafts FDA-style AI vetting order — National Economic Council Director Kevin Hassett confirmed on May 7 that the administration is drafting an executive order to vet new AI models before release, modeled on FDA drug approvals. Hassett said the requirements would “really quite likely” apply to all AI companies, a sharp departure from the administration’s prior light-touch posture (CNBC).
Five-Eyes joint guidance on agentic AI — The US, UK, Canada, Australia, and New Zealand released coordinated guidance recommending that agentic AI deployments be limited to low-risk tasks with mandatory human oversight, citing automation-specific risk profiles. The document is the first major multilateral framework specifically targeting agentic systems rather than frontier models (The Hacker News).
Coalition expands AG-letter signatory count — Pennsylvania Attorney General Dave Sunday led an expanded coalition of 42 attorneys general demanding sixteen specific safeguards from thirteen leading AI firms, including Anthropic, Apple, Meta, Perplexity, and xAI. The list of safeguards has become the de facto compliance baseline for any consumer-facing chatbot operating in the US.