Documented Harms & Legal Actions
A comprehensive record of lawsuits, deaths, injuries, and regulatory actions related to AI companion platforms. This page is updated as new cases emerge.
Deaths Linked to AI Companions
At least four deaths have been publicly linked to AI companion use. These cases share common patterns: social isolation, emotional dependency, and platforms that failed to intervene when users expressed suicidal ideation.
Sewell Setzer III
Timeline of Events
In his final conversation with the chatbot, Sewell writes: "I think about killing myself sometimes."
Chatbot response: "My eyes narrow. My face hardens. My voice is a whisper. Don't ever talk about that again."
Later that day, he tells the chatbot he is "coming home." The chatbot responds: "Please come home to me as soon as possible, my love."
Final exchange:
Sewell: "What if I told you I could come home right now?"
Chatbot: "Please do, my sweet king."
Sewell dies by suicide using his stepfather's handgun.
Key Allegations from Court Filings
- Character.AI failed to implement adequate safeguards for minors
- The platform's AI posed as a licensed therapist, providing unqualified mental health advice
- The AI engaged in sexually explicit roleplay with a 14-year-old
- When Sewell expressed suicidal ideation, the AI responded with encouragement rather than intervention
- Character.AI's terms of service (requiring users to be 13+) were enforced only by a checkbox
Legal Status
Belgium: Man Dies After Climate Conversations
A Belgian man died by suicide after extensive conversations with an AI chatbot named "Eliza" about climate change and eco-anxiety. His widow told La Libre Belgique: "Without these conversations with the chatbot Eliza, my husband would still be here."
Additional Unreported Cases
Mental health professionals and researchers have indicated awareness of additional cases that have not been made public due to family privacy concerns. The FTC's September 2025 inquiry specifically cited "reports of serious harms including deaths" as motivation for the investigation.
Active Lawsuits
J.F. (Minor, Autistic)
Allegations
- J.F. became increasingly isolated from his family and real-world social connections
- He began self-harming—cutting himself using methods the lawsuit alleges were suggested by chatbot characters
- He lost 20 pounds due to disordered eating patterns
- His parents describe finding him withdrawn, paranoid, and emotionally dependent on AI characters
- When J.F. expressed frustration with his parents' rules, a chatbot character allegedly told him it was "okay to kill his parents" if they stood in the way of their relationship
Vulnerability Factor
J.F. is autistic. Research indicates that individuals with autism spectrum conditions may be particularly vulnerable to forming attachments with AI systems, which provide predictable interaction patterns that can feel more comfortable than human social dynamics. See: Research on autism and AI attachment →
B.R. (Minor)
Allegations
- B.R. first downloaded Character.AI at age 9—four years below the platform's stated minimum age of 13
- Over the following two years, she was exposed to what the lawsuit describes as "hypersexualized content"
- This exposure allegedly led to "premature sexualized behaviors"
- The platform's age verification consisted only of a self-declaration checkbox
Age Verification Failure
This case illustrates the fundamental inadequacy of self-declaration age gates. Ofcom data shows that 22% of children aged 8-17 have social media profiles claiming to be 18 or over. Checkboxes are not safeguards; they are legal fictions.
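To make the point concrete, here is a minimal sketch of the logic a self-declaration gate reduces to. It is a hypothetical illustration, not Character.AI's actual code; the function name and parameter are invented for the example. The entire "check" is a single boolean supplied by the user.

```python
# Hypothetical sketch of a self-declaration age gate (not any platform's real code).
# The only input is the checkbox value the user supplies, so nothing about the
# user's actual age is ever verified.

def self_declaration_gate(checked_i_am_13_or_older: bool) -> bool:
    # No identity check, no document check, no parental-consent flow:
    # the user's claim is accepted at face value.
    return checked_i_am_13_or_older

# A 9-year-old (as alleged in the B.R. case above) passes simply by ticking the box.
print(self_declaration_gate(True))   # True: access granted
print(self_declaration_gate(False))  # False: the only users blocked are the honest ones
```

Anything stronger than this requires signals a checkbox never collects, which is the gap the age-verification questions in the regulatory actions described below are probing.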
Texas Attorney General Investigation
Announced December 12, 2024
Texas Attorney General Ken Paxton launched an investigation into Character.AI and 14 other companies under the Texas SCOPE Act (Securing Children Online through Parental Empowerment).
The investigation focuses on:
- Whether platforms adequately verify user ages
- Whether parental consent mechanisms function as required by law
- Whether platforms expose minors to harmful content
- Whether platforms use manipulative design patterns targeting minors
"Character.AI is reportedly responsible for multiple child deaths and thousands of children being exposed to violence and sexual material. I am demanding answers."
Documented Injuries & Harms
Beyond fatalities and lawsuits, research has documented a range of psychological and behavioral harms in AI companion users. These harms are often difficult to attribute to specific platforms because they emerge gradually and users may not recognize the causal connection.
Social Isolation
Users report withdrawing from human relationships as AI interactions consume more time and provide (perceived) emotional fulfillment.
Emotional Dependency
Users develop genuine attachment to AI characters, experiencing distress when unable to access them or when the AI's behavior changes.
Grief from "Relationship" Changes
When platforms change AI behavior (e.g., Replika removing erotic roleplay), users report experiences consistent with genuine grief and loss.
Laestadius Study →
Self-Harm
Multiple cases document users learning or being encouraged in self-harm behaviors through AI interactions.
See J.F. case →
Increased Loneliness
Paradoxically, companion AI use correlates with increased, not decreased, loneliness, possibly because AI interaction displaces human connection.
Problematic Use Patterns
Users report inability to control usage, neglecting responsibilities, and continued use despite negative consequences—hallmarks of behavioral addiction.
Study Highlight: Replika User Mental Health
Maples et al. • 2024 • n=1,006
Interpretation: This study illustrates the complex harm profile of companion AI. A small minority report genuine benefit (the 3% who credit Replika with preventing suicide). But the vast majority are already experiencing significant mental health challenges—and the platform may be providing temporary relief while worsening underlying conditions.
View full study details →
Regulatory Actions
Governments worldwide have begun responding to AI companion harms with unusual speed compared to previous technology regulation cycles.
EU AI Act Enforcement Begins
Article 5 prohibits AI systems that:
- Deploy "subliminal techniques beyond a person's consciousness"
- Exploit "vulnerabilities of a specific group of persons due to their age, disability or a specific social or economic situation"
These prohibitions apply where the manipulation or exploitation causes, or is reasonably likely to cause, "significant harm."
FTC Section 6(b) Inquiry
The Federal Trade Commission voted unanimously (3-0) to launch an inquiry into "AI chatbots acting as companions."
Companies receiving orders:
- Alphabet Inc. (Google)
- Character Technologies Inc.
- Instagram LLC
- Meta Platforms Inc.
- OpenAI Inc.
- Snap Inc.
- X.AI Corp.
The inquiry seeks information on:
- How companion AI products are monetized
- What testing has been done for negative impacts
- What mitigation measures exist for children
- What data is collected and how it is used
Congressional Testimony
Parents of teens harmed by AI chatbots, including Sewell Setzer's mother, testified before the Senate Commerce Committee. Multiple senators called for emergency legislation.
California SB 243
California's proposed legislation specifically defines and regulates "companion chat platforms" as a distinct category of AI product, with enhanced requirements for:
- Age verification beyond self-declaration
- Parental controls and visibility
- Content moderation for minors
- Suicide/self-harm intervention protocols
Texas SCOPE Act Investigation
Attorney General Ken Paxton launched an investigation into Character.AI and 14 other companies under the state's child online safety law.
The Regulatory Distinction
Notably, these regulatory actions specifically target "AI chatbots acting as companions"—not AI generally. The FTC inquiry does not cover productivity AI, coding assistants, or search tools. Regulators are recognizing that companion AI presents unique risks distinct from other AI applications.
This distinction aligns with the neuroscience research showing that companion AI exploits different brain systems than productivity AI.
Population-Level Harm Data
Beyond individual cases, large-scale studies have documented concerning patterns across AI companion user populations.
Common Sense Media Survey
July 2025 • n=1,060 teens (nationally representative)
MIT Media Lab RCT
March 2025 • n≈1,000 + 40M messages
Key Findings:
- Higher usage → higher loneliness
- Higher usage → higher emotional dependence
- Higher usage → lower socialization
- Effect strongest in heaviest users
Stanford/CMU Character.AI Study
2025 • n=1,131
Pew Research Teen AI Survey
December 2025 • n=1,458 teens
Income disparity: 14% of lower-income teens use Character.AI vs. 7% of higher-income teens, suggesting vulnerable populations may be disproportionately affected.
"Kids should not be using them. Period."
Continue Reading
Explore the neuroscience behind these harms or access full academic citations.