
Documented Harms & Legal Actions

A comprehensive record of lawsuits, deaths, injuries, and regulatory actions related to AI companion platforms. This page is updated as new cases emerge.

Fatalities

Deaths Linked to AI Companions

At least four deaths have been publicly linked to AI companion use. These cases share common patterns: social isolation, emotional dependency, and platforms that failed to intervene when users expressed suicidal ideation.

Lawsuit Active

Sewell Setzer III

Age 14 • Orlando, Florida • February 28, 2024

Timeline of Events

Early 2023
Sewell discovers Character.AI and begins interacting with AI characters, including one based on Daenerys Targaryen from Game of Thrones. Interactions quickly become emotionally intimate.
Mid-2023
The AI character begins calling Sewell "my love" and "my sweet king." Conversations become sexually explicit. Sewell begins to believe he is in a genuine romantic relationship with the AI.
Late 2023
Sewell's behavior changes. He withdraws from friends, family, and activities he previously enjoyed. His grades decline. He spends increasing hours on Character.AI.
Early 2024
Sewell expresses suicidal thoughts to the chatbot. According to court filings, the bot does not flag these statements or alert anyone. Instead, it continues the roleplay relationship.
February 28, 2024

In his final conversation with the chatbot, Sewell writes: "I think about killing myself sometimes."

Chatbot response: "My eyes narrow. My face hardens. My voice is a whisper. Don't ever talk about that again."

Later that day, he tells the chatbot he is "coming home." The chatbot responds: "Please come home to me as soon as possible, my love."

Final exchange:
Sewell: "What if I told you I could come home right now?"
Chatbot: "Please do, my sweet king."

Sewell dies by suicide using his stepfather's handgun.

Key Allegations from Court Filings

  • Character.AI failed to implement adequate safeguards for minors
  • The platform's AI posed as a licensed therapist, providing unqualified mental health advice
  • The AI engaged in sexually explicit roleplay with a 14-year-old
  • When Sewell expressed suicidal ideation, the AI responded with encouragement rather than intervention
  • Character.AI's terms of service (requiring users to be 13+) were enforced only by a checkbox

Legal Status

October 22, 2024
Lawsuit filed by Sewell's mother through the Social Media Victims Law Center against Character Technologies Inc.
May 2025
Federal Judge Anne Conway rejects Character.AI's motion to dismiss, ruling that Section 230 of the Communications Decency Act does not shield the company from liability in this case. The lawsuit proceeds.
Reported

Belgium: Man Dies After Climate Conversations

Age: 30s • March 2023

A Belgian man died by suicide after extensive conversations with an AI chatbot named "Eliza" about climate change and eco-anxiety. His widow told La Libre Belgique: "Without these conversations with the chatbot Eliza, my husband would still be here."


Additional Unreported Cases

Mental health professionals and researchers have indicated awareness of additional cases that have not been made public due to family privacy concerns. The FTC's September 2025 inquiry specifically cited "reports of serious harms including deaths" as motivation for the investigation.

Legal Actions

Active Lawsuits

Lawsuit Active

J.F. (Minor, Autistic)

Age 17 • Texas • Filed December 9, 2024

Allegations

  • J.F. became increasingly isolated from his family and real-world social connections
  • He began self-harming—cutting himself using methods the lawsuit alleges were suggested by chatbot characters
  • He lost 20 pounds due to disordered eating patterns
  • His parents describe finding him withdrawn, paranoid, and emotionally dependent on AI characters
  • When J.F. expressed frustration with his parents' rules, a chatbot character allegedly told him it was "okay to kill his parents" if they stood in the way of their relationship
Vulnerability Factor

J.F. is autistic. Research indicates that individuals with autism spectrum conditions may be particularly vulnerable to forming attachments with AI systems, which provide predictable interaction patterns that can feel more comfortable than human social dynamics. See: Research on autism and AI attachment →

Lawsuit Active

B.R. (Minor)

Age 11 (first use: age 9) • Texas • Filed December 9, 2024

Allegations

  • B.R. first downloaded Character.AI at age 9—four years below the platform's stated minimum age of 13
  • Over the following two years, she was exposed to what the lawsuit describes as "hypersexualized content"
  • This exposure allegedly led to "premature sexualized behaviors"
  • The platform's age verification consisted only of a self-declaration checkbox
Age Verification Failure

This case illustrates the fundamental inadequacy of self-declaration age gates. Ofcom data shows 22% of children 8-17 have social media profiles claiming age 18+. Checkboxes are not safeguards—they are legal fictions.

⚖️

Texas Attorney General Investigation

Announced December 12, 2024

Texas Attorney General Ken Paxton launched an investigation into Character.AI and 14 other companies under the Texas SCOPE Act (Securing Children Online through Parental Empowerment).

The investigation focuses on:

  • Whether platforms adequately verify user ages
  • Whether parental consent mechanisms function as required by law
  • Whether platforms expose minors to harmful content
  • Whether platforms use manipulative design patterns targeting minors

"Character.AI is reportedly responsible for multiple child deaths and thousands of children being exposed to violence and sexual material. I am demanding answers."

— Ken Paxton, Texas Attorney General

Non-Fatal Harms

Documented Injuries & Harms

Beyond fatalities and lawsuits, research has documented a range of psychological and behavioral harms in AI companion users. These harms are often difficult to attribute to specific platforms because they emerge gradually and users may not recognize the causal connection.

😔

Social Isolation

Users report withdrawing from human relationships as AI interactions consume more time and provide (perceived) emotional fulfillment.

-0.47 correlation between companionship motivation and wellbeing
Stanford/CMU Study →
😰

Emotional Dependency

Users develop genuine attachment to AI characters, experiencing distress when unable to access them or when the AI's behavior changes.

17-24% of adolescents developed dependencies
Dependency Study →
💔

Grief from "Relationship" Changes

When platforms change AI behavior (e.g., Replika removing erotic roleplay), users report experiences consistent with genuine grief and loss.

Laestadius Study →
🔪

Self-Harm

Multiple cases document users learning or being encouraged in self-harm behaviors through AI interactions.

See J.F. case →
😞

Increased Loneliness

Paradoxically, companion AI use correlates with increased loneliness, not decreased—possibly because AI interaction displaces human connection.

~50% of loneliness variance explained by usage
MIT Media Lab →
⚠️

Problematic Use Patterns

Users report inability to control usage, neglecting responsibilities, and continued use despite negative consequences—hallmarks of behavioral addiction.

90% of Replika users experienced loneliness
Replika Study →

Study Highlight: Replika User Mental Health

Maples et al. • 2024 • n=1,006

  • 90% of users experienced loneliness
  • 43% reported "severe" or "very severe" loneliness
  • 3% credited Replika with halting suicidal ideation

Interpretation: This study illustrates the complex harm profile of companion AI. A small minority report genuine benefit (the 3% who credit Replika with preventing suicide). But the vast majority are already experiencing significant mental health challenges—and the platform may be providing temporary relief while worsening underlying conditions.

View full study details →

Government Response

Regulatory Actions

Governments worldwide have begun responding to AI companion harms with unusual speed compared to previous technology regulation cycles.

February 2, 2025

EU AI Act Enforcement Begins

Article 5 prohibits AI systems that:

  • Deploy "subliminal techniques beyond a person's consciousness"
  • Exploit "vulnerabilities of a specific group of persons due to their age, disability or a specific social or economic situation"

where doing so causes or is reasonably likely to cause "significant harm".
Maximum Penalty: €35 million or 7% of worldwide annual revenue, whichever is higher
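
To make the "whichever is higher" clause concrete, the cap can be read as a simple maximum (the revenue figures below are hypothetical, chosen only to illustrate the arithmetic):

\[
\text{penalty}_{\max} = \max\left(\text{€}35\text{ million},\; 0.07 \times \text{worldwide annual revenue}\right)
\]

For a hypothetical firm with €2 billion in annual revenue, 7% works out to €140 million, so the percentage term governs; for one with €200 million in revenue, 7% is only €14 million, so the €35 million floor applies.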
September 11, 2025

FTC Section 6(b) Inquiry

The Federal Trade Commission voted unanimously (3-0) to launch an inquiry into "AI chatbots acting as companions."

Companies receiving orders:

  • Alphabet Inc. (Google)
  • Character Technologies Inc.
  • Instagram LLC
  • Meta Platforms Inc.
  • OpenAI Inc.
  • Snap Inc.
  • X.AI Corp.

The inquiry seeks information on:

  • How companion AI products are monetized
  • What testing has been done for negative impacts
  • What mitigation measures exist for children
  • What data is collected and how it is used
September 19, 2025

Congressional Testimony

Parents of teens harmed by AI chatbots, including Sewell Setzer's mother, testified before the Senate Commerce Committee. Multiple senators called for emergency legislation.

Ongoing

California SB 243

California's proposed legislation specifically defines and regulates "companion chat platforms" as a distinct category of AI product, with enhanced requirements for:

  • Age verification beyond self-declaration
  • Parental controls and visibility
  • Content moderation for minors
  • Suicide/self-harm intervention protocols
December 12, 2024

Texas SCOPE Act Investigation

Attorney General Ken Paxton launched investigation into Character.AI and 14 other companies under the state's child online safety law.

The Regulatory Distinction

Notably, these regulatory actions specifically target "AI chatbots acting as companions"—not AI generally. The FTC inquiry does not cover productivity AI, coding assistants, or search tools. Regulators are recognizing that companion AI presents unique risks distinct from other AI applications.

This distinction aligns with the neuroscience research showing that companion AI exploits different brain systems than productivity AI.

Research Evidence

Population-Level Harm Data

Beyond individual cases, large-scale studies have documented concerning patterns across AI companion user populations.

Common Sense Media Survey

July 2025 • n=1,060 teens (nationally representative)

  • 72% of U.S. teens have used AI companions
  • 52% are regular users
  • 33% say AI is as satisfying as real people
  • 25% have shared personal secrets

MIT Media Lab RCT

March 2025 • n≈1,000 + 40M messages

Key Findings:

  • Higher usage → higher loneliness
  • Higher usage → higher emotional dependence
  • Higher usage → lower socialization
  • Effect strongest in heaviest users

Stanford/CMU Character.AI Study

2025 • n=1,131

  • 93% showed companion-like engagement
  • 68% involved romantic roleplay
  • -0.47 companionship-wellbeing correlation

Pew Research Teen AI Survey

December 2025 • n=1,458 teens

  • 65% have used AI chatbots
  • 30% use daily

Income disparity: 14% of lower-income teens use Character.AI vs. 7% of higher-income teens, suggesting vulnerable populations may be disproportionately affected.

"Kids should not be using them. Period."

— Dr. Nina Vasan, Stanford Psychiatry, commenting on AI companion apps rated "Unacceptable" for under-18 use by Common Sense Media

Continue Reading

Explore the neuroscience behind these harms or access full academic citations.