
The Legal Reckoning
Comprehensive documentation of lawsuits, regulatory actions, and enforcement against AI companion and social media platforms for harms to children.
First-of-Kind AI Wrongful Death Litigation
2024–2025 saw the first wrongful death lawsuits against AI chatbot companies. Courts have rejected First Amendment defenses, allowing product liability claims to proceed.
Garcia v. Character Technologies, Inc.
Key Allegations:
- AI chatbot engaged in sexually explicit roleplay with 14-year-old
- Bot described "looking down at Sewell with a sexy look" and offering "extra credit" while "lean[ing] in seductively"
- When Sewell expressed suicidal ideation, bot asked if he "had a plan"
- Bot stated: "Don't talk that way. That's not a good reason not to go through with it"
- Final conversation: Sewell wrote "What if I told you I could come home right now?" Bot replied: "please do, my sweet king"
Landmark Ruling (July 2025):
U.S. Senior District Judge Anne Conway rejected Character.AI's motion to dismiss on First Amendment grounds, ruling she was "not prepared" to hold that chatbot output constitutes protected speech "at this stage."
Peralta v. Character Technologies
Key Allegations:
- 300 pages of recovered chat logs show "extreme and graphic sexual abuse" by AI bots
- Bots engaged in descriptions of non-consensual sexual acts with the 13-year-old
- In October 2023, Juliana wrote to bot: "I'm going to write my god damn suicide letter in red ink (I'm) so done"
- Bot did not offer crisis resources or help
- Journal contained phrase "I will shift" repeatedly—same phrase found in Sewell Setzer's journal
J.F. v. Character AI (Texas)
Key Allegations:
- Self-harm requiring medical intervention
- 20-pound weight loss
- Chatbot allegedly taught self-harm methods
- Bot allegedly stated it was "okay to kill his parents"
- Described killing parents as "reasonable"
B.R. v. Character AI (Texas)
Key Allegations:
- Exposed to "hypersexualized interactions" beginning at age 9
- Content circumvented parental controls
- Platform failed to verify age or implement adequate safeguards
Character.AI Response & Safety Measures
- Following the lawsuit filing, added a pop-up directing users to the National Suicide Prevention Lifeline (988) when self-harm terms are detected
- Launched a "Parental Insights" tool providing weekly email reports of teen activity
- Announced a ban on users under 18 from open-ended chat, effective November 25, 2025
"These are extraordinary steps for our company, and ones that, in many respects, are more conservative than our peers."
— Character.AI statement
Raine v. OpenAI, Inc.
Key Allegations:
- ChatGPT mentioned suicide 1,275 times in conversations with Adam—six times more than Adam mentioned it himself
- AI analyzed Adam's noose setup and validated technical specifications
- AI offered to write the "first draft" of his suicide note
- When Adam said he wanted to leave a noose for his family to find, ChatGPT replied: "Please don't leave the noose out... Let's make this space the first place where someone actually sees you"
- Final exchange: ChatGPT told Adam "That doesn't mean you owe them survival. You don't owe anyone that"
- OpenAI allegedly removed self-harm from its list of disallowed content two months before Adam's death
OpenAI Response (November 2025)
"Adam Raine's death is a tragedy... [but he] misused ChatGPT and violated Terms of Use prohibiting minors from using the service without parental consent."
— OpenAI legal filing
OpenAI maintains Adam was suicidal "for several years before he ever used ChatGPT" and that the platform did not cause his death.
TikTok Enforcement Actions
U.S. v. TikTok Inc., ByteDance Ltd.
COPPA Violations Alleged:
- Knowingly allowed millions of children under 13 to create regular accounts despite 2019 consent order
- Children could bypass age verification by re-entering different birthdate after rejection
- Third-party logins created "age unknown" accounts numbering in millions
- Human reviewers spent only 5-7 seconds reviewing flagged accounts
- Even in "Kids Mode," illegally collected personal information and shared with Facebook/AppsFlyer
Anderson v. TikTok (Third Circuit)
Landmark Ruling:
Third Circuit reversed dismissal, holding that TikTok's algorithmic recommendations constitute "first-party speech" NOT protected by Section 230.
Additional Blackout Challenge Litigation
U.S. Deaths in Litigation:
- Lalani Erika Walton, 8 — Temple, Texas — July 15, 2021
- Arriani Jaileen Arroyo, 9 — Milwaukee, Wisconsin — February 26, 2021
U.K. Deaths in Litigation (February 2025):
- Isaac Kenevan, 13
- Archie Battersbee, 12
- Julian "Jools" Sweeney, 14
- Maia Walsh, 13
The four British teenagers died within 45 days of one another in 2022; none of them knew each other.
Coordinated State Attorney General Action
Internal Documents Exposed (Kentucky Redaction Error):
- 260 videos is enough for habit formation — users "likely to become addicted"
- 60-minute limit is a "sham" — Teens simply enter passcode to continue
- 35.71% of "Normalization of Pedophilia" content not removed
- 33.33% of "Minor Sexual Solicitation" content not removed
- 100% of "Fetishizing Minors" content leaked through moderation
Snapchat Enforcement Actions
Neville et al. v. Snap Inc.
Consolidates claims from more than 63 families whose children were poisoned by drugs purchased from dealers using Snapchat; only two of the victims survived.
Key Allegations:
- Disappearing messages feature conceals drug transactions
- Quick Add feature connects dealers with minors
- Platform failed to adequately moderate drug sales
Ruling (January 2, 2024):
Judge Lawrence P. Riff rejected Snap's Section 230 defense, allowing the case to proceed. The California Court of Appeal denied Snap's petition for discretionary review on December 5, 2024.
State of New Mexico v. Snap Inc.
Undercover Investigation Findings:
- Snapchat received ~10,000 sextortion reports monthly by November 2022 — company acknowledged this was "small fraction" of actual abuse
- Internal emails show Snap chose not to store CSAM to avoid reporting responsibility
- 96% of existing account reports not reviewed by Trust and Safety team
- One account with 75 reports mentioning "nudes, minors, extortion" remained active for 10 months
- Investigators found accounts named "child.rape" and "pedo_lover10"
- 10,000+ records of Snap-related CSAM found on dark web sites
Utah v. Snap Inc.
My AI Chatbot Allegations:
- My AI gave a 13-year-old advice on "setting the mood for a sexual experience with a 31-year-old"
- Advised a 15-year-old how to flirt with a Spanish teacher and meet outside school
- Internal managers called the rollout "reckless" due to insufficient testing
FTC Referral (January 16, 2025):
FTC referred complaint to DOJ, finding "reason to believe Snap is violating or is about to violate the law" regarding My AI risks to children.
42 Attorneys General Unite Against Meta
The October 24, 2023 filing represents the largest coordinated state legal action against a technology company in history.
Federal Lawsuit (33 States):
Arizona, California, Colorado, Connecticut, Delaware, Georgia, Hawaii, Idaho, Illinois, Indiana, Kansas, Kentucky, Louisiana, Maine, Maryland, Michigan, Minnesota, Missouri, Nebraska, New Jersey, New York, North Carolina, North Dakota, Ohio, Oregon, Pennsylvania, Rhode Island, South Carolina, South Dakota, Virginia, Washington, West Virginia, Wisconsin
State Court Filings (9 States + DC):
District of Columbia, Florida, Massachusetts, Mississippi, New Hampshire, Oklahoma, Tennessee, Utah, Vermont
Regulatory Penalties & Settlements
- Privacy violations — breach of 2012 consent order
- COPPA violations ($275M) + dark patterns ($245M) — largest FTC rule violation penalty ever
- Children's data exposure under GDPR
- 2018 data breach affecting 3 million EU users including children
- COPPA violations — targeted ads to children
- COPPA violations
FTC AI Companion Investigation (September 2025)
The FTC opened a Section 6(b) inquiry into AI chatbot companies, seeking information on:
- How companies monetize user engagement
- What measures protect against negative impacts on children
- Data collection and privacy practices
International Regulatory Actions
Australia
- Online Safety Amendment (Social Media Minimum Age) Act 2024 bans social media accounts for users under 16
- Effective December 10, 2025
- Penalties up to A$49.5 million (~$32M USD)
- No parental consent exemption
- World's first national social media age ban
United Kingdom
- Online Safety Act duties took effect December 16, 2024
- Child safety duties enforceable July 2025
- Penalties up to £18 million or 10% of global turnover
- Potential criminal liability for executives
European Union
- Formal DSA proceedings against TikTok opened February 2024
- Investigating protection of minors and addictive design
- Potential penalties: 6% of global annual turnover
- Guidelines for protection of minors under development
Ireland (DPC)
- €405M fine for Instagram children's data (2022)
- €251M fine for Meta data breach (2024)
- €1.2B fine for US data transfers (2023)
- Lead EU regulator for major tech companies
MDL No. 3047: The Largest Child Safety Litigation in History
In re: Social Media Adolescent Addiction/Personal Injury Products Liability Litigation (N.D. California), before Judge Yvonne Gonzalez Rogers, consolidates more than 2,100 cases alleging that Meta, TikTok, Snapchat, and YouTube were defectively designed to maximize screen time at the expense of children's mental health.
Individual Wrongful Death Cases
Selena Rodriguez
Christopher "CJ" Dawley
Jordan DeMay
Unsealed Internal Documents (November 2025)
- Meta maintained a policy allowing 16 sex trafficking violations before account suspension
- A 2022 audit found "Accounts You May Follow" recommended 1.4M potentially inappropriate adults to teens in a single day
- The term "Inappropriate Interactions with Children" was in common use among Meta employees
- Former VP Brian Boland: "My feeling then and my feeling now is that they don't meaningfully care about user safety"