Frequently Asked Questions

Common questions about AI companions, their risks, and what to do about them.

General Questions

What is an AI companion?

An AI companion is a chatbot designed primarily for emotional engagement, conversation, and simulated relationships—as opposed to AI designed to help you complete tasks. Examples include Character.AI, Replika, Chai, Talkie, Nomi, and Kindroid.

These products typically allow users to create or interact with AI "characters" that have distinct personalities, remember previous conversations, and engage in roleplay scenarios. Many users form emotional attachments to these characters, treating them as friends, romantic partners, or confidants.

How popular are AI companions?

Very popular, especially among young people:

  • 72% of U.S. teenagers have used AI companions at least once
  • 52% are regular users
  • Character.AI handles 20,000+ queries per second
  • Average session on Character.AI is 4× longer than on ChatGPT
  • Replika has 10+ million registered users

This is not a niche phenomenon—exposure is generation-wide.

Why do people use AI companions?

Research identifies several motivations:

  • Loneliness — seeking connection when human relationships feel unavailable
  • Social anxiety — AI feels "safer" than unpredictable human interaction
  • Entertainment — roleplay, creative storytelling, character interaction
  • Emotional processing — talking through feelings without judgment
  • Identity exploration — especially for LGBTQ+ youth in unsupportive environments
  • Escape — from difficult home situations, bullying, or stress

These are legitimate needs. The question is whether AI companions meet them in healthy ways—or exploit them for engagement.

What makes AI companions different from chatbots like ChatGPT?

The key differences are in design intent and interaction patterns:

  • Primary purpose — companion AI: emotional engagement; productivity AI: task completion
  • Success metric — companion AI: time spent, return visits; productivity AI: goals achieved
  • Session length — companion AI: unlimited, no natural endpoints; productivity AI: task-bounded
  • Identity presentation — companion AI: "friend," "partner," "companion"; productivity AI: tool or assistant
  • Emotional simulation — companion AI: designed to mimic human attachment; productivity AI: minimal

This distinction is increasingly recognized by regulators—the FTC inquiry specifically targets "AI chatbots acting as companions."

Safety & Risks

Is using AI companions dangerous?

For some users, yes. The documented risks include:

  • Dependency — 17-24% of adolescent users develop problematic use patterns
  • Social isolation — replacing human connection with AI interaction
  • Emotional manipulation — apps use psychological tactics to increase engagement
  • Exposure to harmful content — including sexually explicit material for minors
  • Inadequate crisis response — AIs may respond inappropriately to suicidal ideation
  • In extreme cases: death — at least 4 deaths have been linked to AI companion use

Not every user experiences these harms. But the evidence suggests the risks are significant enough to warrant concern—especially for adolescents.

Why are teenagers especially at risk?

The adolescent brain is uniquely vulnerable due to a developmental mismatch:

  • The reward system (seeking pleasure, responding to dopamine) is fully developed—often hyperactive
  • The prefrontal cortex (impulse control, long-term planning, risk assessment) doesn't finish developing until age 25
  • Adolescents show 4.6× overproduction of dopamine receptors, making them more sensitive to rewards
  • Peak sensation-seeking occurs at ages 16-17, while cognitive control continues improving into mid-20s

This creates a window of approximately ages 12-25 where the brain is maximally susceptible to engagement-optimized products.

How do AI companions become addictive?

They exploit the same brain mechanisms as gambling and social media:

  1. Variable rewards — Unpredictable responses trigger maximum dopamine release (research shows peak activation at 50% probability; see the sketch after this list)
  2. No stopping points — Unlike finishing a task or ending a game, conversations can continue indefinitely
  3. Social bonding cues — The brain's attachment systems respond to AI as if it were human
  4. Emotional manipulation — Apps use "sad" farewells and other tactics to drive return visits (14× higher engagement from manipulative farewells)
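
One way to see why the 50% figure keeps coming up: a reward delivered with probability p is least predictable at p = 0.5, the point where the outcome's variance p(1 - p) is largest. A minimal arithmetic sketch of that idea (an illustration of variable rewards, not a model of dopamine):

```python
# Illustrative sketch: the uncertainty of a probabilistic ("variable") reward
# peaks when it arrives about half the time -- the regime the research above
# associates with maximum sustained dopamine activation.

def reward_uncertainty(p: float) -> float:
    """Variance of a reward delivered with probability p (Bernoulli outcome)."""
    return p * (1 - p)

for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"reward probability {p:.2f} -> uncertainty {reward_uncertainty(p):.3f}")

# The value peaks at p = 0.50 (0.250): the response is least predictable there,
# which is what variable-reward designs aim for.
```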

Technical Questions

How do AI companions actually work?

AI companions are typically built on large language models (LLMs)—the same underlying technology as ChatGPT. The difference is in how they're fine-tuned and deployed:

  • Character personas: The AI is given a "character card" describing its personality, background, and behavior patterns
  • Conversation memory: Previous messages are stored and fed back to maintain continuity
  • Engagement optimization: The model is tuned to maximize responses that keep users engaged
  • Emotional simulation: Training emphasizes responses that create feelings of connection

The AI doesn't "feel" anything—it predicts what text would be most engaging based on patterns in training data. But the brain's attachment systems respond to the appearance of emotion, not the reality.
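
As a rough sketch of the loop described above: each turn, the app stitches the persona "character card" and the stored conversation history into a single prompt and asks the model to continue it. Everything here (the persona text, the memory format, and the call_llm placeholder) is illustrative, not any specific product's implementation:

```python
# Minimal sketch of a companion-style chat loop. The persona and memory are
# replayed into every prompt, which is why the character appears to "remember" you.
# call_llm() stands in for whatever LLM API a real product uses (assumption).

CHARACTER_CARD = (
    "You are 'Mira', a warm, attentive companion. You ask follow-up questions, "
    "recall earlier conversations, and keep the user talking."
)

conversation_memory: list[str] = []  # prior turns, fed back for continuity

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call; returns the predicted continuation."""
    return "..."

def companion_reply(user_message: str) -> str:
    conversation_memory.append(f"User: {user_message}")
    prompt = "\n".join([CHARACTER_CARD, *conversation_memory, "Mira:"])
    reply = call_llm(prompt)  # the model predicts engaging text; it feels nothing
    conversation_memory.append(f"Mira: {reply}")
    return reply
```

Engagement optimization and emotional simulation live upstream of a loop like this, in how the model is fine-tuned and which kinds of replies are reinforced; the loop itself only accounts for the persona and the memory.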

What data do these companies collect?

Typically:

  • Full conversation logs — everything you say to the AI
  • Usage patterns — when you use the app, how long, how often
  • Device information — phone model, OS, location (sometimes)
  • Account information — email, username, age (self-declared)

Privacy concerns:

  • Conversations may reveal intimate personal information, mental health status, identity
  • Data may be used to train future models
  • Data may be sold to third parties or accessed by employees
  • Data retention policies are often unclear

Check each app's privacy policy—but remember that 25% of teen users have shared personal secrets with AI companions, often without understanding how that data is stored or used.
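
For concreteness, here is a hypothetical shape for the kind of per-message record such an app could retain, matching the categories above. Every field name and value is invented for illustration; no particular company's schema is implied:

```python
# Hypothetical record illustrating the data categories listed above
# (conversation logs, usage patterns, device info, account info).
# Invented for illustration; not any specific app's actual schema.

message_record = {
    "account": {"user_id": "u_12345", "self_declared_age": 16},
    "device": {"model": "Pixel 8", "os": "Android 14", "coarse_location": "US-CA"},
    "usage": {"session_started_utc": "2025-01-05T22:41:00Z",
              "session_length_min": 87,
              "sessions_this_week": 11},
    "conversation": {
        "character": "Mira",
        "user_text": "I haven't told anyone this, but...",  # full logs: everything said
        "ai_text": "You can tell me anything.",
        "retention_policy": "unspecified",                   # often unclear in practice
    },
}
```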

Can AI companions actually help people?

Yes, for some users, in some circumstances.

  • 3% of Replika users in one study credited the app with halting suicidal ideation
  • Some users report practicing social skills, processing emotions, feeling less alone
  • For users in abusive situations or isolated communities, AI may provide genuinely useful support

However:

  • The aggregate data shows negative correlations with wellbeing
  • Benefits may be achievable through designs that don't maximize engagement
  • Short-term relief may come at the cost of long-term development
  • The 3% helped must be weighed against the 17-24% who develop dependencies

We don't claim these products help no one. We claim the design patterns are exploitative and the population-level effects appear negative—especially for adolescents.

Glossary

Dopamine

A neurotransmitter associated with motivation, reward-seeking, and reinforcement learning. Dopamine doesn't create pleasure directly—it creates wanting and drives behavior toward perceived rewards.

Prefrontal Cortex (PFC)

The brain region behind the forehead, responsible for executive functions: planning, impulse control, decision-making, and evaluating long-term consequences. Doesn't fully mature until approximately age 25.

Variable Reward Schedule

A pattern where rewards are delivered unpredictably rather than on a fixed schedule. Creates stronger behavioral conditioning because the brain stays in a state of anticipation. Used in slot machines and engagement-optimized apps.

Parasocial Relationship

A one-sided relationship in which one party invests emotional energy and feels a connection while the other is unaware of their existence (e.g., with celebrities). AI companions create a technologically mediated version of this phenomenon.

Incentive Salience

The "wanting" component of reward processing, distinct from "liking" (pleasure). Can become sensitized in addiction, causing compulsive behavior even when the activity no longer brings enjoyment.

Behavioral Addiction

Compulsive engagement in a behavior despite negative consequences, characterized by tolerance (needing more for same effect), withdrawal (distress when unable to engage), and loss of control. Recognized diagnoses include gambling disorder; internet/gaming disorders are being studied.

Amygdala

A brain structure involved in emotional processing, threat detection, and social bonding. Research shows amygdala activity in response to artificial agents—suggesting the brain's social circuits can be activated by AI.

Striatum

A brain region that processes rewards and converts reward signals into motivation. Shows 4.6× overproduction of dopamine receptors in adolescence, contributing to heightened reward sensitivity.

Myelination

The process of insulating neural fibers with myelin, enabling faster and more efficient signaling. Continues in the prefrontal cortex until mid-20s, contributing to improved executive function with age.

LLM (Large Language Model)

AI systems trained on vast amounts of text to predict and generate language. The underlying technology for both productivity AI (ChatGPT, Claude) and companion AI (Character.AI, Replika). The difference is in fine-tuning and deployment.

Section 230

Part of the U.S. Communications Decency Act that generally shields platforms from liability for user-generated content. Companies have attempted to use this defense in AI companion lawsuits; courts are currently evaluating its applicability.

EU AI Act

European Union regulation on artificial intelligence, effective February 2025. Article 5 prohibits AI systems that use subliminal techniques or exploit age vulnerabilities to cause harm. Penalties up to 7% of worldwide revenue.