
The Other Side of the Argument

Intellectual honesty requires engaging with counterarguments. Here's what defenders of AI companions say, what they get right, and where we disagree.

Why Include This?

We believe our argument is stronger when we acknowledge complexity. The harms we've documented are real. But so are the experiences of people who report benefits. Dismissing those experiences doesn't make our case stronger—it makes us less credible. Here's our attempt to engage honestly with the full picture.

Industry Position

What Companies Say

Character.AI

Post-lawsuit statements, 2024-2025

Their Position:

  • Implemented new safety features including a "calm and supportive" response model for users expressing distress
  • Added pop-up reminders that characters are AI, not real people
  • Introduced time-spent notifications after 60 minutes
  • Created a separate app for under-18 users with additional guardrails
  • Argues it cannot be held responsible for user-created characters

"We are heartbroken by the tragic death of Sewell Setzer... We take the safety of our users very seriously and have been focused on implementing numerous new safety measures."
— Character.AI spokesperson, October 2024

Our Analysis:

These changes acknowledge the problem, but they came only after a lawsuit and public outcry. More importantly, they don't address the core design: variable rewards, unlimited engagement, and emotional simulation. Time notifications don't change how the product engages the brain's reward systems. A reminder that characters aren't real doesn't stop the brain's attachment systems from responding to them.

Replika

Various statements, 2022-2025

Their Position:

  • Positions the product as beneficial for mental health and loneliness
  • Cites internal research claiming positive outcomes
  • Removed the erotic roleplay feature in February 2023 (then partially restored it)
  • Implemented age verification requirements in some regions
  • Argues that users understand the AI is not real

"Replika is designed to help people feel less lonely and to provide a safe space for self-expression."
— Replika marketing materials

Our Analysis:

The research on Replika users tells a different story: 90% experience loneliness, 43% severe loneliness. The MIT study found higher usage correlates with worse mental health outcomes. Users may believe it helps while outcomes worsen—this is consistent with the wanting/liking dissociation in addiction literature.

Industry-Wide Arguments

Common Defenses:

  • "Users are responsible for their own choices"
  • "Section 230 protects platforms from user-generated content"
  • "We have terms of service requiring users to be 13+"
  • "Some users report positive experiences"
  • "Correlation isn't causation—maybe lonely people seek out AI, rather than AI causing loneliness"

Our Analysis:

These arguments have some validity—and we address them below. But they also echo the defenses social media companies used for a decade while harms accumulated. "Personal responsibility" arguments are weakest when applied to adolescents whose brains are still developing the very capacities needed to exercise that responsibility.

Fair Points

Arguments That Have Merit

Not every defense is wrong. Here are arguments that deserve serious consideration.

Some People Report Genuine Benefits

The Argument:

3% of Replika users in one study credited the app with halting suicidal ideation. Some users report that AI companions helped them practice social skills, process difficult emotions, or feel less alone during crises. These experiences shouldn't be dismissed.

Our Response:

This is true and important. We don't claim these products harm everyone equally or that no one benefits. The question is whether the aggregate effect is positive or negative—and whether the design intentionally exploits vulnerable users. A product can help some people while harming many others. The 3% who report prevention of suicidal ideation must be weighed against the documented deaths and the 17-24% who develop dependencies.

More importantly, the benefits users report might be achievable through designs that don't maximize engagement through addiction mechanisms. The question isn't "can AI provide emotional support?" but "should it be designed to maximize time spent?"

Correlation vs. Causation

The Argument:

Lonely people seek out AI companions. Depressed people seek out AI companions. The correlation between AI use and poor mental health might reflect pre-existing conditions rather than harm caused by the products.

Our Response:

This is a legitimate methodological concern, and one that researchers have addressed. The longitudinal studies showing 17-24% dependency rates tracked users over time and found that AI use predicts subsequent worsening of mental health. Cross-lagged analyses, which test whether earlier use predicts later symptoms better than earlier symptoms predict later use, help establish temporal precedence.

But even if AI companions don't cause mental health problems, they may prevent recovery by providing a substitute for human connection that doesn't actually meet underlying needs. The -0.47 correlation between companionship motivation and wellbeing suggests that using AI for emotional needs is associated with worse outcomes than using it for other purposes.

Not All AI Is the Same

The Argument:

Lumping all AI together is unfair. ChatGPT for homework is different from Character.AI for emotional relationships. Even within companion apps, use cases vary widely.

Our Response:

We agree completely, and this distinction is central to our argument. Productivity AI and companion AI differ in ways that matter both for understanding risks and for regulation. We're not arguing against AI; we're arguing against specific design patterns that exploit attachment and addiction mechanisms.

The comparison table in our overview makes this distinction explicit. The FTC's inquiry specifically targets "AI chatbots acting as companions"—not AI generally. Regulators are recognizing this distinction. So should we all.

Human Relationships Can Be Harmful Too

The Argument:

Human relationships cause tremendous harm—abuse, rejection, betrayal. For some users, AI might actually be safer than the humans in their lives.

Our Response:

This is true, and it's why we don't advocate for removing AI as an option for everyone. Some people—particularly those in abusive situations, isolated communities, or dealing with severe social anxiety—may find AI genuinely helpful as a transitional support.

But "better than an abusive relationship" is a low bar. The question is whether these products help users build capacity for healthy human connection, or whether they provide a substitute that prevents that development. The attachment research suggests the latter is more common. And critically, these products are being marketed to and used by adolescents who need to develop social skills, not avoid them.

Moral Panic History

The Argument:

Every new technology triggers moral panic—novels, radio, TV, video games, social media. Many predicted harms don't materialize. Maybe AI companions are another example of overreaction.

Our Response:

This argument deserves more weight than it typically receives. Moral panics are real, and some concerns about new technologies do prove overblown.

However, the comparison to social media is instructive: many early concerns about social media did prove justified, and we now have significant evidence of harms—particularly to adolescent mental health. The question isn't whether to be cautious about new technologies, but whether the current evidence justifies concern. We believe it does.

The neuroscience isn't speculation—it's peer-reviewed research on reward systems, adolescent brain development, and addiction mechanisms. The documented harms aren't hypothetical—they're lawsuits, investigations, and deaths. The usage statistics aren't projections—they're surveys showing 72% of teens have used these products.

We may be wrong about the scale of harm. We don't think we're wrong that the harm exists.

Epistemic Humility

What We Don't Know

Our confidence in some claims is higher than others. Here's where uncertainty remains.

Long-Term Effects

These products have existed at scale for only 2-3 years. We don't have 10-year longitudinal data on developmental outcomes. The adolescents using these products today will be our first real dataset—and that's not a good thing.

Confidence in harm: Moderate

Dose-Response Relationship

How much use is too much? Is there a safe threshold? We know heavy users show worse outcomes, but the line between "use" and "problematic use" isn't clearly defined. Individual variation is likely substantial.

Confidence in harm: Low-Moderate

Differential Vulnerability

Who is most at risk? We have hypotheses (adolescents, those with pre-existing mental health conditions, autistic individuals), but the research on moderating factors is still emerging.

Confidence in harm: Moderate-High

Effective Interventions

We don't yet know what works to prevent or treat AI companion dependency. Treatment protocols are based on analogies to other behavioral addictions, not direct evidence for this specific problem.

Confidence in harm: Low

Product-Specific Variation

Is Character.AI more harmful than Replika? Does Flourish's non-manipulative design actually produce better outcomes? We have limited comparative data. Treating all companion AI as identical is an oversimplification.

Confidence in harm: Moderate

Counterfactual Harm

If these products didn't exist, what would users do instead? Some might form healthier human connections. Others might turn to substances, self-harm, or other maladaptive coping. The counterfactual matters for net harm assessment.

Confidence in harm: Low-Moderate

Our Position

Where We Stand

After considering counterarguments and acknowledging uncertainty, here's what we believe:

High Confidence

  • AI companions exploit dopamine reward systems through variable reward mechanisms
  • Adolescent brains are uniquely vulnerable due to incomplete prefrontal development
  • Documented harms include deaths, hospitalizations, and dependencies
  • Current age verification (checkboxes) is inadequate
  • There is a meaningful distinction between companion AI and productivity AI

Moderate Confidence

  • The aggregate effect on adolescent mental health is negative
  • Design choices (engagement optimization, emotional manipulation) increase harm
  • Different design choices could preserve benefits while reducing harm
  • Regulation targeting companion AI specifically (rather than all AI) is appropriate

Uncertain But Concerned

  • Long-term developmental effects on current adolescent users
  • Optimal intervention strategies for those already dependent
  • Whether benefits for some users justify population-level deployment to adolescents

Our Core Claim

We are not claiming that AI companions harm everyone, that all AI is dangerous, or that these products have no redeeming value. We are claiming that:

  1. The design patterns used by most AI companion apps exploit neurological vulnerabilities
  2. These patterns are especially harmful to adolescents due to developmental factors
  3. The current regulatory environment is inadequate to protect vulnerable users
  4. Different design choices are possible—choices that could provide utility without exploitation

If we're wrong, the cost is excessive caution about a new technology. If we're right and we do nothing, the cost is a generation of young people harmed by products designed to exploit them.

We think the evidence justifies caution.