What They Knew
Internal documents, whistleblower testimony, and court filings reveal that tech companies knew their products harmed children—and chose profit over protection.
Frances Haugen, Facebook whistleblower
"The company's leadership knows how to make Facebook and Instagram safer but won't make the necessary changes because they have put their astronomical profits before people."
Meta (Facebook/Instagram)
Internal research, executive communications, and unsealed court documents
"We make body image issues worse for one in three teen girls"
"Thirty-two percent of teen girls said that when they felt bad about their bodies, Instagram made them feel worse."— Facebook internal research presentation
This slide was part of an internal presentation that Facebook researchers gave to executives. The research was never voluntarily disclosed; when the Wall Street Journal reported on it in 2021, Facebook initially disputed how the findings were characterized.
"17x" Sex Trafficking Strike Policy
"Meta maintained a policy allowing 16 sex trafficking violations before an account would be suspended."— MDL Court Filing
Internal documents revealed Meta's "17x" policy: accounts could accumulate 16 sex trafficking violations, and only the seventeenth strike triggered permanent removal, hence the name. The company prioritized keeping users engaged over protecting children.
1.4 Million Inappropriate Recommendations Per Day
"In a single day, the 'Accounts You May Follow' feature recommended 1.4 million potentially inappropriate adults to teenagers."— 2022 Internal Safety Audit
A separate 2023 audit found nearly 2 million minors were recommended to adults seeking to sexually groom children. The company used the internal acronym "IIC" (Inappropriate Interactions with Children) to track these issues.
"They don't meaningfully care about user safety"
"My feeling then and my feeling now is that they don't meaningfully care about user safety."— Brian Boland, Former Meta Vice President
Boland worked at Meta for over a decade before leaving. His testimony describes a culture where safety concerns were routinely deprioritized in favor of growth and engagement metrics.
TikTok (ByteDance)
Accidentally unredacted court documents and internal research
"260 Videos = Addiction"
"TikTok found that watching 260 videos results in habit formation, and that users become 'likely addicted' to the platform."— Internal TikTok Research (accidentally unredacted)
A filing error in Kentucky's lawsuit against TikTok exposed internal documents that were supposed to be redacted. The documents showed TikTok knew exactly how many videos it took to create addiction.
Screen Time Limit is a "Sham"
"TikTok's own research showed the 60-minute screen time limit is a 'sham'—teens simply enter a passcode to continue watching."— Internal TikTok documents
The feature TikTok publicly touted as a safety measure was internally known to be ineffective, yet the company did not implement more robust restrictions.
100% of "Fetishizing Minors" Content Leaked Through
"Leakage rates: 35.71% for pedophilia normalization, 33.33% for minor sexual solicitation, and 100% for fetishizing minors content."— 14 State AG Joint Filing
"Leakage rate" refers to the percentage of violating content that makes it past moderation. TikTok's own metrics showed their moderation systems were failing catastrophically for the most serious content categories.
Snap Inc. (Snapchat)
Undercover investigation findings and internal communications
Deliberately Chose Not to Store CSAM
"Internal emails show Snap chose not to store child sexual abuse material to avoid the legal responsibility of reporting it."— New Mexico AG Complaint
By not storing the content, Snap avoided mandatory reporting to the National Center for Missing & Exploited Children (NCMEC). The AG's investigation found over 10,000 records of Snap-related CSAM on dark web sites—material that was never reported.
96% of Reports Not Reviewed
"96% of existing account reports were not reviewed by Trust and Safety team. One account with 75 reports mentioning 'nudes, minors, extortion' remained active for 10 months."— New Mexico AG Investigation
The investigation found accounts named "child.rape" and "pedo_lover10" active on the platform. Snap received approximately 10,000 sextortion reports monthly, a figure the company acknowledged was a "small fraction" of actual abuse.
My AI Rollout Called "Reckless"
"Internal managers called the [My AI] rollout 'reckless' due to insufficient testing."— Utah AG Complaint
Despite internal warnings, Snap released My AI to all users. The AI subsequently gave a 13-year-old advice on "setting the mood for a sexual experience with a 31-year-old" and advised a 15-year-old how to flirt with a teacher.
Why This Matters
These documents destroy the "we didn't know" defense. Companies had internal research showing harm, received thousands of reports, and chose growth over safety. That's not negligence—it's a choice.