'Promote Addiction'

Social Media Puts Profit Before Kids, Alleges Another School District

Social media companies are “ruthlessly extracting every dollar possible with callous disregard for the harm to mental health,” alleged a Thursday class action (docket 2:23-cv-00910) brought by the School District of the Chathams in U.S. District Court for New Jersey in Newark. The suit is one of several filed recently by U.S. school districts -- including Mesa Public Schools in Arizona (see 2301270067) and suburban Seattle Kent School District (see 2301110029) -- hoping to force social media platforms to do more to protect minors from online bullying, predators and behaviors detrimental to their mental health.

Defendants Meta, Snap, TikTok and Alphabet “exploit minors’ undeveloped decision-making capacity, impulse control, emotional maturity, and poor psychological resiliency,” alleged the complaint. The social media companies knowingly seek to grow the use of their platforms by minors through designs, algorithms and policies “that promote addiction, compulsive use and other severe mental harm,” said the school district, and they thwart parents’ ability to keep their children safe by supervising and limiting social media use.

Social media companies’ business models are often built on “maximizing user engagement as opposed to safeguarding users’ health,” said the complaint, citing a 2021 advisory from the U.S. surgeon general. Social media is feeding a “youth mental health crisis,” as students experience “record rates of anxiety, depression” and other mental health issues because of defendants’ “intentional conduct,” it said.

The complaint quoted Sean Parker, Meta’s first president, in 2017 describing the “social-validation feedback loop” Facebook created that exploits a “vulnerability in human psychology” by giving social media users “a little dopamine hit” every time someone likes or comments on their post. It also quoted 2020 congressional testimony from Tim Kendall, Facebook's former director-monetization, who said Facebook “took a page from Big Tobacco’s playbook, working to make our offering addictive at the outset.”

Despite clear mandates that it not allow children under 13 on its website, Meta actively rejected proposed redesigns intended to "minimize the harms to children and teen users,” alleged the complaint. Algorithmic data mining pairs users with whatever content maximizes engagement with Meta’s platforms, which could be late at night when kids should be sleeping or during school, the complaint said.

Meta’s Messenger app gives predators and other bad actors direct and unsupervised access to children and teens, the complaint said. The company knows most unwanted interactions, including bullying and sexual exploitation of minors, occur via direct messages, but it “simply does not care enough to change its platform settings” because that would potentially affect ad revenue, it said. Meta’s Instagram collects individualized data about users and their friends, and then notifies them by text or email to open selected content, “pulling users back onto the social media platform,” it said.

Leaked Meta surveys indicate the company understands its “severe impact on teens,” resulting in negative feelings such as “depression, suicidal ideation or violence to others,” said the complaint, citing a report from The Wall Street Journal. Defendants' platforms make users particularly susceptible to negative social comparison and feedback-seeking, said the complaint, which linked rising rates of teen suicides and suicidal thoughts to social media.

Google’s YouTube uses techniques to grow engagement by minors to generate more ad revenue, said the complaint, alleging it fuels “compulsive, addictive use” and pushes them into “dangerous ‘rabbit hole’ experiences.” The complaint cited a Wall Street Journal report on how TikTok uses viewing data history to steer users toward more videos that will “keep them scrolling.” That process can sometimes lead young viewers “down dangerous rabbit holes and toward content that promotes suicide or self-harm,” the complaint said.

Snap uses an algorithm to suggest connections, including whether someone should “friend” someone else using the “Quick Add” feature, said the complaint. The Snap-initiated messages result in “exposure to harmful contacts, bullying and dangerous predators,” it said. Snap’s self-destructing content design feature encourages minors to exchange “harmful, illegal and sexually explicit images with adults” and gives predators a “safe and efficient vehicle to recruit victims,” it said.

The plaintiff asserts claims of public nuisance and negligence against the defendants. It seeks damages, penalties and monetary relief provided by applicable law; injunctive and other equitable relief necessary to protect the interests of students; and legal costs and reasonable attorneys’ fees.