A US jury on Wednesday found Meta and YouTube liable for deliberately designing addictive products that harmed a young user, in a landmark social media addiction trial likely to serve as a bellwether for future cases.
The jurors in a Los Angeles court found the tech companies negligent in the design of their platforms and in failing to warn users of their dangers.
The jury also answered yes to questions asking whether the platforms had acted with malice, oppression and fraud. It awarded $6 million in damages, with Meta bearing around 70% of the liability and YouTube the remaining 30%.
The case
Meta owns Facebook and Instagram and has over 3.5 billion users, while nine out of ten American teenagers aged 13-17 use YouTube, according to a 2025 Pew Research Center report. Parents, child safety groups and tech policy advocates have long accused Meta of addictive platform design and of causing mental health issues, including eating disorders and self-harm.
The present case is the first in a flurry of lawsuits filed by teenagers, school districts and states, claiming that platforms like Meta-owned Instagram and YouTube were designed to encourage excessive use by millions of young Americans.
Twenty-year-old KGM started using YouTube when she was six, and Instagram at nine. She claimed that her addiction to social media led to anxiety, depression and body dysmorphia. The plaintiff argued that social media should be treated as a product, and that its design and other components must therefore be held to product liability standards.
Mark Lanier, her lawyer, compared this design to dopamine-seeking “slot machines”, and said that YouTube and Meta were operating like “digital casinos” with their endless scroll features fuelling dopamine hits and thus, addiction.
The Section 230 problem
Past lawsuits against social media companies over the harm they cause frequently ran into Section 230 of the Communications Decency Act of 1996, which says that internet platforms cannot be held liable for content posted by their users. Courts had read it broadly enough to kill most lawsuits before they got anywhere near a jury.
The plaintiffs in this case avoided that barrier by changing the claim itself, framing it instead around product design. It thus drew on principles from product liability law, where liability can arise from how a product is built rather than what it contains.
Jurors were not asked to assess specific posts or videos. They were directed, as a matter of law, to consider the platforms’ architecture, the structure of feeds, how engagement is sustained, and how users are drawn back repeatedly. That distinction places the claim outside Section 230 protection.
California Superior Court Judge Carolyn Kuhl said that jurors would need to determine whether the alleged harm flowed from third-party content, which would trigger Section 230 immunity, or from the companies’ own design choices. The jury then had to find that all four elements of negligence under California law were established against each company: a duty of care, a breach of that duty, causation, and harm.
The causation standard was central to the defence. Meta pointed to factors outside social media, including the plaintiff’s personal circumstances and medical history, to argue that the platforms could not have caused the harm.
But the law does not require sole causation. Under the “substantial factor” test, the question is whether the defendant’s conduct played a meaningful role in producing the harm. The jury concluded that it did.
The finding of malice raised the threshold further. Under California law, malice requires a showing of conscious disregard for safety. To reach that conclusion, the jury had to accept that the companies were not merely careless but were aware of the risks and proceeded nonetheless.
Internal company materials presented at trial included research and communications indicating that risks to younger users were known. The implication was not simply that harm occurred, but that it was foreseeable within the system as designed.
A parallel verdict on safety
The day before the Los Angeles verdict, a jury in New Mexico reached a separate conclusion against Meta. That case, brought under the state’s consumer protection law, concerned child sexual exploitation: whether Meta had misled the public about the safety of its platforms while privately making decisions that reduced protections for children. The jury found that it had, and damages were set at $375 million.
The trigger was internal company communications around Meta’s 2019 decision to expand end-to-end encryption across Facebook Messenger. The evidence showed that senior employees had flagged concerns that encryption would remove the company’s ability to identify child sexual abuse material and report it to law enforcement. Reports of child exploitation filed by Meta dropped by millions in the years that followed.
The theory in that case was different. It turned not only on design, but on whether the company’s public assurances about safety matched its internal assessment of risk.
Taken together, the two verdicts point to a broader shift. Scrutiny is moving from what users post to how platforms are built and managed.