Meta and Google Found Liable for Social Media Harms to Kids

Mar 26, 2026, 2:25 AM


In a landmark ruling, a Los Angeles jury found the parent companies of Instagram and YouTube liable for causing mental distress to a teenager, a decision that could set a significant precedent for future legal actions against major tech firms. The jury awarded $3 million in damages to the plaintiff, known as KGM, and her mother, with Meta held responsible for approximately 70% of the amount and Google for the remaining 30%.
The case, which focused on claims of social media addiction and its adverse effects on youth, marks one of the first successful lawsuits against tech companies for the design of their platforms. KGM, now 20, testified that her compulsive use of Instagram and YouTube from a young age led to severe anxiety, depression, and suicidal thoughts. She argued that these platforms were deliberately designed to be addictive, using features like infinite scroll to maximize engagement and profit at the expense of young users' mental health.
This jury decision comes on the heels of another ruling in New Mexico, where Meta was found liable for concealing the risks associated with its platforms, resulting in $375 million in damages. The outcomes of these cases signal a potential shift in how courts view the responsibilities of social media companies regarding their impact on children.
John M. Bennett, Director of the California Initiative for Technology and Democracy, praised the verdict, stating that it holds tech companies accountable for the harm inflicted on children while they profit from their platforms. He described the business model of these companies as "fundamentally exploitative" and called for a reassessment of how they target young users, especially given the documented mental health crisis among adolescents today.
The ruling is part of a broader wave of litigation against social media giants, with over 1,600 cases currently in various stages of legal proceedings. These lawsuits aim to address the growing concerns over social media's role in exacerbating mental health issues among youth, as mental health professionals have increasingly linked excessive social media use to rising rates of anxiety, depression, and self-harm in young people.
The jury's decision specifically noted that the platforms were engineered with features designed to keep users engaged, akin to tactics employed by the tobacco industry in the past. KGM's attorney, Mark Lanier, drew direct parallels between social media companies and tobacco firms, arguing that the companies knowingly created addictive environments for profit, disregarding the potential harm to children.
Despite the ruling, both Meta and Google have said they intend to appeal the verdict. A Meta spokesperson said the company disagrees with the jury's findings and remains committed to providing a safe environment for young users. Google likewise defended YouTube, arguing that it is a video streaming service rather than a social media platform and that the lawsuit mischaracterized its efforts to operate the service responsibly.
Legal experts highlight that this trial is significant not only for its implications for Meta and Google but also for the broader regulatory landscape surrounding social media. The ruling may encourage more parents and guardians to seek legal redress for harms caused by social media, potentially leading to stricter regulations and increased accountability for tech companies worldwide. The decision could also influence ongoing legislative efforts aimed at protecting children from harmful online content across various jurisdictions.
As the implications of this case continue to unfold, many anticipate that it could signal a turning point in how society navigates the challenges posed by social media, particularly concerning its effects on vulnerable populations like children and teenagers. The outcome may also spur a reevaluation of the legal protections that currently shield tech companies from liability, particularly in relation to the design and functionality of their platforms.
The verdict in Los Angeles serves as a reminder of the urgent need for accountability in the tech industry as society grapples with the complexities of digital engagement and its impact on mental health. As more cases are set to follow, the tech landscape may soon face significant changes in how it operates and how it prioritizes the safety and well-being of its youngest users.

Related articles

Oregon Lawmakers Push AI Regulations to Safeguard Youth Mental Health

Oregon lawmakers are advancing Senate Bill 1546, which aims to regulate AI chatbots to protect youth from mental health crises. The legislation mandates that chatbots must disclose their artificial nature and provide referrals to mental health resources when users exhibit signs of self-harm or suicidal thoughts.

Oregon Lawmakers Propose AI Chatbot Regulations for Child Safety

Oregon lawmakers are introducing legislation to regulate AI chatbots, aiming to protect children's mental health. The proposed bill mandates monitoring for signs of self-harm, prohibits explicit content for minors, and requires clear disclosures that chatbot responses are not human-generated.

Governor Hochul Proposes Comprehensive Measures for Child Safety

Governor Kathy Hochul has announced a series of proposals aimed at enhancing online safety for children, restricting harmful AI chatbots, and expanding mental health resources for youth in New York. These initiatives are designed to address the growing mental health crisis among young people while ensuring their protection in digital environments.

New York Mandates Mental Health Warnings on Social Media Platforms

New York has enacted a law requiring social media platforms to display mental health warning labels for features that may harm young users. The legislation targets addictive design elements like infinite scrolling and autoplay, aiming to protect minors from potential mental health risks associated with excessive use.

The Need for Public Health Regulation of AI Companions

As AI companions proliferate, their potential risks to mental health, particularly among vulnerable populations like children and adolescents, necessitate a shift from technology oversight to public health regulation. Current frameworks are inadequate, leaving users exposed to harmful interactions and emotional manipulation without proper safeguards.