New York Mandates Mental Health Warnings on Social Media Platforms

Dec 27, 2025, 2:23 AM

New York has taken a significant step in addressing the mental health risks associated with social media use among young users by enacting a new law that requires platforms to display warning labels. Governor Kathy Hochul announced the measure, which targets features such as infinite scrolling, autoplay, and algorithm-driven feeds that critics argue encourage excessive engagement and can lead to mental health issues like anxiety and depression.
The law requires social media platforms, including TikTok, Instagram, Facebook, Snapchat, and YouTube, to display clear and unavoidable warnings about potential mental health risks when users under 18 access these features. The warnings are designed to inform users about the dangers of compulsive use and the overstimulation of the brain's reward centers, which can create pathways similar to those seen in substance use or gambling addictions.
Governor Hochul emphasized the importance of protecting children from the harms of social media, stating, "Keeping New Yorkers safe has been my top priority since taking office, and that includes protecting our kids from the potential harms of social media features that encourage excessive use." She compared the new warning labels to those required on tobacco products, arguing that families deserve transparency regarding the risks associated with social media.
The law will apply to conduct occurring partly or wholly in New York, but it does not extend to users physically located outside the state. Enforcement authority is granted to the New York attorney general, who can seek civil penalties of up to $5,000 per violation.
Research has increasingly highlighted social media's negative impact on youth mental health. Studies indicate that nearly half of adolescents feel worse about their bodies because of social media, and hospital visits by young people for suicidal ideation and suicide attempts nearly doubled between 2008 and 2015. The US Surgeon General has characterized the youth mental health crisis as a public health emergency, underscoring the need for measures like New York's new law.
The law requires warning labels to appear prominently when minors log onto these platforms and remain visible for at least 10 seconds. The labels must then reappear for at least 30 seconds after three hours of cumulative use, and again once every additional hour thereafter. Platforms may not bury the warnings in terms of service or other hard-to-find locations.
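As a rough illustration of that schedule, here is a minimal sketch in TypeScript of how a platform might track the display thresholds. The `WarningScheduler` class, its method names, and the `show` callback are all hypothetical; the law specifies outcomes, not any particular implementation.

```typescript
// Hypothetical sketch of the statute's warning-display schedule.
// All names here are illustrative, not taken from the law or any
// platform's actual code.

const SECOND = 1000;
const HOUR = 60 * 60 * SECOND;

const LOGIN_WARNING_MS = 10 * SECOND;     // at least 10 s at login
const RECURRING_WARNING_MS = 30 * SECOND; // at least 30 s thereafter
const FIRST_THRESHOLD_MS = 3 * HOUR;      // after 3 h of cumulative use
const REPEAT_INTERVAL_MS = 1 * HOUR;      // then once every additional hour

class WarningScheduler {
  private cumulativeUseMs = 0;                  // cumulative active use
  private nextThresholdMs = FIRST_THRESHOLD_MS; // next point a warning is due

  /** Called once when a minor logs on. */
  onLogin(show: (durationMs: number) => void): void {
    show(LOGIN_WARNING_MS);
  }

  /** Called periodically with the elapsed active-use time since the last tick. */
  onUsageTick(elapsedMs: number, show: (durationMs: number) => void): void {
    this.cumulativeUseMs += elapsedMs;
    // Fire a warning for every threshold crossed since the last tick.
    while (this.cumulativeUseMs >= this.nextThresholdMs) {
      show(RECURRING_WARNING_MS);
      this.nextThresholdMs += REPEAT_INTERVAL_MS;
    }
  }
}

// Example: simulate a minor logging in and then using the app for 4.5 hours.
const scheduler = new WarningScheduler();
const show = (ms: number) =>
  console.log(`Display warning for at least ${ms / SECOND} s`);

scheduler.onLogin(show);                 // warning at login (>= 10 s)
scheduler.onUsageTick(3 * HOUR, show);   // warning at the 3-hour mark (>= 30 s)
scheduler.onUsageTick(1.5 * HOUR, show); // warning again at hour 4 (>= 30 s)
```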
This legislative move aligns New York with other states, such as California and Minnesota, that have implemented similar measures to address the impact of social media on young users. The growing concern over children's well-being online has prompted various states to take action, and Australia recently imposed a social media ban for children under 16.
Representatives of major social media companies, including TikTok and Meta, have not yet commented on the new law. Advocates for mental health and child safety, however, have praised the legislation as a necessary step toward accountability for tech companies.
As New York implements the law, it reflects a broader trend of heightened scrutiny of social media platforms and the design features that may contribute to mental health problems among young users. The hope is that the warning labels will raise awareness and encourage healthier online habits among minors, ultimately fostering a safer digital environment.
The law is also part of a larger debate over tech companies' responsibility for safeguarding the mental health of their users, particularly vulnerable groups such as children and teenagers. As research on social media's effects matures, further regulations may follow to ensure that platforms prioritize user well-being over engagement metrics. By requiring transparency and accountability from tech companies, New York aims to protect its youth and promote healthier interactions with social media.

Related articles

Meta and Google Found Liable for Social Media Harms to Kids

A Los Angeles jury has ruled that Meta and Google are liable for the mental distress caused to a teenager by their platforms, awarding $3 million in damages. The case highlights concerns about social media addiction and its impact on young users, potentially paving the way for further legal actions against tech giants.

Oregon Lawmakers Push AI Regulations to Safeguard Youth Mental Health

Oregon lawmakers are advancing Senate Bill 1546, which aims to regulate AI chatbots to protect youth from mental health crises. The legislation mandates that chatbots must disclose their artificial nature and provide referrals to mental health resources when users exhibit signs of self-harm or suicidal thoughts.

Oregon Lawmakers Propose AI Chatbot Regulations for Child Safety

Oregon lawmakers are introducing legislation to regulate AI chatbots, aiming to protect children's mental health. The proposed bill mandates monitoring for signs of self-harm, prohibits explicit content for minors, and requires clear disclosures that chatbot responses are not human-generated.

Governor Hochul Proposes Comprehensive Measures for Child Safety

Governor Kathy Hochul has announced a series of proposals aimed at enhancing online safety for children, restricting harmful AI chatbots, and expanding mental health resources for youth in New York. These initiatives are designed to address the growing mental health crisis among young people while ensuring their protection in digital environments.

The Need for Public Health Regulation of AI Companions

As AI companions proliferate, their potential risks to mental health, particularly among vulnerable populations like children and adolescents, necessitate a shift from technology oversight to public health regulation. Current frameworks are inadequate, leaving users exposed to harmful interactions and emotional manipulation without proper safeguards.