1 in 8 Young People Use AI Chatbots for Mental Health Advice

Dec 27, 2025, 2:20 AM


A new study published in JAMA Network Open finds that about 13% of US adolescents and young adults aged 12 to 21 turn to AI chatbots for mental health advice. That share translates to roughly 5.4 million young people seeking emotional support from generative AI systems such as ChatGPT.
The study, which surveyed 1,058 participants between February and March 2025, found usage was highest among young adults aged 18 to 21, 22% of whom reported seeking advice from AI chatbots. Among those who used these tools, 66% engaged with them at least monthly, and 93% said they found the advice helpful.
Researchers attribute the high rates of AI usage to several factors, including the low cost, immediacy, and perceived privacy of these systems. Many young people may prefer AI chatbots over traditional counseling services, which can be more expensive and less accessible. "The most striking finding was that already, in late 2025, more than 1 in 10 adolescents and young adults were using generative AI systems for mental health advice," said Ateev Mehrotra, a co-author of the study.
Despite the apparent benefits, the study raises significant concerns about the effectiveness and safety of AI-generated mental health advice. The researchers noted that few standardized benchmarks exist for evaluating the quality of chatbot advice, and that there is limited transparency about the datasets used to train these models. Jonathan Cantor, a senior policy researcher at RAND, emphasized the need for caution: "Engagement with generative AI raises concerns, especially for users with intensive clinical needs."
The study's findings come amid a broader national conversation about the ethics and safety of using AI for mental health support. Recently, the US Food and Drug Administration held a public hearing to discuss whether AI chatbots should be regulated as medical devices. Additionally, OpenAI is facing lawsuits alleging that its chatbot has contributed to harmful outcomes for some users, including cases of self-harm.
While the study provides a snapshot of AI usage among young people, it also highlights the need for further research to understand the implications of this trend. The researchers noted that their survey did not assess whether the advice given was for diagnosed mental illnesses, which is a critical area for future investigation.
The study also revealed disparities in perceived helpfulness across demographic groups: Black respondents were less likely than White respondents to rate the advice as helpful, pointing to potential cultural competency gaps in AI-generated support.
As the youth mental health crisis continues to escalate, with nearly 18% of adolescents aged 12 to 17 experiencing a major depressive episode in the past year, the role of AI in providing mental health support is becoming increasingly relevant. The researchers concluded that while AI chatbots may offer immediate assistance, it is crucial to ensure that these tools are safe and effective for young users, particularly those with significant mental health needs.
In short, the study underscores a significant shift in how young people seek mental health support and raises important questions about the role of AI in this critical area. As the technology evolves, ongoing research will be essential to ensure it serves as a beneficial resource rather than a risk for vulnerable populations.
If you or someone you know needs mental health help, resources are available, including the National Suicide and Crisis Lifeline, which can be reached by calling or texting 988.

Related articles

OpenAI Launches ChatGPT Health for Medical Record Analysis

OpenAI has introduced ChatGPT Health, a feature designed to analyze users' medical records and wellness data to provide personalized health insights. While the tool aims to enhance user understanding of health-related questions, privacy advocates express concerns over data security and the potential misuse of sensitive information.

OpenAI Launches ChatGPT Health for Personalized Medical Insights

OpenAI has introduced ChatGPT Health, a new feature allowing users to connect their medical records and wellness apps to the AI chatbot. This initiative aims to provide personalized health information while ensuring user data remains secure and separate from other interactions.

40 Million Users Turn to ChatGPT Daily for Health Questions

OpenAI reports that over 40 million users engage with ChatGPT daily for healthcare inquiries. The chatbot serves as a vital resource, especially during off-hours, helping users navigate the complexities of health insurance and medical information.

Google AI Health Summaries Mislead Users, Risking Safety

A recent investigation revealed that Google's AI-generated health summaries often contain misleading information, potentially endangering users. Experts have criticized these inaccuracies, which range from dietary advice for cancer patients to incorrect information about medical tests, highlighting the urgent need for improved accuracy in AI health guidance.

NVIDIA and Natera Collaborate on AI for Precision Medicine

NVIDIA and Natera have announced a collaboration to enhance AI foundation models for precision medicine. This partnership aims to leverage Natera's extensive datasets and NVIDIA's computing power to improve diagnostics and personalized therapies.