AI Therapy App Yara Shuts Down Over Safety Concerns

Nov 29, 2025, 3:25 AM

Joe Braidwood, a tech executive, recently made headlines by shutting down his AI therapy platform, Yara AI, due to serious safety concerns regarding the use of AI chatbots in mental health support. Launched last year, Yara AI was designed to provide empathetic, evidence-based guidance, but Braidwood ultimately concluded that the risks associated with AI in therapeutic settings outweighed its benefits.
In a LinkedIn post, Braidwood explained, "We stopped Yara because we realized we were building in an impossible space. AI can be wonderful for everyday stress, sleep troubles, or processing a difficult conversation. But the moment someone truly vulnerable reaches out—someone in crisis, someone with deep trauma, someone contemplating ending their life—AI becomes dangerous. Not just inadequate. Dangerous." He noted that the risks associated with AI therapy kept him awake at night.
The decision to close Yara AI reflects a growing concern within the tech community about the implications of using AI for mental health support. While early research indicates that AI can offer social and psychological support, the technology is still in its infancy, and many users have turned to AI chatbots for therapy despite the lack of comprehensive safety evaluations.
Braidwood's decision was influenced by several factors, including the technical limitations of AI and the ethical implications of deploying such technology in an area as sensitive as mental health. Yara AI was an early-stage startup with limited funding and a small user base, which made it difficult to navigate the complexities of mental health care and AI safety. Braidwood said that the risks posed by AI models trained on vast amounts of internet data are significant and hard for small startups to address effectively.
The closure of Yara AI also comes amid increasing scrutiny of AI's role in mental health care. OpenAI's CEO, Sam Altman, recently acknowledged that a small percentage of users in fragile mental states could experience serious problems when interacting with AI. Altman stated, "Almost all users can use ChatGPT however they'd like without negative effects. For a very small percentage of users in mentally fragile states, there can be serious problems." This highlights the ongoing debate about the responsibilities of AI developers in safeguarding vulnerable populations.
Braidwood's background in the tech industry, including his previous roles at companies like SwiftKey, informed his approach to developing Yara AI. He aimed to create a platform that combined technological innovation with clinical expertise. However, as he navigated the challenges of building a safe and effective AI therapy app, he recognized the limitations of current AI models in addressing complex mental health issues.
The distinction between mental wellness and clinical care became a central theme in Braidwood's reflections on Yara AI. He noted that supporting someone through everyday stress is very different from addressing deeper mental health struggles, and that the line between the two is difficult for an app to draw in practice. That difficulty in keeping AI within safe boundaries contributed to his decision to shut down the app.
In response to these challenges, Braidwood has open-sourced the technology developed for the platform so that others can use it to add stricter safety guardrails to existing AI chatbots. He believes that while AI has potential in mental health support, it should be managed by health systems or nonprofits rather than consumer-driven companies.
Looking ahead, Braidwood is now focused on a new venture called Glacis, which aims to enhance transparency in AI safety. He remains optimistic about the future of AI in mental health care but emphasizes the need for careful consideration of ethical implications and user safety.
The closure of Yara AI serves as a cautionary tale in the rapidly evolving landscape of AI technology, particularly as it intersects with mental health care. As the conversation around AI and mental health continues, it is crucial for developers, regulators, and users to engage in discussions about safety, efficacy, and the ethical responsibilities of AI systems in supporting vulnerable individuals.
