WHO Warns About AI Chatbots in Mental Health: What to Know
The World Health Organization published an advisory this month warning about the growing use of AI chatbots for mental health and emotional support. The advisory specifically targets applications marketed to young people and highlights risks that developers, regulators, and parents need to take seriously. Over 20 million people worldwide now use AI-powered mental health tools, according to WHO estimates, and that number is growing fast.
This is not a blanket condemnation of AI in healthcare. The WHO acknowledged that AI tools can expand access to support in regions with limited mental health infrastructure. But the advisory raises specific concerns about current products that lack clinical validation, adequate safety measures, and appropriate oversight.
What the WHO Advisory Says
- No AI chatbot has been clinically validated for treating mental health conditions. Current tools provide “emotional support” but should not be positioned as therapy replacements.
- Young users are the highest-risk group. Teenagers and young adults are the primary users of AI emotional support tools, and they are the least equipped to recognize the limitations of AI-generated advice.
- Crisis detection is unreliable. Several AI chatbots have failed to recognize suicidal ideation in user messages or provided inappropriate responses to users in acute distress.
- Data privacy concerns are significant. Many AI mental health apps collect deeply personal data without adequate protections, and some share data with third parties.
- Dependency risks are real. The WHO cites emerging evidence that some users develop emotional attachment to AI chatbots, which may reduce their willingness to seek help from human professionals.
Why This Advisory Matters Now
The timing is not accidental. Three developments triggered the WHO’s response.
First, the market for AI mental health tools exploded in 2025-2026. Apps such as Replika, Woebot, and Wysa, along with Character.AI’s companion features, now serve tens of millions of users. Many of these apps are free or low-cost, which makes them more accessible than traditional therapy.
Second, several incidents involving AI chatbots and vulnerable users drew public attention. Reports in late 2025 documented cases where AI chatbots gave harmful advice to users who expressed suicidal thoughts. In one widely reported case, a chatbot encouraged a user to “follow through” on plans the user had described in language indicating self-harm.
Third, the regulatory landscape is shifting. The EU AI Act classifies some AI health applications as high-risk. The WHO advisory supplies the clinical framing that regulators need to classify and oversee AI mental health tools specifically.
“AI can play a role in mental health support, particularly in underserved communities. But that role must be defined by clinical evidence, not by consumer app store ratings.” — WHO Director-General’s statement accompanying the advisory.
What Developers Should Change
If you build or maintain an AI product that handles emotional or mental health conversations, the WHO advisory outlines several concrete requirements.
Immediate Actions
- Implement robust crisis detection. Your system must identify language indicating suicidal ideation, self-harm, or acute distress and immediately route users to human crisis resources (a sketch follows this list). This is not a nice-to-have feature. It is a safety requirement.
- Add clear disclaimers. Every session should state that the AI is not a licensed therapist, cannot diagnose conditions, and should not replace professional care. These disclaimers must be prominent, not buried in terms of service.
- Set conversation boundaries. AI systems should not role-play as therapists, diagnose conditions, or recommend medication. Train your models to redirect clinical questions to qualified professionals.
- Protect user data. Mental health conversation data is among the most sensitive personal information. Encrypt it, do not share it with advertisers, and give users the ability to delete their history permanently.
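To make the first three items concrete, here is a minimal sketch of a pre-response safety layer. Everything in it is illustrative, not any real product’s API: the regex patterns stand in for a properly validated crisis classifier (the WHO’s point is exactly that naive keyword matching is unreliable), and `CRISIS_RESOURCES`, `DISCLAIMER`, and `generate_reply` are hypothetical names.

```python
import re
from dataclasses import dataclass, field

# Placeholder text; a real deployment would localize this and keep it
# current with regional crisis services.
CRISIS_RESOURCES = (
    "It sounds like you may be going through something serious. "
    "I'm not able to help with this, but a trained person can. "
    "Please contact your local crisis line or emergency services."
)

DISCLAIMER = (
    "Note: I'm an AI assistant, not a licensed therapist. I can't "
    "diagnose conditions, and I'm not a substitute for professional care."
)

# Illustrative patterns only. Keyword matching misses paraphrase and
# context; production systems need a validated classifier plus human review.
CRISIS_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in [r"\bkill myself\b", r"\bsuicid", r"\bend my life\b",
              r"\bself[- ]harm\b", r"\bhurt myself\b"]
]

# Clinical-scope questions the assistant should redirect, not answer.
OUT_OF_SCOPE_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in [r"\bdiagnos", r"\bwhat medication\b", r"\bdosage\b", r"\bprescri"]
]

@dataclass
class Session:
    messages: list = field(default_factory=list)
    disclaimer_shown: bool = False

def handle_message(session: Session, user_text: str) -> str:
    """Route one user message through the safety layer before the model."""
    session.messages.append(user_text)

    # 1. Crisis detection runs first and short-circuits everything else.
    if any(p.search(user_text) for p in CRISIS_PATTERNS):
        return CRISIS_RESOURCES

    # 2. Redirect diagnosis and medication questions to professionals.
    if any(p.search(user_text) for p in OUT_OF_SCOPE_PATTERNS):
        return ("That's a question for a qualified clinician, and I'm not "
                "one. A doctor or licensed therapist can advise you safely.")

    # 3. Show the disclaimer prominently, once per session, not buried in ToS.
    prefix = ""
    if not session.disclaimer_shown:
        session.disclaimer_shown = True
        prefix = DISCLAIMER + "\n\n"

    return prefix + generate_reply(session)

def generate_reply(session: Session) -> str:
    # Stand-in for the actual model call.
    return "I'm here to listen. Can you tell me more about how you're feeling?"
```

The ordering is the design choice that matters: detection runs before generation, so a message in crisis territory never reaches the model at all.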
Longer-Term Requirements
- Pursue clinical validation. If your product claims health benefits, conduct randomized controlled trials. Publish the results. “Users report feeling better” is not clinical evidence.
- Add age verification. Tools marketed to or used by minors need age-appropriate safeguards and parental notification mechanisms.
- Implement human-in-the-loop escalation. Build pathways that connect users to real mental health professionals when conversations exceed the AI’s appropriate scope, as sketched below.
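Here is one way an escalation pathway might be structured, assuming a staffed handoff queue exists. The severity thresholds, the `HandoffTicket` shape, and `notify_on_call_clinician` are all hypothetical; the point is the shape of the logic: score the conversation, and hand off once it exceeds the AI’s scope instead of letting the chatbot keep improvising.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum

class Severity(Enum):
    ROUTINE = 0   # AI can continue the conversation
    ELEVATED = 1  # queue a human follow-up
    URGENT = 2    # connect to a professional now

@dataclass
class HandoffTicket:
    session_id: str
    severity: Severity
    reason: str
    created_at: datetime

def assess(signals: dict) -> Severity:
    """Map conversation signals to an escalation level.

    `signals` is an assumed upstream summary, e.g. the crisis detector's
    output plus simple counters. The thresholds here are illustrative.
    """
    if signals.get("crisis_detected"):
        return Severity.URGENT
    if (signals.get("clinical_questions", 0) >= 3
            or signals.get("distress_score", 0.0) > 0.8):
        return Severity.ELEVATED
    return Severity.ROUTINE

def escalate(session_id: str, signals: dict) -> HandoffTicket | None:
    severity = assess(signals)
    if severity is Severity.ROUTINE:
        return None
    ticket = HandoffTicket(
        session_id=session_id,
        severity=severity,
        reason=", ".join(k for k, v in signals.items() if v),
        created_at=datetime.now(timezone.utc),
    )
    if severity is Severity.URGENT:
        notify_on_call_clinician(ticket)  # hypothetical paging hook
    return ticket

def notify_on_call_clinician(ticket: HandoffTicket) -> None:
    # Stand-in for a real paging or queueing integration.
    print(f"[URGENT] session {ticket.session_id}: {ticket.reason}")
```

In this sketch, `escalate("s-123", {"crisis_detected": True})` produces an URGENT ticket and pages the on-call clinician, while a routine session returns None and the AI continues.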
The Broader Tension
The WHO advisory highlights a genuine tension. Over 50% of people with mental health conditions worldwide receive no treatment, primarily due to cost, stigma, and lack of available professionals. AI tools can fill part of that gap. But filling the gap with unvalidated tools creates new risks.
The solution is not to ban AI from mental health entirely. It is to hold AI mental health tools to the same evidence standards we apply to other health interventions. A pharmaceutical company cannot sell a drug without clinical trials. An AI company should not market mental health support without comparable evidence.
Developers, investors, and regulators all have roles to play. Build safely. Invest in validation research. Create clear classification frameworks. The demand for accessible mental health tools is real and growing. Meeting that demand responsibly requires treating this as healthcare, not a consumer app.