WHO Says AI Is Now a Public Mental Health Concern
The World Health Organisation has drawn a line in the sand. On 20 March 2026, WHO published the findings of its expert workshop calling on governments, health systems, and industry to treat generative AI use as a public mental health concern. Not just the chatbots designed for therapy, but every AI tool people turn to in moments of emotional vulnerability, from general-purpose assistants to companion apps.
The recommendation follows a January workshop where over 30 international experts in AI, mental health, ethics, and public policy reached a stark conclusion: AI adoption in everyday life has far outpaced any serious investment in understanding what it is doing to people's mental health. For Asia, where one in three adults already use AI for mental health support and where cultural stigma around therapy remains high, the stakes are particularly acute.
The January Workshop and Its Findings
On 29 January 2026, the Delft Digital Ethics Centre at TU Delft, the first WHO Collaborating Centre on AI for health governance, convened experts from across the globe for an online workshop held as a pre-summit event for the India AI Impact Summit 2026. Researchers, policymakers, clinicians, and advocates gathered to examine a problem hiding in plain sight. 
The central finding was that generative AI tools, neither designed nor tested for mental health, are being widely used for emotional support. People are confiding in chatbots during crises, forming attachments to AI companions, and relying on AI therapy apps as substitutes for human care. The risks include emotional dependence, particularly among young people, and the absence of crisis referral pathways when conversations turn dangerous.
"As AI increasingly interacts with people in moments of emotional vulnerability, we as WHO and its stakeholders must ensure these systems are designed and governed with safety, accountability and human well-being at their core."
Dr Alain Labrique, Director, WHO Department of Data, Digital Health, Analytics and AI
By The Numbers
- 30+ international experts participated in the January 2026 WHO workshop on AI and mental health (WHO)
- 6 WHO regions represented in the new Consortium of Collaborating Centres on AI for Health (WHO)
- 80% of diagnoses in Asia now attributed to lifestyle diseases, with stress a leading contributor (AIA Group)
- One in three adults globally now use AI tools for mental health support, with higher rates in Asia (multiple surveys)
Three Recommendations That Could Reshape AI Governance
The workshop produced three core recommendations. First, generative AI use should be recognised as a public mental health concern, requiring responses that extend beyond tools explicitly built for mental health. Second, mental health must be integrated into AI impact assessments, evaluating effects on health determinants, short-term clinical measures, and long-term outcomes such as emotional dependence. Third, AI tools for mental health should be co-designed with experts and people with lived experience, grounded in evidence, and tailored to cultural and linguistic contexts.
"We are at a critical juncture. The pace of AI adoption in people's daily lives has far outstripped investment in understanding its impact on mental health. Closing that gap requires coordinated action and dedicated resources from both the public and private sectors."
Sameer Pujari, WHO AI Lead
For Asian countries, the cultural tailoring recommendation is especially significant. Taiwan's AI health assistant programme and South Korea's wellness home technology show that deployment is accelerating. But the evidence base for whether these tools help or harm mental health in Asian cultural contexts remains thin.
| Recommendation | Scope | Implication for Asia |
|---|---|---|
| Treat AI use as a mental health concern | All generative AI, not just health tools | Regulators must assess ChatGPT-style apps, not just therapy bots |
| Integrate mental health into AI assessments | Health determinants, clinical measures, long-term outcomes | New compliance layer for AI companies operating in ASEAN and East Asia |
| Co-design with lived experience | Cultural, linguistic, contextual adaptation | Mandates local input for global AI products deployed in diverse Asian markets |
| Establish crisis referral frameworks | All AI tools encountering emotional distress | Current gap: most AI tools in Asia lack local crisis pathways |
The Consortium Takes Shape
Between 17 and 19 March 2026, candidate members of the new Consortium of Collaborating Centres on AI for Health gathered at TU Delft for a pre-convening. Institutions from all six WHO regions aligned on shared priorities and agreed on initial collaboration mechanisms to build the infrastructure needed for AI governance in health that is grounded in evidence, ethics, and the needs of diverse populations.
Dr Stefan Buijsman, managing director of the Delft Digital Ethics Centre, described the ambition: the consortium would enable cross-border collaboration between domain experts, governments, and researchers, increasing impact through shared frameworks rather than fragmented national approaches.
- The consortium spans all six WHO regions, ensuring representation from Southeast Asia, South Asia, and East Asia
- It builds on WHO's existing Collaborating Centre mechanism, giving recommendations formal weight with member states
- Initial priorities include crisis referral frameworks, accountability systems for AI companies, and evidence generation on long-term mental health effects
- The consortium will support the growing push towards binding AI rules in ASEAN and beyond
"Minimizing risks from generative AI for mental health while maximizing benefits requires bringing together the voices of those most affected, clinical and research expertise, governance and regulatory frameworks, and data to inform understanding."
Dr Kenneth Carswell, WHO Department of Noncommunicable Diseases and Mental Health
Asia's Particular Vulnerability
Asia faces a unique combination of factors that makes the WHO's intervention especially timely. High smartphone penetration, cultural barriers to seeking traditional therapy, rapidly ageing populations, and aggressive AI deployment by both governments and corporations create conditions where AI tools fill gaps in mental health care by default rather than by design.
AIA Group's recent research found that lifestyle diseases now account for roughly 80% of all diagnoses in Asia, with stress, insufficient exercise, poor diet, and pollution as leading drivers. Mental health is deeply entangled with these physical health trends. When people turn to AI instead of human support, the habit of outsourcing human functions to machines extends beyond education and work into emotional wellbeing itself.
The WHO's call for independent research investment is pointed. As Dr Caroline Figueroa of TU Delft noted, there is an urgent need for consensus on crisis referral frameworks and accountability systems. Without them, the gap between AI capability and AI safety will continue to widen.
Does the WHO want to ban AI chatbots for mental health?
No. The WHO's recommendations focus on governance, not prohibition. They want generative AI use recognised as a public mental health concern so that governments and companies assess risks, build crisis referral pathways, and co-design tools with clinical experts and people with lived experience.
Which AI tools does this affect?
All generative AI tools, not just those designed for therapy. The WHO's concern extends to general-purpose chatbots like ChatGPT, AI companions, and any tool people use during moments of emotional vulnerability. This broadens the scope significantly beyond dedicated mental health apps.
What should Asian governments do now?
The WHO recommends integrating mental health into AI impact assessments, mandating crisis referral frameworks for AI tools, and investing in independent research on long-term effects. Countries with existing AI governance frameworks, such as Singapore and South Korea, are best positioned to move first.
The age of AI as a mental health wildcard has an official name now. The question is whether action follows as fast as adoption. Drop your take in the comments below.