Your Therapist Might Be a Chatbot. That Is a Problem.
More than one in three adults now use AI chatbots for mental health support, according to a survey by Cognitive FX. Usage peaks at 64% among 25- to 34-year-olds, and 22% of respondents said they rely on chatbots daily for emotional support. The global market for AI in mental health, valued at $1.71 billion in 2025, is projected to reach $9.12 billion by 2033.
These numbers should make everyone in Asia pay attention. The region faces a chronic shortage of mental health professionals, with some countries reporting fewer than one psychiatrist per 100,000 people. AI chatbots are filling a gap that health systems have ignored for decades.
Why People Choose a Bot Over a Human
The reasons are not mysterious. Mental health care in most of Asia is expensive, scarce, and carries significant social stigma. An AI chatbot is available at 3am, does not judge, and costs nothing or next to nothing. For millions of young people across India, the Philippines, and Indonesia, a chatbot is not a second choice. It is the only accessible option.
Platforms like Wysa, which was built in India, have attracted millions of users across the region. Woebot and Flourish are seeing surging adoption, as are general-purpose assistants such as ChatGPT and Gemini that users increasingly press into service for emotional support, particularly among Gen Z and millennial users who grew up communicating through screens.
"AI, neuroscience, and data are fuelling personalised mental health care at a scale that traditional therapy cannot match." - American Psychological Association, Trends Report, January 2026
By The Numbers
- 35%: Share of adults who have used AI chatbots for mental health support
- 64%: Usage rate among 25- to 34-year-olds, the highest of any age group
- $9.12 billion: Projected global AI mental health market value by 2033, up from $1.71 billion in 2025
- 41.2%: Users who report occasionally receiving wrong advice from AI mental health chatbots
- 15%: Adults aged 55 and over who have turned to AI chatbots for mental health help
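For context, the two market endpoints above imply a compound annual growth rate of roughly 23% over the eight years from 2025 to 2033. This is a back-of-envelope calculation assuming the projection's own endpoints, not a figure from the source report:

$$\text{CAGR} = \left(\frac{9.12}{1.71}\right)^{1/8} - 1 \approx 23.3\% \text{ per year}$$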
The Risks Are Real and Growing
Here is where the story turns. A 2026 report from ECRI, a patient safety organisation, ranked misuse of AI chatbots in healthcare as the top health technology hazard of the year. The concern is not that chatbots are useless. It is that they are being used for things they were never designed to handle.
General-purpose AI models like ChatGPT were not built to provide mental health care. They can sound empathetic without understanding context. They can validate harmful thought patterns. They can miss critical warning signs that a trained therapist would catch immediately. Research has identified 15 distinct ethical risks, from mishandling crisis situations to showing bias against people with substance use disorders or severe mental illness.
"Misuse of AI chatbots in health care tops 2026 Health Tech Hazard report." - ECRI, Health Technology Safety Report, February 2026
That 41.2% of users report receiving wrong advice is not a minor glitch. In mental health, wrong advice can reinforce harmful behaviours, delay real treatment, or escalate a crisis.
Asia's Mental Health Gap Makes This Urgent
The stakes in Asia are higher than in regions with better-resourced health systems. The World Health Organization estimates that the treatment gap for mental health conditions in low- and middle-income countries exceeds 75%. In parts of South and Southeast Asia, the gap is closer to 90%.
| Country | Psychiatrists per 100,000 | Treatment Gap |
|---|---|---|
| India | 0.3 | 83% |
| Indonesia | 0.4 | 96% |
| Philippines | 0.5 | 78% |
| Japan | 12.0 | 58% |
| Australia | 13.0 | 46% |
In countries like Indonesia, where the treatment gap sits at 96%, the question is not whether AI chatbots should be used for mental health. People are already using them. The question is whether governments and health systems will step in to ensure minimum safety standards before something goes badly wrong.
What Good Looks Like
Fortis Healthcare in India launched an AI-powered mental health app with self-assessment tools designed by clinical psychologists. The app routes users toward human therapists when risk thresholds are crossed, rather than trying to handle everything itself. That model, with AI handling triage and first response and human professionals handling diagnosis and treatment, is what most experts consider the responsible path.
- AI chatbots work best as a first point of contact, reducing stigma and providing basic coping tools
- Escalation protocols that route users to human professionals when risk is detected are essential (a minimal sketch of how such routing might work follows this list)
- Governments in Asia need to establish minimum safety standards for mental health AI, including mandatory crisis detection and referral capabilities
- Transparency about AI limitations is critical: users must know they are talking to a machine, not a therapist
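To make the escalation idea concrete, here is a minimal sketch of what a triage router could look like. Everything in it is an assumption for illustration: the risk tiers, the threshold values, the keyword list, and the function names are hypothetical, not taken from Wysa, Woebot, or the Fortis app, and a production system would use clinically validated instruments and trained classifiers rather than keyword matching.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical risk tiers for illustration only.
class RiskTier(Enum):
    SELF_HELP = "self_help"        # chatbot coping tools are appropriate
    HUMAN_REFERRAL = "human"       # route to a human therapist
    CRISIS = "crisis"              # immediate handoff to a crisis line

# Phrases that force an immediate crisis handoff regardless of score.
# Illustrative only; real systems rely on trained risk classifiers.
CRISIS_KEYWORDS = {"suicide", "kill myself", "end my life", "self-harm"}

@dataclass
class TriageResult:
    tier: RiskTier
    reason: str

def triage(message: str, screening_score: int) -> TriageResult:
    """Route a user based on a screening score (e.g. a PHQ-9-style 0-27
    scale) and a crisis-language check. Thresholds are made up here."""
    text = message.lower()
    # Crisis detection runs first, before any scoring logic.
    if any(keyword in text for keyword in CRISIS_KEYWORDS):
        return TriageResult(RiskTier.CRISIS, "crisis language detected")
    if screening_score >= 15:  # assumed cutoff for human referral
        return TriageResult(RiskTier.HUMAN_REFERRAL, "elevated screening score")
    return TriageResult(RiskTier.SELF_HELP, "low risk; offer coping tools")

if __name__ == "__main__":
    print(triage("I have trouble sleeping before exams", screening_score=6))
    print(triage("Nothing helps and I want to end my life", screening_score=12))
```

The design point is the ordering: crisis detection runs before anything else, so a high-risk message can never be answered with a generic coping tip.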
Are AI mental health chatbots safe to use?
For general emotional support and basic coping strategies, purpose-built mental health chatbots like Wysa and Woebot are reasonably safe. General-purpose AI like ChatGPT is riskier because it was not designed for clinical contexts and may provide inappropriate advice during crisis moments.
Why are so many young people in Asia using AI for mental health?
Three factors converge: a severe shortage of mental health professionals, high social stigma around seeking help, and the comfort Gen Z and millennials feel with digital-first interactions. In many Asian countries, an AI chatbot is the most accessible mental health resource available.
Should Asian governments regulate AI mental health tools?
Yes. At minimum, regulations should require crisis detection and escalation capabilities, mandatory disclosure that users are interacting with AI, and clinical validation of advice provided. Several countries in Europe have started drafting such frameworks, but Asia lags behind.
Can AI chatbots replace therapists?
No. AI chatbots can supplement mental health care by providing immediate support, basic screening, and coping tools. But they cannot diagnose conditions, build therapeutic relationships, or handle complex cases. The best models use AI for triage and first response, then route to human professionals.
Have you ever used an AI chatbot when you were struggling, and did it actually help? Drop your take in the comments below.