
AI in ASIA
Life

One in Three Adults Now Use AI for Mental Health

AI chatbots are filling Asia's mental health gap. But 41% of users say the advice is sometimes wrong.

Intelligence Desk • 6 min read

Digital mental health support reaches millions across Asia

AI Snapshot

The TL;DR: what matters, fast.

35% of adults now use AI chatbots for mental health support, peaking at 64% amongst 25 to 34-year-olds

The AI mental health market is projected to grow from $1.71 billion in 2025 to $9.12 billion by 2033

Indonesia's 96% mental health treatment gap makes unregulated AI chatbots a high-stakes gamble

AI Chatbots Fill Asia's Mental Health Gap, But at What Cost?

More than one in three adults now use AI chatbots for mental health support, according to a survey by Cognitive FX. Usage peaks at 64% amongst 25 to 34-year-olds, and 22% of respondents said they rely on chatbots daily for emotional support. The global AI in mental health market, valued at $1.71 billion in 2025, is projected to reach $9.12 billion by 2033.


These numbers should make everyone in Asia pay attention. The region faces a chronic shortage of mental health professionals, with some countries reporting fewer than one psychiatrist per 100,000 people. AI chatbots are filling a gap that health systems have ignored for decades.

In countries like Indonesia, where the treatment gap exceeds 90%, young people aren't choosing between human therapists and AI chatbots. For millions across India, the Philippines, and Indonesia, AI mental health tools have become the only accessible option.


A young professional sits alone in a quiet park, reflecting on the tension between digital convenience and human connection in mental health care

Why Millions Choose Bots Over Human Therapists

The reasons aren't mysterious. Mental health care in most of Asia is expensive, scarce, and carries significant social stigma. An AI chatbot is available at 3am, doesn't judge, and costs nothing or next to nothing.

Platforms like Wysa, which was built in India, have attracted millions of users across the region. Woebot, Flourish, and the mental health features built into ChatGPT and Gemini are seeing surging adoption, particularly amongst Gen Z and millennial users who grew up communicating through screens.

"AI, neuroscience, and data are fuelling personalised mental health care at a scale that traditional therapy cannot match." - American Psychological Association, Trends Report, January 2026

By The Numbers

  • 35%: Share of adults who have used AI chatbots for mental health support
  • 64%: Usage rate amongst 25 to 34-year-olds, the highest of any age group
  • $9.12 billion: Projected global AI mental health market value by 2033, up from $1.71 billion in 2025
  • 41.2%: Users who report receiving occasionally wrong advice from AI mental health chatbots
  • 15%: Adults aged 55 and over who have turned to AI chatbots for mental health help

When AI Mental Health Goes Wrong

Here's where the story turns dangerous. A 2026 report from ECRI, a patient safety organisation, ranked misuse of AI chatbots in healthcare as the top health technology hazard of the year. The concern isn't that chatbots are useless. It's that they're being used for things they were never designed to handle.

General-purpose AI models like ChatGPT weren't built to provide mental health care. They can sound empathetic without understanding context. They can validate harmful thought patterns. They can miss critical warning signs that a trained therapist would catch immediately.

"Misuse of AI chatbots in health care tops 2026 Health Tech Hazard report." - ECRI, Health Technology Safety Report, February 2026

That 41.2% of users report occasionally receiving wrong advice isn't a minor glitch. In mental health, wrong advice can reinforce harmful behaviours, delay real treatment, or escalate a crisis. Research has identified 15 distinct ethical risks, from mishandling crisis situations to showing bias against people with substance use disorders or severe mental illness.

Asia's Treatment Gap Makes This Crisis Urgent

The stakes in Asia are higher than in regions with better-resourced health systems. The World Health Organisation estimates that the treatment gap for mental health conditions in low and middle-income countries exceeds 75%. In parts of South and Southeast Asia, the gap is closer to 90%.

Country       Psychiatrists per 100,000   Treatment Gap
India         0.3                         83%
Indonesia     0.4                         96%
Philippines   0.5                         78%
Japan         12.0                        58%
Australia     13.0                        46%

In countries like Indonesia, where the treatment gap sits at 96%, the question isn't whether AI chatbots should be used for mental health. People are already using them. The question is whether governments and health systems will step in to ensure minimum safety standards before something goes badly wrong.

Building Better AI Mental Health Tools

Fortis Healthcare in India launched an AI-powered mental health app with self-assessment tools designed by clinical psychologists. The app routes users towards human therapists when risk thresholds are crossed, rather than trying to handle everything itself. That model, AI as triage and first response with human professionals for diagnosis and treatment, is what most experts consider the responsible path.

  • AI chatbots work best as a first point of contact, reducing stigma and providing basic coping tools
  • Escalation protocols that route users to human professionals when risk is detected are essential
  • Governments in Asia need to establish minimum safety standards for mental health AI, including mandatory crisis detection and referral capabilities
  • Transparency about AI limitations is critical: users must know they're talking to a machine, not a therapist
  • Clinical validation of AI advice should be mandatory, with regular audits of chatbot responses to sensitive mental health queries

The AIinASIA View: We see this as one of the most consequential AI deployments happening in Asia right now, and it's happening with almost no regulatory guardrails. The 35% adoption figure isn't a technology story. It's a healthcare infrastructure failure that AI is papering over. For countries like Indonesia and India, where the treatment gap exceeds 80%, banning chatbots isn't realistic. But allowing unregulated general-purpose AI to handle crisis situations is reckless. Asia needs a middle path: certified AI mental health tools with mandatory escalation to humans, rolled out in partnership with existing health systems rather than as a replacement for them.

Are AI mental health chatbots safe to use?

For general emotional support and basic coping strategies, purpose-built mental health chatbots like Wysa and Woebot are reasonably safe. General-purpose AI like ChatGPT is riskier because it wasn't designed for clinical contexts and may provide inappropriate advice during crisis moments.

Why are so many young people in Asia using AI for mental health?

Three factors converge: severe shortage of mental health professionals, high social stigma around seeking help, and the comfort Gen Z and millennials feel with digital-first interactions. In many Asian countries, an AI chatbot is the most accessible mental health resource available.

Should Asian governments regulate AI mental health tools?

Yes. At minimum, regulations should require crisis detection and escalation capabilities, mandatory disclosure that users are interacting with AI, and clinical validation of advice provided. Several countries in Europe have started drafting such frameworks, but Asia lags behind.

Can AI chatbots replace therapists?

No. AI chatbots can supplement mental health care by providing immediate support, basic screening, and psychoeducation, but they cannot replace human therapists for diagnosis, treatment planning, or handling complex mental health conditions. They work best as entry points to care.

What happens when AI mental health chatbots give dangerous advice?

Currently, there's little accountability. Most chatbot providers include disclaimers that their tools aren't medical devices, but users often don't understand these limitations. This regulatory gap is particularly concerning in Asia, where traditional support systems may be weaker.

The rise of AI companions across Asia shows how quickly digital relationships can become normalised. As AI transforms wellness and health across the region, the mental health chatbot trend represents both the promise and peril of this technological shift. Will Asia lead the world in creating safe, effective AI mental health tools, or will we become a cautionary tale of what happens when innovation outpaces regulation? Drop your take in the comments below.


