

230 million users ask about health each week, so OpenAI has launched ChatGPT Health

OpenAI launches ChatGPT Health for 230 million weekly users seeking medical guidance, featuring secure data integration and physician collaboration.

Intelligence Desk • 4 min read

AI Snapshot

The TL;DR: what matters, fast.

  • OpenAI launches ChatGPT Health, built with input from 260+ physicians, for 230M weekly health-seeking users
  • Platform integrates medical records and wellness apps while maintaining separate privacy protections
  • 70% of healthcare discussions occur outside clinic hours, highlighting accessibility demand


OpenAI Launches ChatGPT Health as 230 Million Weekly Users Seek Medical Guidance

OpenAI has officially launched ChatGPT Health, a dedicated healthcare section within its popular chatbot designed to act as a "healthcare ally". This move signals a significant push into the health sector, allowing users to securely link their medical records and data from various wellness applications.

The new feature, developed over two years with input from more than 260 physicians, is currently available via a waitlist and will roll out more broadly to web and iOS users soon. It enables integration with services like b.well Connected Health for medical records, and popular wellness apps such as Apple Health, MyFitnessPal, Function, and Weight Watchers.

Rising Demand Drives Healthcare AI Adoption

The initiative follows mounting evidence of user demand for health-related AI assistance. More than 230 million people globally already use ChatGPT each week for health and wellness queries, with 40 million engaging daily.

"We're addressing existing issues in the healthcare space, like cost and access barriers, overbooked doctors, and a lack of continuity in care," said Fidji Simo, OpenAI's CEO of Applications.

Simo shared a personal anecdote during the press briefing, detailing how ChatGPT helped her identify a potentially dangerous drug interaction after a hospital stay. This example underscores the company's vision for AI as a tool to aid, rather than replace, human healthcare professionals.

By The Numbers

  • 230 million users ask health and wellness questions on ChatGPT each week
  • 40 million people use ChatGPT for health-related questions daily
  • More than 5% of all global ChatGPT prompts pertain to healthcare
  • 70% of healthcare discussions on ChatGPT occur outside typical clinic operating hours
  • The healthcare industry represents 29% of customers using ChatGPT in their purchasing journey

Privacy Protections and Legal Limitations

Privacy has been a key consideration for ChatGPT Health. It operates as a separate, compartmentalised space with enhanced encryption and isolation. Critically, conversations within ChatGPT Health are not used by default to train OpenAI's foundational models, and it maintains a distinct memory and chat history from the main ChatGPT interface.

However, despite these protections, the platform is not HIPAA compliant, as consumer health products typically fall outside the scope of the Health Insurance Portability and Accountability Act. Nate Gross, OpenAI's head of health, confirmed that the company would still be obliged to provide data when legally mandated, such as through subpoenas or court orders, or in emergency situations.

This distinction is important for users to understand, as it means personal health information, while segregated, isn't entirely immune from legal access. The situation highlights ongoing debates around data privacy in AI applications, similar to concerns raised about AI therapy apps across Asia.

Safety Concerns and Industry Scrutiny

The launch comes amidst growing scrutiny over the reliability of AI chatbots for health advice. There have been concerning reports, such as a case in August 2025 where a man was hospitalised after allegedly following ChatGPT's suggestion to substitute salt with sodium bromide.

"We are not designed for diagnosis or treatment. We've done extensive work to fine-tune the model to ensure we provide information without being alarmist," Simo emphasised when questioned about safeguards against exacerbating health anxiety.

Google's AI Overview has also faced criticism for providing unsafe medical recommendations. Recent investigations have uncovered instances of AI systems giving misleading advice on liver function tests and diets for pancreatic cancer patients. A Mount Sinai study from August 2025 further concluded that widely used AI chatbots are "highly vulnerable" to disseminating harmful health information.

OpenAI acknowledges these concerns, stressing that ChatGPT Health is programmed to direct users to healthcare professionals in distressing circumstances. This approach mirrors Taiwan's more cautious deployment of AI health assistants in its national healthcare system.

AI Health Platform      | Compliance Status   | Data Training Policy                | Professional Integration
ChatGPT Health          | Not HIPAA compliant | Conversations not used for training | Referral to healthcare professionals
Taiwan's Gemini Health  | Government-regulated| Localised data processing           | Direct NHS integration
Traditional AI chatbots | Varies by platform  | Often used for training             | Limited professional oversight

Regional Competition and Market Response

OpenAI's healthcare push comes as competitors make similar moves. Anthropic unveiled healthcare AI tools just days after OpenAI's announcement, highlighting the intensifying competition in medical AI applications. This follows patterns seen across AI wellness applications in Asia, where regional players are rapidly developing localised solutions.

The broader implications extend beyond individual health queries. Key considerations for healthcare AI deployment include:

  • Regulatory compliance varies significantly between jurisdictions
  • Cultural sensitivity in health communications requires localised training
  • Integration with existing healthcare systems presents technical challenges
  • Professional liability questions remain largely unresolved
  • Patient data sovereignty concerns vary by region

How does ChatGPT Health differ from regular ChatGPT?

ChatGPT Health operates as a separate, compartmentalised space with enhanced encryption and isolation. Conversations aren't used by default to train OpenAI's models, and the feature maintains a distinct memory and chat history from regular ChatGPT sessions.

Is ChatGPT Health HIPAA compliant?

No, ChatGPT Health is not HIPAA compliant as consumer health products typically fall outside HIPAA scope. The company must provide data when legally mandated through subpoenas or court orders.

Can ChatGPT Health diagnose medical conditions?

No, ChatGPT Health is explicitly not designed for diagnosis or treatment. It's programmed to direct users to healthcare professionals in concerning situations and provides informational support only.

Which wellness apps integrate with ChatGPT Health?

ChatGPT Health integrates with Apple Health, MyFitnessPal, Function, Weight Watchers, and b.well Connected Health for medical records. More integrations are expected as the platform expands.

When will ChatGPT Health be widely available?

Currently available via waitlist, ChatGPT Health will roll out more broadly to web and iOS users soon. OpenAI hasn't announced specific timelines for Android or global availability.

The AIinASIA View: OpenAI's healthcare venture represents both opportunity and risk. While 230 million weekly health queries demonstrate clear user demand, the lack of HIPAA compliance and documented AI hallucination issues raise serious concerns. The company's emphasis on directing users to professionals is commendable, yet insufficient given AI's persuasive potential. We believe regulated markets like Singapore and South Korea offer better models for healthcare AI deployment, with mandatory professional oversight and clear liability frameworks. OpenAI's approach, while innovative, prioritises market capture over patient safety.

This push into health aligns with broader AI integration trends, as seen with companies expanding AI capabilities across multiple sectors. The question remains whether rapid deployment serves users better than the more cautious, regulated approaches adopted in markets like Taiwan's systematic health AI rollout.

As AI continues reshaping healthcare accessibility, the balance between innovation and safety becomes increasingly critical. What's your view on AI chatbots entering healthcare: do the accessibility benefits outweigh the safety risks? Drop your take in the comments below.


YOUR TAKE

We cover the story. You tell us what it means on the ground.


This is a developing story

We're tracking this across Asia-Pacific and may update with new developments, follow-ups and regional context.


This article is part of the AI Safety for Everyone learning path.


Latest Comments (2)

Tony Leung@tonyleung
5 February 2026

"Not HIPAA compliant" is a big red flag for any serious medical application, especially with OpenAI confirming data can be legally mandated. In Hong Kong, the regulatory framework around health data is already complex, even for traditional service providers. This looks more like a data play than a genuine healthcare solution right now.

Priya Ramasamy@priyaram
10 January 2026

The point about not being HIPAA compliant is a big one, especially looking at it from a Malaysian telco perspective. Our local data privacy acts, like PDPA, have their own requirements for sensitive personal data, which health info definitely falls under. If OpenAI is subject to US legal mandates for data access, how that translates for users here in Malaysia is a real concern. We don't have direct equivalents for those types of legal orders, but it highlights the need for very clear guidelines on what data can be accessed locally if such a service were to launch here. It's not just about encryption, it's about jurisdictional control over sensitive user data.
