
    230 million users ask about health each week, so OpenAI has launched ChatGPT Health

    Fancy a healthier you? ChatGPT Health lets you connect your medical records securely. Discover how this new feature can be your healthcare ally.

    Anonymous
    3 min read · 8 January 2026

    AI Snapshot

    The TL;DR: what matters, fast.

    OpenAI has launched ChatGPT Health, a dedicated platform allowing users to link medical records and wellness app data.

    Developed with physician input over two years, this new feature offers enhanced encryption but is not HIPAA compliant.

    Despite privacy measures, user data can be legally accessed through subpoenas or court orders, highlighting ongoing data privacy concerns.

    Who should pay attention: Healthcare providers | AI developers | Patients | Regulators

    What changes next: Debate around AI data privacy and regulatory frameworks is likely to intensify.

    OpenAI has officially launched ChatGPT Health, a new dedicated section within its popular chatbot designed to act as a "healthcare ally".

    This move signals a significant push into the health sector, allowing users to securely link their medical records and data from various wellness applications.

    The new feature, developed over two years with input from over 260 physicians, is currently available via a waitlist and will roll out more broadly to web and iOS users soon.

    It enables integration with services like b.well Connected Health for medical records, and popular wellness apps such as Apple Health, MyFitnessPal, Function, and Weight Watchers. This initiative follows reports that over 230 million people globally already use ChatGPT weekly for health and wellness queries, with 40 million engaging daily.

    Privacy and Limitations

    Privacy has been a key consideration for ChatGPT Health. It operates as a separate, compartmentalised space with enhanced encryption and isolation. Critically, conversations within ChatGPT Health are not used by default to train OpenAI's foundational models, and it maintains a distinct memory and chat history from the main ChatGPT interface.


    However, despite these protections, the platform is not HIPAA compliant, as consumer health products typically fall outside the scope of the Health Insurance Portability and Accountability Act. Nate Gross, OpenAI's head of health, confirmed that the company would still be obliged to provide data when legally mandated, such as through subpoenas or court orders, or in emergency situations. This distinction is important for users to understand: personal health information, while segregated, isn't entirely immune from legal access. It also highlights the ongoing debate around data privacy in the age of AI, a topic that comes up in any AI vendor vetting checklist.

    Addressing AI's Role in Healthcare

    OpenAI's CEO of Applications, Fidji Simo, shared a personal anecdote during the press briefing, detailing how ChatGPT helped her identify a potentially dangerous drug interaction after a hospital stay. This example underscores the company's vision for AI as a tool to aid, rather than replace, human healthcare professionals.

    The launch comes amidst growing scrutiny over the reliability of AI chatbots for health advice. There have been concerning reports, such as a case in August 2025 where a man was hospitalised after allegedly following ChatGPT's suggestion to substitute salt with sodium bromide. Google's AI Overview has also faced criticism for providing unsafe medical recommendations, and recent investigations have uncovered instances of AI systems giving misleading advice on liver function tests and diets for pancreatic cancer patients. A Mount Sinai study from August 2025 further concluded that widely used AI chatbots are "highly vulnerable" to disseminating harmful health information.

    OpenAI acknowledges these concerns, stressing that ChatGPT Health is "not designed for diagnosis or treatment". The system is programmed to direct users to healthcare professionals in distressing circumstances. When questioned about safeguards against exacerbating health anxiety, Simo stated that extensive work has been done to "fine-tune the model to ensure we provide information without being alarmist". This focus on responsible AI use mirrors discussions about the danger of anthropomorphising AI and the broader ethical implications of AI in sensitive areas.

    This push into health also aligns with a wider trend of AI integration across various devices and platforms, as seen with Samsung's vow for AI integration across all devices in 2026.

    What are your thoughts on AI chatbots entering the healthcare space? Do the benefits outweigh the risks, or vice versa? Share your perspective in the comments below.



    Latest Comments (4)

    Haruka Yamamoto (@haruka_y)
    12 January 2026

    My hospital used an AI for radiology reads last year and it caught a weird anomaly the doctors almost missed. This could be a similar game-changer.

    He Yan (@he_y_ai)
    12 January 2026

    I usually just read the comments here, but this news caught my eye. My mother always worries about keeping track of all her appointments and medicine schedules. If this new ChatGPT Health can really help with that, it would be very good. I wonder how easy it is to link up different hospital systems in China, because they use various platforms. That would be a game-changer for many families 🤔

    Nong Chaiyaporn (@nong_c_dev)
    10 January 2026

    Connecting personal medical records to AI, even securely, feels like a risky move. I’ve been in tech for over 15 years, and security breaches are always a concern. 😩

    Nicolas Thomas (@nicolas_t_fr)
    8 January 2026

    Voilà, this could be a big help for people managing chronic conditions, but I wonder how secure the medical record integration really is... 💡
