OpenAI has officially launched ChatGPT Health, a new dedicated section within its popular chatbot designed to act as a "healthcare ally".
This move signals a significant push into the health sector, allowing users to securely link their medical records and data from various wellness applications.
The new feature, developed over two years with input from over 260 physicians, is currently available via a waitlist and will roll out more broadly to web and iOS users soon.
It enables integration with services like b.well Connected Health for medical records, and popular wellness apps such as Apple Health, MyFitnessPal, Function, and Weight Watchers. This initiative follows reports that over 230 million people globally already use ChatGPT weekly for health and wellness queries, with 40 million engaging daily.
Privacy and Limitations
Privacy has been a key consideration for ChatGPT Health. It operates as a separate, compartmentalised space with enhanced encryption and isolation. Critically, conversations within ChatGPT Health are not used by default to train OpenAI's foundational models, and it maintains a distinct memory and chat history from the main ChatGPT interface.
However, despite these protections, the platform is not HIPAA compliant, as consumer health products typically fall outside the scope of the Health Insurance Portability and Accountability Act. Nate Gross, OpenAI's head of health, confirmed that the company would still be obliged to provide data when legally mandated, such as through subpoenas or court orders, or in emergency situations. This distinction matters for users: personal health information, while segregated, isn't entirely immune from legal access. The situation highlights the ongoing debate around data privacy in the age of AI, a question that regularly surfaces in AI vendor vetting.
Addressing AI's Role in Healthcare
OpenAI's CEO of Applications, Fidji Simo, shared a personal anecdote during the press briefing, detailing how ChatGPT helped her identify a potentially dangerous drug interaction after a hospital stay. This example underscores the company's vision for AI as a tool to aid, rather than replace, human healthcare professionals.
The launch comes amidst growing scrutiny over the reliability of AI chatbots for health advice. There have been concerning reports, such as a case in August 2025 where a man was hospitalised after allegedly following ChatGPT's suggestion to substitute salt with sodium bromide. Google's AI Overview has also faced criticism for providing unsafe medical recommendations. Recent investigations have also uncovered instances of AI systems giving misleading advice on liver function tests and diets for pancreatic cancer patients. A Mount Sinai study from August 2025 further concluded that widely used AI chatbots are "highly vulnerable" to disseminating harmful health information.
OpenAI acknowledges these concerns, stressing that ChatGPT Health is "not designed for diagnosis or treatment". The system is programmed to direct users to healthcare professionals in distressing circumstances. When questioned about safeguards against exacerbating health anxiety, Simo stated that extensive work has been done to "fine-tune the model to ensure we provide information without being alarmist". This focus on responsible AI use mirrors discussions about the danger of anthropomorphising AI and the broader ethical implications of AI in sensitive areas.
This push into health also aligns with a wider trend of AI integration across devices and platforms, as seen with Samsung's pledge to bring AI to all of its devices in 2026.
What are your thoughts on AI chatbots entering the healthcare space? Do the benefits outweigh the risks, or vice versa? Share your perspective in the comments below.
Latest Comments (2)
"Not HIPAA compliant" is a big red flag for any serious medical application, especially with OpenAI confirming data can be legally mandated. In Hong Kong, the regulatory framework around health data is already complex, even for traditional service providers. This looks more like a data play than a genuine healthcare solution right now.
The point about not being HIPAA compliant is a big one, especially looking at it from a Malaysian telco perspective. Our local data privacy acts, like PDPA, have their own requirements for sensitive personal data, which health info definitely falls under. If OpenAI is subject to US legal mandates for data access, how that translates for users here in Malaysia is a real concern. We don't have direct equivalents for those types of legal orders, but it highlights the need for very clear guidelines on what data can be accessed locally if such a service were to launch here. It's not just about encryption, it's about jurisdictional control over sensitive user data.