The race to embed artificial intelligence into healthcare is intensifying, and Anthropic's new "Claude for Healthcare" suite marks a significant development. The launch, announced at the J.P. Morgan Healthcare Conference, directly challenges OpenAI's recently introduced ChatGPT Health, as the two AI companies vie for dominance in one of the economy's most sensitive sectors.
AI for Patients and Providers
Anthropic is offering US subscribers on its Pro and Max plans the ability to link their personal health records to the Claude chatbot. The integration lets users ask Claude questions grounded in their own medical records, mirroring OpenAI's offering, which the company says already draws more than 230 million users asking health-related questions each week.
Both companies have prioritised secure data access. Anthropic has partnered with HealthEx, a startup that aggregates records from more than 50,000 health systems. OpenAI, on the other hand, chose b.well, a platform connecting to 2.2 million providers and 320 health plans. Crucially, both platforms also support integration with popular wellness apps such as Apple Health, MyFitnessPal, and Function Health, aiming for a holistic view of personal well-being.
Enhanced Tools for Medical Institutions and Research
Claude for Healthcare isn't just for individual patients; it also provides HIPAA-ready infrastructure for medical institutions. It connects to crucial industry databases, including the Centers for Medicare & Medicaid Services Coverage Database, ICD-10 medical coding data, the National Provider Identifier Registry, and PubMed. This integration promises to streamline administrative tasks like prior authorisation requests and insurance appeals by aligning clinical guidelines with patient records.
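Anthropic has not published the details of these connectors, but the underlying data sources are public. As a rough illustration only (not Anthropic's implementation), the sketch below queries two of them directly, the NPI Registry API and PubMed's E-utilities, using Python's requests library; the NPI number shown is a placeholder, not a real provider.

```python
import requests

# Illustrative only: direct calls to two of the public data sources named above.
# These are the publicly documented endpoints, not Anthropic's connectors.


def lookup_provider(npi_number: str) -> dict:
    """Look up a clinician or organisation in the public NPI Registry (CMS)."""
    resp = requests.get(
        "https://npiregistry.cms.hhs.gov/api/",
        params={"version": "2.1", "number": npi_number},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()


def search_pubmed(query: str, max_results: int = 5) -> list[str]:
    """Return PubMed IDs matching a query via NCBI E-utilities (esearch)."""
    resp = requests.get(
        "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
        params={"db": "pubmed", "term": query, "retmode": "json", "retmax": max_results},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["esearchresult"]["idlist"]


if __name__ == "__main__":
    # "1234567890" is a placeholder NPI used purely for illustration.
    print(lookup_provider("1234567890"))
    print(search_pubmed("prior authorization outcomes"))
```

In practice, an assistant-style integration would layer retrieval and summarisation on top of calls like these; the point here is simply that the registries and literature databases named above are ordinary public APIs.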
Powered by Anthropic's Claude Opus 4.5 model, the platform also extends its capabilities to pharmaceutical companies. By integrating with ClinicalTrials.gov and bioRxiv, it aims to support drug development processes. Major players like AstraZeneca, Sanofi, Banner Health, and Flatiron Health are already engaging with Anthropic on these initiatives. OpenAI similarly offers HIPAA-compliant tools for medical institutions through its GPT-5 models, reinforcing the head-to-head competition.
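ClinicalTrials.gov likewise exposes a public REST API (v2) that anyone can query. The sketch below is a minimal, illustrative call to that public endpoint rather than a view into Anthropic's integration; the condition string and page size are arbitrary examples, and the field names follow the public v2 documentation.

```python
import requests

# Illustrative only: the public ClinicalTrials.gov v2 API referenced above,
# not Anthropic's integration with it.


def find_trials(condition: str, page_size: int = 5) -> list[dict]:
    """Fetch a small page of studies for a condition from ClinicalTrials.gov."""
    resp = requests.get(
        "https://clinicaltrials.gov/api/v2/studies",
        params={"query.cond": condition, "pageSize": page_size},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("studies", [])


if __name__ == "__main__":
    for study in find_trials("type 2 diabetes"):
        ident = study["protocolSection"]["identificationModule"]
        print(ident["nctId"], "-", ident.get("briefTitle", ""))
```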
Navigating Privacy and Ethical Concerns
The rapid deployment of AI in healthcare comes amidst growing scrutiny of its ethical implications and data privacy. Recent settlements by Character.AI and Google of lawsuits alleging their chatbots contributed to mental health crises highlight the potential risks, especially for vulnerable users. Concerns that AI chatbots can exploit children have also been raised recently, as covered in "AI chatbots exploit children, parents claim ignored warnings".
Both Anthropic and OpenAI have stated that user health data will not be used to train their AI models, and conversations will remain encrypted with enhanced privacy protections. However, experts point out that when these AI health tools are provided directly to consumers, they often fall outside the direct scope of HIPAA regulations, leaving users with limited recourse in the event of a data breach. This regulatory gap is a significant concern for consumer advocates and policymakers alike.
"These tools are incredibly potent," commented Eric Kauderer-Abrams, who leads Anthropic's life sciences division. "However, for critical scenarios where every detail is significant, you should definitely verify the information."
This sentiment underscores the current limitations and the need for human oversight. The ethical considerations surrounding AI in healthcare are complex and require ongoing dialogue, as detailed in reports from organisations like the World Health Organization.
What are your thoughts on AI being integrated into personal health records? Share your concerns or hopes in the comments below.