
Anthropic unveils healthcare AI tools days after OpenAI

Anthropic's new healthcare AI tools arrive hot on OpenAI's heels. Here's how the company plans to revolutionise patient care.

Intelligence Desk · 3 min read

AI Snapshot

The TL;DR: what matters, fast.

Anthropic has launched "Claude for Healthcare", a new suite of AI tools designed for both individual patients and medical institutions.

The platform enables US subscribers to link personal health records for medical insights and offers HIPAA-ready infrastructure for healthcare providers.

Claude for Healthcare integrates with key industry databases and health apps, aiming to streamline administrative tasks and support pharmaceutical research.

Who should pay attention: Healthcare providers | AI developers | Patients

What changes next: Competition in healthcare AI will continue to escalate.

The race to embed artificial intelligence into healthcare is intensifying, with Anthropic's new "Claude for Healthcare" suite marking a significant development. This launch, announced at the JPMorgan Healthcare Conference, directly challenges OpenAI's recently introduced ChatGPT Health, as both tech giants vie for dominance in one of the economy's most sensitive sectors.

AI for Patients and Providers

Anthropic is offering US subscribers on its Pro and Max plans the ability to link their personal health records to the Claude chatbot. This integration allows users to gain medical insights, mirroring the functionality of OpenAI's offering, which has already garnered over 230 million weekly users asking health-related questions.

Both companies have prioritised secure data access. Anthropic has partnered with HealthEx, a startup that aggregates records from more than 50,000 health systems. OpenAI, on the other hand, chose b.well, a platform connecting to 2.2 million providers and 320 health plans. Crucially, both platforms also support integration with popular wellness apps such as Apple Health, MyFitnessPal, and Function Health, aiming for a holistic view of personal well-being.

Enhanced Tools for Medical Institutions and Research

Claude for Healthcare isn't just for individual patients; it also provides HIPAA-ready infrastructure for medical institutions. It connects to crucial industry databases, including the Centers for Medicare & Medicaid Services Coverage Database, ICD-10 medical coding data, the National Provider Identifier Registry, and PubMed. This integration promises to streamline administrative tasks like prior authorisation requests and insurance appeals by aligning clinical guidelines with patient records.

Powered by Anthropic's Claude Opus 4.5 model, the platform also extends its capabilities to pharmaceutical companies. By integrating with ClinicalTrials.gov and bioRxiv, it aims to support drug development processes. Major players like AstraZeneca, Sanofi, Banner Health, and Flatiron Health are already engaging with Anthropic on these initiatives. OpenAI similarly offers HIPAA-compliant tools for medical institutions through its GPT-5 models, reinforcing the head-to-head competition.

The rapid deployment of AI in healthcare comes amidst growing scrutiny regarding its ethical implications and data privacy. Recent settlements by Character.AI and Google regarding lawsuits alleging their chatbots contributed to mental health crises highlight the potential risks, especially for vulnerable users. Concerns about AI chatbots exploiting children have also been raised recently, as discussed in AI chatbots exploit children, parents claim ignored warnings.

Both Anthropic and OpenAI have stated that user health data will not be used to train their AI models, and conversations will remain encrypted with enhanced privacy protections. However, experts point out that when these AI health tools are provided directly to consumers, they often fall outside the direct scope of HIPAA regulations, leaving users with limited recourse in the event of a data breach. This regulatory gap is a significant concern for consumer advocates and policymakers alike.

"These tools are incredibly potent," commented Eric Kauderer-Abrams, who leads Anthropic's life sciences division. "However, for critical scenarios where every detail is significant, you should definitely verify the information."

This sentiment underscores the current limitations and the need for human oversight. The ethical considerations surrounding AI in healthcare are complex and require ongoing dialogue, as detailed in reports from organisations like the World Health Organization.

What are your thoughts on AI being integrated into personal health records? Share your concerns or hopes in the comments below.


This is a developing story

We're tracking this across Asia-Pacific and may update with new developments, follow-ups and regional context.
