Anthropic has introduced Claude for Healthcare, a new set of tools that allow healthcare providers, insurers, and even individual users to use Claude for medical work. Alongside this, the company has expanded Claude for Life Sciences, which first launched last year for researchers and clinicians.
The company says the new offerings are meant to help people better understand medical information and help healthcare organisations handle time-consuming tasks. At the same time, Anthropic is trying to walk a careful line, stressing privacy safeguards and limits on what the AI can and cannot do.
At the centre of the update is the ability for users to connect Claude to medical data sources. In the United States, Claude Pro and Max subscribers can now choose to link official medical records and data from fitness and health apps such as Apple Health and Android Health Connect. Anthropic says this is fully opt-in and can be turned off at any time.
When connected, Claude can summarise medical history, explain lab reports in simple language, spot patterns across health data, and help users prepare questions for doctor visits.
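For developers curious what this looks like in practice, here is a minimal sketch using Anthropic's public Messages API. The in-product connectors to medical records and health apps are configured in Claude's settings rather than in code, so this example simply passes lab-report text in directly; the model name is a placeholder, and the report values are invented for illustration.

```python
# Minimal sketch: ask Claude to explain a lab report in plain language.
# The connector-based record access described above is a product feature,
# not an API call; here we pass the text in ourselves.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

lab_report = """
HbA1c: 6.1% (reference range 4.0-5.6%)
LDL cholesterol: 148 mg/dL (reference range < 100 mg/dL)
"""

response = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder model name
    max_tokens=500,
    messages=[{
        "role": "user",
        "content": (
            "Explain these lab results in plain language and suggest "
            "questions I could ask my doctor. Do not diagnose.\n" + lab_report
        ),
    }],
)
print(response.content[0].text)
```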
For everyday users, the pitch is clarity, not diagnosis. Medical reports can be dense and confusing, and many people leave clinics unsure about what their test results actually mean. Claude is positioned as a helper that translates medical language into something easier to understand.
According to Anthropic, the goal is to make conversations between patients and doctors more productive. The company has been clear that Claude is not meant to replace doctors or offer treatment decisions. It includes warnings about uncertainty and regularly points users back to qualified medical professionals.
Anthropic is also targeting healthcare organisations, where paperwork often eats into clinical time. Claude for Healthcare is described as “HIPAA-ready” infrastructure, allowing it to be used in regulated environments in the US.
New connectors allow Claude to pull information from major healthcare systems, including Medicare coverage databases, ICD-10 medical coding data, and provider registries. That means the AI can help with tasks such as checking coverage, assigning diagnosis codes, and verifying provider details.
These are areas where delays often slow down care, and Anthropic argues AI can reduce that friction.
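One way to picture such a connector is as a "tool" exposed to Claude through Anthropic's tool-use API. The sketch below defines a hypothetical ICD-10 lookup tool; the tool name and the lookup service behind it are illustrative assumptions, not Anthropic's actual healthcare connectors.

```python
# Hedged sketch: a hypothetical ICD-10 lookup exposed as a tool.
import anthropic

client = anthropic.Anthropic()

icd10_tool = {
    "name": "icd10_lookup",  # hypothetical tool name
    "description": "Look up the ICD-10 code for a clinical diagnosis.",
    "input_schema": {
        "type": "object",
        "properties": {
            "diagnosis": {"type": "string", "description": "Diagnosis text"},
        },
        "required": ["diagnosis"],
    },
}

response = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder model name
    max_tokens=300,
    tools=[icd10_tool],
    messages=[{
        "role": "user",
        "content": "Code this note: type 2 diabetes without complications.",
    }],
)

# If Claude decides to call the tool, the response contains a tool_use
# block; a real integration would run the lookup and return the result.
for block in response.content:
    if block.type == "tool_use":
        print(block.name, block.input)
```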
Beyond hospitals, Anthropic is doubling down on life sciences. Claude can now connect to platforms used in clinical trials and biomedical research, including clinical trial registries and research databases.
The company says this allows Claude to assist with drafting clinical trial protocols, tracking trial performance, and preparing regulatory submissions. It is part of a broader effort to make AI useful not just in labs, but across the long and complex path of bringing new drugs to market.
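A rough sense of the registry-to-Claude flow: fetch a few records from the public ClinicalTrials.gov v2 API, then hand them to Claude to summarise. The endpoint and parameter names below reflect that API as publicly documented and should be verified against the current docs; the model name is again a placeholder.

```python
# Sketch: pull trial titles from ClinicalTrials.gov, summarise with Claude.
import anthropic
import requests

resp = requests.get(
    "https://clinicaltrials.gov/api/v2/studies",
    params={"query.cond": "type 2 diabetes", "pageSize": 3},
    timeout=30,
)
studies = resp.json().get("studies", [])

titles = [
    s["protocolSection"]["identificationModule"]["briefTitle"]
    for s in studies
]

client = anthropic.Anthropic()
summary = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder model name
    max_tokens=400,
    messages=[{
        "role": "user",
        "content": "Summarise the focus of these registered trials:\n"
                   + "\n".join(titles),
    }],
)
print(summary.content[0].text)
```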
With AI entering sensitive areas like healthcare, scrutiny is inevitable. Anthropic has stressed that health data shared with Claude will not be used to train future models. Users control what data is shared and can revoke access at any time.
The company also acknowledges limits. Claude carries disclaimers that it can make mistakes and should not be treated as a medical authority. Anthropic executives have said the system is designed to support human experts, not replace them.
This caution comes at a time when AI chatbots are facing tougher questions globally about safety, mental health risks, and misinformation.
Anthropic’s move highlights how quickly healthcare is becoming a priority for AI companies. OpenAI, Google, and now Anthropic are all racing to position their models as useful partners in medicine.
For patients, this could mean clearer information and less confusion. For doctors and hospitals, it could mean fewer hours lost to admin work. For regulators, it raises urgent questions about oversight, accountability, and long-term impact.
Healthcare may only be one sector, but it is one where trust matters most. How companies like Anthropic handle that trust will shape how far AI is allowed to go.