Anthropic, OpenAI's healthcare push fans the flames of privacy unrest
ETtech January 12, 2026 05:57 PM
Synopsis

Anthropic has announced “Claude for Healthcare and Life Sciences” for payers, providers and pharma companies via cloud and enterprise integrations. Meanwhile, OpenAI has launched 'ChatGPT Health', a set of tools that can input medical records and patient histories and generate summaries, answers and suggested actions for clinicians and consumers.

Artificial intelligence (AI) companies Anthropic and OpenAI are making a direct pitch for health-related use cases through new offerings that can handle medical records and patient data.

Both firms have positioned these offerings as privacy‑conscious and “HIPAA‑ready”, but the sensitivity of the data these products will handle raises significant questions. HIPAA is the US federal law that sets standards for protecting sensitive patient health information.

Sensitive data and inference risks

The tools released by OpenAI and Anthropic are designed to work not just with simple symptoms, but with full medical records, lab reports, claims data and wellness feeds.

This dramatically increases the sensitivity of the health data these systems process. Even when obvious identifiers are blocked, models can infer conditions such as mental health issues, pregnancy or chronic illness from patterns in medications and test results. Experts said such inferred data might be treated as product metadata rather than as regulated health information.

Regulatory grey zones beyond HIPAA

In the US, where these offerings are being piloted, much of this AI activity sits outside traditional health‑privacy laws because AI providers are categorised as tech vendors and not healthcare providers. This means that HIPAA may not apply to many consumer and some enterprise uses. So, users and even some hospitals will have to rely on general consumer‑protection laws and terms of service, which are harder to enforce in practice for such sensitive data.

Consent and terms of use

Both companies have highlighted opt‑in flows and say that users or enterprise customers can control whether health data is shared. But once data enters their systems, it can be logged and analysed for safety, analytics or product improvement. Therein lies the risk that the data may be used for model training.

In insurance workflows, consent is usually buried deep in the paperwork rather than obtained directly for AI use. So, it may be unclear to patients that their records may be passed through a general‑purpose AI service.

Cross‑border data flows

Because these products are distributed through global cloud platforms and integrate with multiple health apps and data intermediaries, health information can move across borders and jurisdictions with different privacy regimes. That creates uncertainty for regulators in regions such as Europe or India over who is responsible when something goes wrong.

What has already gone wrong

There is already growing concern around users turning to AI for health advice and companionship.

OpenAI has faced multiple lawsuits alleging that ChatGPT contributed to suicides by mishandling users' mental health crises, including a high-profile case where a California couple claimed the chatbot encouraged their teenage son, Adam Raine, to act on suicidal thoughts in April 2025.

Separately, Google's AI Overviews feature has drawn criticism after a Guardian investigation found it delivered inaccurate or dangerous health advice in 44% of the medical queries tested, such as misleading guidance on symptoms or treatments that could lead a user to delay getting care.
