Anthropic has been a cautious AI company since its inception, preoccupied with safety and reluctant to race. That image persists, but a quieter reality is taking shape. In just six months, Anthropic has moved into new markets, some occupied by its own partners and customers, others by the broader enterprise software industry.
The application layer
Until early 2026, the AI value chain had a clean separation. Foundation model companies built the models, and application-layer companies like Salesforce built the workflow automation solutions. Everyone stayed in their lane.
That changed in January, when Anthropic launched Cowork plug-ins covering legal research, financial modelling, HR, and sales automation: functions that established software vendors had built entire businesses around.
This triggered nearly $1 trillion in cumulative losses across the stocks of global software, financial, data, and professional services firms, several of which were Anthropic partners. Claude for Financial Services had integrated with FactSet, PitchBook, Morningstar, Daloopa, and S&P Global; now the Claude-maker's tools compete directly with their core products.
Cybersecurity
The pattern repeated on February 23, when Anthropic announced Claude Code Security, a tool that scans codebases for vulnerabilities and suggests patches. CrowdStrike, Datadog, and Zscaler fell 11%, Fortinet and Okta dropped 6%, SentinelOne declined 5%, and Palo Alto Networks 3%. The Global X Cybersecurity ETF hit its lowest level since November 2023.
Incumbent security vendors are not standing still, but their products are built on known threat signatures and pattern detection. Claude's approach is open-ended reasoning across an unfamiliar codebase. It operates more like a high-level security researcher than a software scanner. The tool maps data flows across thousands of files, identifies complex business logic flaws, and suggests targeted software patches for human review. This shifts the focus from merely "flagging" a problem to providing a ready-to-deploy solution, drastically reducing the Mean Time to Remediate (MTTR) for developers.
What does this mean?
Anthropic's playbook seems to be: find categories where Claude's reasoning capabilities outperform existing tools, build a solution, and deploy it to enterprise customers.
A CNBC report cites an investor who says that while OpenAI has had to market six different products, Anthropic identified a sticky use case in code generation and pursued it relentlessly. Code generation led to developer security. Together, they justified the broader enterprise productivity push.
Anthropic's image got an unexpected fillip from its dispute with the US Department of War. Its business software subscriptions grew from the beginning of the year through February, the peak of the controversy, while OpenAI's subscription share fell 1.5% over the same period.