US Frames AI Guidelines After Anthropic's Clash With Pentagon
Nitin Waghela March 07, 2026 05:19 PM

The United States government has drawn up strict guidelines for civilian contracts with artificial intelligence companies, requiring these firms to allow "any lawful" use of their models.

The fresh rules came after a public feud between the Pentagon and Anthropic over the use of the Dario Amodei-led AI major's models for automated weapons and mass domestic surveillance programmes.

Meanwhile, the dispute ended with the Pentagon labelling Anthropic a "supply-chain risk" on Thursday, March 6.

A draft of the guidelines reviewed by the FT says AI groups seeking business with the government must grant the US an irrevocable license to use their systems for all legal purposes.

The guidance from the General Services Administration would apply to civilian contracts and is part of a broader government-wide effort to strengthen AI services procurement, the newspaper reported, adding that it mirrors measures the Pentagon is considering for military contracts.

"It would be irresponsible to the American people and dangerous to our nation for GSA to maintain a business relationship with Anthropic," Josh Gruenbaum, commissioner of the Federal Acquisition Service, a GSA subsidiary that helps procure software for the federal government, told Reuters by email.

"As directed by the President, GSA has terminated Anthropic’s OneGov deal - ending their availability to the Executive, Legislative, and Judicial branches through GSA’s pre-negotiated contracts," Gruenbaum said.

The GSA draft mandates that contractors "must not intentionally encode partisan or ideological judgments into the AI systems data outputs," the FT reported.

It requires companies to disclose whether their models have been "modified or configured to comply with any non-US federal government or commercial compliance or regulatory framework," the newspaper said.

© 2026 LIDEA. All Rights Reserved.