How the Anthropic-Pentagon disagreement on AI security intensified
A dispute between the United States Department of Defense and artificial intelligence company Anthropic escalated earlier this year after the AI developer refused to weaken safety guardrails on its systems for military use, triggering a series of policy decisions, corporate responses and legal action.
The standoff began in January when the Pentagon pushed the company to remove guardrails that prevent its AI models from being used for autonomous weapons targeting or domestic surveillance in the United States. Anthropic declined to relax those restrictions, arguing that such uses would conflict with its safety policies.
The disagreement quickly intensified. In February, the Pentagon urged multiple AI firms, including Anthropic, to make their models available in classified environments with fewer usage restrictions. The defence department also considered terminating its relationship with Anthropic due to the company’s insistence on maintaining limits on how the military could use its technology.
On February 23, U.S. Defense Secretary Pete Hegseth summoned Anthropic chief executive Dario Amodei to the Pentagon for discussions regarding military use of the company’s flagship AI system, Claude. The following day, the Pentagon warned Anthropic that it could face consequences, including being labelled a “supply-chain risk,” if it refused to adjust its policies.
The pressure increased further when the defence department asked major defence contractors such as Boeing and Lockheed Martin to review their reliance on Anthropic’s technology. Pentagon officials then gave the company until the evening of February 27 to agree to remove the safeguards. Anthropic refused to comply.
On February 27, U.S. President Donald Trump ordered federal agencies to cease using Anthropic’s technology, while the defence secretary formally directed the Pentagon to designate the company as a national security supply-chain risk. Anthropic immediately announced plans to challenge the decision in court.
That same day, rival AI developer OpenAI announced a deal to deploy its technology on the Department of Defense’s classified network. The following day, OpenAI clarified that the agreement included restrictions preventing its technology from being used for mass domestic surveillance, directing autonomous weapons or making high-stakes automated decisions.
In early March, multiple federal agencies — including the U.S. Departments of State, Treasury, and Health and Human Services — began phasing out Anthropic’s AI tools. Defence contractor Lockheed Martin also signalled it would follow Pentagon guidance, potentially removing Anthropic technology from its supply chain.
Treasury Secretary Scott Bessent later confirmed that the Treasury Department would remove Anthropic systems from government infrastructure. Meanwhile, a technology industry group warned that the supply-chain risk designation could create uncertainty for companies and limit the military’s access to advanced AI tools.
The Pentagon formally designated Anthropic a supply-chain risk on March 5. Shortly afterward, Amazon said it was helping customers transition Department of Defense workloads away from Anthropic models on its cloud platform, while still allowing customers to use Claude for non-Pentagon applications.
The General Services Administration also introduced stricter rules for government AI contracts and terminated Anthropic’s OneGov agreement, which had made its AI technology available to federal agencies.
On March 9, Anthropic filed a lawsuit seeking to block the Pentagon from placing it on a national security blacklist, arguing that the designation was unlawful and violated the company’s free speech and due process rights. Executives warned that the government’s actions could reduce Anthropic’s 2026 revenue by several billion dollars and damage its reputation.
The dispute has drawn support from other technology companies. On March 10, Microsoft filed a legal brief backing Anthropic’s challenge, arguing that the Pentagon’s decision directly affects Microsoft and could force costly disruptions in products and services that depend on Anthropic’s AI technology.