In a debate over AI limitations, the Pentagon calls in the head of Anthropic
  • Elena
  • February 24, 2026

Amid mounting pressure from the Trump administration, Pentagon officials have summoned Dario Amodei, chief executive of Anthropic, to Washington for a meeting on Tuesday to discuss how the company’s artificial intelligence technology is deployed on classified US military systems.

The Defense Department and Anthropic signed a $200 million pilot contract last year, making Anthropic the first AI company authorised to operate on the military’s classified networks. However, a January 9 memo by Defense Secretary Pete Hegseth urging AI firms to remove restrictions on their technology prompted both sides to renegotiate the terms of the agreement.

Pentagon officials are now seeking broader usage rights. According to people briefed on the discussions, the department has insisted that contracts allow it to use AI models as it sees fit, provided the activities are lawful. At the same time, companies would be permitted to embed safety provisions — often referred to as a “safety stack” — within their systems.

Anthropic has indicated it is willing to loosen certain restrictions but has pushed for guardrails preventing its AI from being used for mass surveillance of Americans or in fully autonomous weapons systems without human oversight, sources familiar with the talks said.

The Pentagon is also advancing parallel agreements with rival AI providers. It has reportedly signed a deal with xAI, founded by Elon Musk, and is close to finalising an arrangement with Google for access to its Gemini model. Officials hope those agreements will strengthen their negotiating leverage with Anthropic.

The xAI model is widely viewed as less advanced than Anthropic’s systems, while Google’s Gemini competes directly with Anthropic and OpenAI. People briefed on the discussions say Google is keen to secure a deal, having invested heavily in government-dedicated data centres that remain underutilised.

OpenAI, whose models are already used on unclassified military networks, is reportedly not close to a classified agreement, as it continues refining its safety technology.

Pentagon officials have acknowledged that removing Anthropic from classified systems would create short-term disruptions. Military personnel reportedly use Anthropic’s Claude model alongside analytics software from Palantir to analyse classified data, and experts caution that cutting off access could hamper operational efficiency.

Anthropic has argued that it has exercised greater caution than its rivals in restricting Chinese entities’ access to its technology. In November, the company said it had banned a Chinese state-sponsored group that was allegedly using its systems in a hacking campaign targeting technology firms, financial institutions and government agencies. Earlier, OpenAI said it had disrupted separate Chinese efforts to use AI tools for surveillance.

On the eve of the Pentagon meeting, Anthropic published a blog post alleging that three Chinese AI companies had siphoned information from its systems to improve their own models, underscoring the broader geopolitical tensions surrounding advanced AI.

The high-level talks reflect a broader debate within Washington over balancing rapid military adoption of AI with safety safeguards and export control concerns, as competition intensifies among leading US AI firms for government contracts.