Palantir still uses Anthropic's Claude despite Pentagon's "supply chain risk" label: CEO Alex Karp
Palantir Technologies chief executive Alex Karp has said the company continues to rely on artificial intelligence models from Anthropic despite the US Department of War recently classifying the firm as a “supply chain risk”.
In an interview with CNBC, Karp explained that while the Department of War plans to eventually phase out Anthropic's models, the process has not yet taken effect. He noted that Palantir's systems are currently integrated with Anthropic's technology and may in the future also incorporate other large language models.
Anthropic had previously partnered with Amazon Web Services and Palantir in 2024 to help provide artificial intelligence capabilities for the US military. However, the Pentagon last week formally labelled Anthropic as a supply chain risk, a designation typically reserved for companies that may have connections with foreign adversaries.
The classification means contractors and vendors working with the Pentagon must confirm that they are not using Anthropic’s Claude AI models in projects related to US military operations. Despite this designation, the shift away from Anthropic’s technology is expected to take time.
According to reports, Claude models are still being used in systems supporting US operations in Iran. The continued use highlights how deeply the technology has been embedded in existing military systems.
Anthropic has strongly challenged the designation and has filed a lawsuit against the administration of Donald Trump. The company argues that the decision is “unprecedented and unlawful” and has asked a court to pause the Pentagon’s move, warning that the action could affect hundreds of millions of dollars in government contracts.
Pentagon chief technology officer Emil Michael said replacing Anthropic's Claude models will not be immediate because they are deeply integrated into existing military infrastructure. Systems already embedded within defence operations, he explained, cannot simply be removed overnight.
Trump has reportedly said that federal agencies will be given six months to remove Anthropic products from government systems. However, an internal Pentagon memo indicates that exceptions may be granted in cases where the technology is essential for operations and where suitable alternatives are not available.
Speaking on CNBC’s programme Squawk Box, Michael said the government’s concerns are linked to policy preferences embedded within AI models during training, which could potentially conflict with the needs of the US military.