The Pentagon Designated Anthropic as a ‘Supply Chain Risk.’ Here’s What the Label Actually Means

Anthropic’s CEO, Dario Amodei, says the company will fight back through legal action. The move is unprecedented for a U.S. company.

By Ben Sherry

Illustration: Inc; Photo: Anthropic, Getty Images

On Wednesday, March 4, the United States Department of War officially designated Anthropic, an AI industry leader, as a “supply chain risk to America’s national security.”

The move represented a major escalation in the negotiations between Anthropic and the Pentagon regarding the military’s use of Claude, the company’s family of AI models (and Inc.’s Co-Founder of The Year). But Anthropic now says this designation actually has a very narrow scope, and will only affect companies that use Claude in their specific dealings with the Department of War.

Anthropic is now the first American company to ever be designated a supply chain risk. The designation is typically reserved for foreign companies, usually ones that have close relationships with their home country’s government. For example, in 2020, the Federal Communications Commission designated Huawei as a supply chain risk due to its ties to the Chinese government. That designation was partially executed to prevent Huawei’s equipment from being used in the building of the United States’ 5G cellular infrastructure.

According to federal law, being designated a supply chain risk by the U.S. military means that military leadership has determined that an “adversary” could potentially “sabotage, maliciously introduce unwanted function, or otherwise subvert the design, integrity, manufacturing, production, distribution, installation, operation, or maintenance” of a national security system in order to “surveil, deny, disrupt, or otherwise degrade the function, use, or operation of such system.”

Before Anthropic even received the designation, on Friday, February 27, President Trump posted on Truth Social that he had directed all federal agencies to stop using Claude. In an interview with Politico, Trump said: “I fired Anthropic. Anthropic is in trouble because I fired [them] like dogs.”

The roots of the issue go back a couple of years. In 2024, Anthropic and defense contractor Palantir signed a deal with the Biden administration (a deal the Trump administration expanded) that allowed the military to use Claude in classified environments, but it included carveouts that prevented the AI from being used to power fully autonomous lethal weapons or to conduct mass domestic surveillance on American citizens.

Earlier this year, the Pentagon reopened negotiations with Anthropic in a bid to get those restrictions removed. The Trump administration claims that these exceptions—and Anthropic CEO Dario Amodei’s refusal to budge on the issue—amount to Anthropic making military policy decisions for the United States government and the American people.

Also on February 27, Secretary of War Pete Hegseth tweeted that Anthropic had “delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon,” and said that he had directed the Department of War to designate Anthropic as a supply chain risk.

“Effective immediately,” Hegseth wrote, “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.” The designation became official on March 4.

In a statement posted on the evening of March 5, Amodei confirmed that Anthropic had received formal notification of its supply chain risk designation, and said that the company has “no choice but to challenge it in court.” Amodei also attempted to clarify the scope of the designation, writing that it “plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts,” citing the legal statute that covers the process by which military leaders classify supply chain risks.

Basically, this means that if your company does business with the United States military, all you need to worry about is making sure that Claude isn’t used in the products and services that you provide to the military.

The Anthropic CEO wrote that the law “requires the Secretary of War to use the least restrictive means necessary to accomplish the goal of protecting the supply chain.” Even for Anthropic customers who work with the Department of War, he said, “the supply chain risk designation doesn’t (and can’t) limit uses of Claude or business relationships with Anthropic if those are unrelated to their specific Department of War contracts.”

Amodei’s assertions got some serious backup from Microsoft and Google. According to CNBC, a Microsoft spokesperson said that the company’s lawyers “have studied the designation and have concluded that Anthropic products, including Claude, can remain available to our customers — other than the Department of War — through platforms such as M365, GitHub, and Microsoft’s AI Foundry and that we can continue to work with Anthropic on non-defense related projects.”

CNBC also reported that a Google spokesperson said that “the Determination does not preclude us from working with Anthropic on non-defense related projects, and their products remain available through our platforms, like Google Cloud.”

Mitchell Couper, vice president of data and analytics at procurement solutions company SpendHQ, tells Inc. that the designation “impacts you way more if you’re in the public sector.” Companies that don’t do regular business with the military have little to worry about, he says, but contractors who use Anthropic in their work with the U.S. military, such as Palantir, are likely concerned. According to The Washington Post, Palantir is currently using Claude to power its Maven Smart System, which is reportedly locating targets in real time in the ongoing war in Iran.

David Bader, a computer science professor at the New Jersey Institute of Technology and founder of the school’s Institute for Data Science, says that the Pentagon’s move is highly unusual. “When two parties can’t reach agreement on contract terms,” he says, “the normal course of action is to part ways.”

Bader adds that “retaliating against a company such as Anthropic with a national security-grade penalty for declining terms sends a clear message: Accept whatever the government demands, or face consequences designed for foreign adversaries.”

https://www.inc.com/ben-sherry/the-pentagon-designated-anthropic-as-a-supply-chain-risk-heres-what-the-label-actually-means/91310393

David A. Bader
Distinguished Professor and Director of the Institute for Data Science

David A. Bader is a Distinguished Professor in the Department of Computer Science at New Jersey Institute of Technology.