Pentagon Threatens to Label AI Giant Anthropic a Supply Chain Risk Over Military Safety Guardrails

  • Defense Secretary Pete Hegseth is considering cutting ties with Anthropic after months of heated negotiations.
  • The dispute centers on Anthropic’s refusal to remove restrictions on autonomous weapons and mass surveillance.
  • A supply chain risk designation would force all defense contractors to stop using Anthropic’s Claude AI.

The United States Department of Defense is locked in a high-stakes standoff with artificial intelligence startup Anthropic. Defense Secretary Pete Hegseth is reportedly close to designating the company a supply chain risk, a move typically reserved for foreign adversaries rather than domestic partners. The escalating tension follows months of disagreement over how the military may use Anthropic’s flagship AI model, Claude.

At the heart of the conflict are Anthropic’s strict ethical guardrails. The company insists that its technology must not be used to develop fully autonomous weapons systems, and it prohibits the use of its models for mass surveillance of American citizens. Pentagon officials argue these limitations hamper military flexibility and could keep warfighters from prevailing in urgent situations.

The relationship began on friendlier terms last summer, when the Defense Department awarded Anthropic a contract worth up to $200 million to customize AI for national security. Claude remains the only frontier AI model authorized for use on the military’s classified networks. Now, however, the Pentagon demands that AI providers make their models available for “all lawful purposes,” free of company-specific restrictions.

Tensions reached a breaking point after reports that Claude was used during a mission to capture Nicolás Maduro. Anthropic reportedly questioned whether the operation violated its usage policies regarding kinetic fire and violence, an inquiry that frustrated defense leaders who believe private companies should not second-guess military operations. Chief Pentagon spokesman Sean Parnell stated that partners must be willing to help troops win in any fight.

If the Pentagon proceeds with the supply chain risk designation, the impact would be sweeping. Every company doing business with the U.S. military would have to certify that it does not use Claude, potentially forcing thousands of defense contractors to purge Anthropic’s tools from their internal workflows. A senior official acknowledged that disentangling the technology would be difficult, but said the company must pay a price.

Anthropic maintains that it remains committed to supporting national security objectives in good faith. A company spokesperson emphasized that the firm is working to resolve these complex issues while protecting democratic values. CEO Dario Amodei has previously warned that AI requires oversight to prevent dangers such as uncontrollable drone swarms, and the company views its safeguards as essential protections against authoritarian-style surveillance.

Meanwhile, the Pentagon is moving forward with other AI providers. OpenAI, Google, and xAI have reportedly shown more flexibility toward the military’s demand for unrestricted use, and OpenAI recently made a customized version of ChatGPT available to millions of defense employees on an unclassified platform. The outcome of the Anthropic dispute will likely set the precedent for future military AI contracts.