KEY POINTS
- The Justice Department filed a legal response Tuesday justifying the “national security supply chain risk” designation.
- The administration argues that Anthropic’s refusal to remove AI guardrails constitutes “conduct,” not protected “speech.”
- Defense Secretary Pete Hegseth maintains that unrestricted AI access is essential for modern military operations and domestic safety.
The Trump administration has formally defended its decision to blacklist the artificial intelligence company Anthropic, calling the move both “justified and lawful” in a high-stakes Tuesday court filing. The legal response comes in direct opposition to Anthropic’s lawsuit, which seeks to overturn a Pentagon designation labeling the firm a “national security supply chain risk.” The dispute marks an unprecedented fracture between the federal government and a leading domestic technology provider.
The conflict centers on “red lines” established by Anthropic regarding the use of its Claude AI model. The company has consistently refused to allow its technology to be used for fully autonomous lethal weapons or mass domestic surveillance of American citizens. In its filing, the Justice Department asserted that the administration’s decision was not a form of retaliation against the company’s viewpoints. Instead, officials argue that the president and the Department of War have the authority to terminate business with any entity that limits the government’s operational flexibility.
Defense Secretary Pete Hegseth issued the formal designation on March 3, effectively barring the San Francisco-based startup from securing or maintaining military contracts. The administration’s latest filing claims that Anthropic is unlikely to succeed in its argument that the blacklist violates First Amendment free speech protections. “It was only when Anthropic refused to release the restrictions on the use of its products—which refusal is conduct, not protected speech—that the President directed all federal agencies to terminate their business relationships,” the government stated.
The economic implications for the AI sector are substantial. Anthropic executives previously warned the court that the blacklist could cost the company billions of dollars in revenue for 2026. The company also alleged that government officials have been pressuring existing enterprise customers to abandon Claude in favor of competing models, such as OpenAI’s offerings. Several industry giants, including Microsoft and Google, have filed amicus briefs in support of Anthropic, citing concerns about a “scary precedent” for the entire technology ecosystem.
The administration has signaled a clear preference for AI partners that offer “unrestricted use” of their tools for national defense. Reports suggest that the government is already moving toward integrating alternative systems that align more closely with the Pentagon’s requirements. President Trump has publicly criticized Anthropic’s safety-first stance, labeling the company’s refusal to comply as a “woke” attempt to dictate terms to military commanders.
Legal experts suggest that the case will serve as a landmark test of executive power over private industry. If the court sides with the administration, it could grant the government broad authority to exclude domestic firms from the federal marketplace based on policy disagreements. Conversely, a ruling in favor of Anthropic would reinforce constitutional protections for companies that set ethical boundaries on their own technology.
As the legal battle intensifies in a California federal court, the outcome will likely define the boundaries of cooperation between Silicon Valley and the Department of War for years to come. For now, the administration remains firm in its stance that national security necessity outweighs private corporate restrictions.