Key Points
- On March 3, the Defense Department placed Anthropic on its blacklist after the company rejected demands to remove safeguards barring its AI from autonomous weapons and surveillance applications.
- In a March 17 court filing, the Trump administration argued the blacklisting was legally justified and dismissed Anthropic’s First Amendment claims.
- Federal officials characterized Anthropic as presenting an “unacceptable risk” to defense supply networks, citing concerns about potential AI system manipulation during military deployments.
- The AI company has initiated two legal challenges — one in California federal court and another before a D.C. appellate panel — contesting the government’s action.
- Microsoft filed an amicus brief backing Anthropic, cautioning that the designation threatens to undermine America’s artificial intelligence industry.
A legal battle between the United States government and Anthropic, the artificial intelligence firm behind Claude, is intensifying in federal court over a Pentagon designation that threatens the company’s financial future.
Defense Secretary Pete Hegseth issued the national security supply chain risk designation against Anthropic on March 3, following unsuccessful negotiations spanning several months between Pentagon officials and the AI developer.
At the heart of the controversy lies Anthropic’s steadfast position on usage limitations for its artificial intelligence systems. The company declined Pentagon requests to eliminate safeguards preventing the technology’s deployment in autonomous weaponry and domestic surveillance programs.
Pentagon officials deemed these restrictions incompatible with military requirements. In legal documents submitted to the court, defense officials contended that maintaining Anthropic’s involvement in military technology infrastructure would create “unacceptable risk” throughout defense supply networks.
Government attorneys also raised alarm over Anthropic’s technical capability to “disable its technology or preemptively alter the behavior of its model” in the midst of military operations, should the company determine its ethical guidelines were being violated.
Administration Argues Actions Constitute Conduct Rather Than Expression
The Justice Department, representing the Trump administration’s position, rejected Anthropic’s constitutional free speech arguments. Government lawyers characterized the matter as involving contractual obligations and national security imperatives rather than protected expression.
The legal brief contended that Anthropic’s unwillingness to modify its usage restrictions — which federal attorneys labeled “conduct, not protected speech” — prompted President Trump to order all government agencies to terminate relationships with the artificial intelligence firm.
Anthropic submitted its primary legal challenge in California federal court on March 9. Company representatives described the blacklisting as “unprecedented and unlawful,” asserting violations of both First Amendment protections and constitutional due process guarantees.
A companion lawsuit was lodged with a Washington, D.C., appellate court targeting an additional Pentagon classification issued under separate statutory authority — a designation that could extend the prohibition across the entire federal government.
Tech Giant Microsoft Backs Anthropic’s Legal Challenge
Microsoft, which incorporates Anthropic’s Claude system into its offerings while maintaining separate defense contracts, submitted a friend-of-the-court brief supporting the AI company’s position. The technology giant cautioned that the designation risks damaging America’s artificial intelligence sector.
“This is not the time to put at risk the very AI ecosystem that the administration has helped to champion,” Microsoft wrote.
Anthropic representatives indicated they were analyzing the government’s recent court submission. Company officials characterized the litigation as “a necessary step to protect our business, our customers, and our partners.”
Anthropic has also challenged government assertions that its technology constitutes a security threat. Company leadership maintains that artificial intelligence has not yet reached safety thresholds appropriate for autonomous weapons deployment and opposes surveillance applications on ethical grounds.
The White House did not respond to a request for comment.
Company leadership has projected the designation could cost the company billions of dollars in revenue by 2026. This type of blacklisting has historically been reserved for entities from adversarial foreign nations, such as Chinese telecommunications manufacturer Huawei.