Key Points
- Federal agencies received orders to cease all use of Anthropic’s AI systems following a national security supply-chain risk determination.
- The Pentagon finalized an agreement with OpenAI to integrate its AI technology into classified defense systems just hours after severing ties with Anthropic.
- A $200 million defense contract with Anthropic fell apart when the company declined to permit use of its AI for autonomous weaponry or domestic surveillance programs.
- While OpenAI claims its Pentagon agreement contains identical limitations to those Anthropic requested, skeptics doubt the company’s commitment to enforcement.
- Anthropic plans legal action against the supply-chain risk classification, arguing the designation lacks legal foundation.
The federal government terminated its partnership with artificial intelligence developer Anthropic last Friday, officially designating the company a national security supply-chain threat. Later that same day, competitor OpenAI announced a major contract to integrate its AI systems into the Pentagon’s secure military networks.
President Donald Trump issued a directive requiring all federal departments to cease operations involving Anthropic’s technology immediately. Government entities currently utilizing the platform received a six-month deadline to migrate away from the company’s Claude AI models.
Pete Hegseth, the Secretary of Defense, declared via X that Anthropic represented a “Supply-Chain Risk to National Security.” Such classifications are typically reserved for entities connected to hostile foreign nations such as China.
The decision carries potential ramifications beyond direct government contracts. Defense contractors may now face requirements to demonstrate they’ve eliminated all Claude AI usage from their operations. Anthropic counts major technology firms among its backers, including Nvidia, Amazon, and Google.
Anthropic had achieved a significant milestone as the first artificial intelligence company authorized to operate within the Pentagon’s secure classified infrastructure. The July agreement carried a potential value reaching $200 million.
Negotiations collapsed when Anthropic declined to provide assurance that its AI technology would remain accessible for any lawful military application. The company maintained firm boundaries against deployment in autonomous weapons systems and domestic mass surveillance operations.
Pentagon officials argued that Anthropic should rely on the military’s commitment to legal compliance. On Thursday, Anthropic’s CEO Dario Amodei stated the organization “cannot in good conscience” accept those conditions.
OpenAI Captures the Contract
Sam Altman, CEO of OpenAI, revealed the new Pentagon partnership Friday evening through X. According to Altman, the contract incorporates the identical restrictions on mass surveillance and autonomous weapons systems that Anthropic had sought.
Altman also said OpenAI asked the government to extend the same terms to competing AI firms. Elon Musk’s xAI had previously received military clearance for classified system deployment.
Greg Brockman, OpenAI’s President, and his spouse contributed $25 million last year to a political action committee supporting Trump. The couple continues to fund efforts aligned with Trump’s artificial intelligence policy objectives in upcoming elections.
Anthropic Prepares Legal Challenge
Anthropic said it was “deeply saddened” by the classification and announced plans to pursue legal remedies. The company characterized the determination as “legally unsound” and warned that it sets a troubling precedent for American technology companies negotiating with the government.
The General Services Administration confirmed plans to eliminate Anthropic from its catalog of approved products available to federal agencies.
Some observers criticized OpenAI’s timing. Democratic political figure Christopher Hale announced on X that he had canceled his ChatGPT subscription and switched to Claude Pro Max.
Anthropic emerged in 2021 when former OpenAI researchers departed due to concerns about inadequate safety prioritization. Both organizations have secured billions in recent funding and are exploring potential public stock offerings.
An earlier episode also figures in the dispute. Following Claude’s involvement in a January operation in Venezuela, an Anthropic staff member contacted a Palantir associate about how the technology had been used. Defense officials interpreted the inquiry as inappropriate interference.
Anthropic characterized the communication as standard technical coordination between collaborative partners.