Key Takeaways
- Defense Department officials argue that Claude's constitutional framework bakes policy preferences into the model that could undermine military operations.
- Anthropic has become the first U.S.-based firm to receive a supply chain risk designation from the Pentagon.
- All defense contractors working with the Pentagon must now certify that they aren’t using Claude in any capacity.
- The AI company filed legal action against the Trump administration on Monday, characterizing the designation as “unprecedented and unlawful” with hundreds of millions at stake.
- Palantir’s CEO Alex Karp revealed his defense contracting firm continues deploying Claude for military-related tasks.
Earlier this month, the Pentagon took the extraordinary step of classifying Anthropic as a supply chain security concern—an unprecedented designation for an American technology firm. This label has traditionally been reserved for foreign entities considered national security threats.
During a Thursday appearance on CNBC’s “Squawk Box,” Defense Department Chief Technology Officer Emil Michael provided insight into the rationale. He pointed to Claude’s foundational “constitution”—Anthropic’s framework for guiding AI behavior—as creating inherent policy leanings that could impact the system’s military applications.
“We can’t have a company that has a different policy preference that is baked into the model through its constitution, its soul, its policy preferences, pollute the supply chain so our war fighters are getting ineffective weapons, ineffective body armor, ineffective protection,” Michael said.
The latest iteration of Claude’s constitutional document was released by Anthropic in January 2026. According to the company, this framework serves a “crucial role” in model development and “directly shapes Claude’s behavior.”
Under this new designation, all Pentagon suppliers and contractors face requirements to confirm they’ve eliminated Claude from any defense-related projects.
Michael said the decision was “not meant to be punitive.” He also noted that the U.S. government accounts for only a “tiny fraction” of Anthropic’s overall revenue.
Former OpenAI researchers established Anthropic in 2021. The company has built substantial enterprise partnerships, including early agreements with defense agencies.
Anthropic responded aggressively to the Pentagon’s action. The company launched litigation Monday against the Trump administration, describing the supply chain classification as “unprecedented and unlawful.”
According to court documents, Anthropic asserts it faces “irreparable” damage with contract values reaching into the hundreds of millions now uncertain.
DOD Denies Directly Contacting Companies
Michael rejected Anthropic’s assertions that government representatives were directly contacting businesses to discourage Claude usage. He characterized these allegations as “rumors.”
“The Department of War is not reaching out to companies to tell them what to do, so long as it’s not in our supply chain,” Michael said.
He recognized that phasing out Claude won’t happen instantly. The Pentagon has established a transition strategy, he explained, emphasizing that extracting deeply embedded AI systems requires far more effort than uninstalling basic software.
Military Operations Continue Using Claude
Notably, Claude remains operational in certain military settings despite the designation. CNBC has reported that the AI platform supported U.S. military activities related to Iran.
Alex Karp, CEO of major defense contractor Palantir, acknowledged Thursday that his organization maintains its use of Claude.
Michael reiterated that the agency cannot “just rip out” Anthropic’s technology overnight.