Key Takeaways
- Sam Altman acknowledged OpenAI’s Defense Department partnership was rushed and looked “opportunistic and sloppy”
- Contract revisions will explicitly prohibit using OpenAI’s technology for domestic surveillance of American citizens
- Defense officials confirmed intelligence agencies including the NSA cannot access OpenAI’s systems
- The partnership announcement followed Trump’s directive banning Anthropic’s AI from federal use
- Altman advocated publicly for extending identical contract terms to Anthropic
OpenAI Overhauls Defense Partnership Following Sam Altman’s Acknowledgment of Missteps
Sam Altman, CEO of OpenAI, has publicly acknowledged that his company’s Defense Department partnership was poorly executed. In what he described as an internal communication, shared on X, Altman said the organization “shouldn’t have rushed” the deal’s public announcement.
“We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy,” Altman explained.
The partnership was revealed last Friday, just hours after President Donald Trump’s order prohibiting federal agencies from using Anthropic’s artificial intelligence systems, and only hours before U.S. military operations against Iran began.
The announcement’s timing triggered substantial criticism across social media platforms. Numerous users reportedly uninstalled ChatGPT and migrated to Anthropic’s Claude application in response.
OpenAI is now working with Defense Department officials to modify the agreement’s language. The changes are meant to reflect the company’s core values more explicitly in the official documentation.
A critical new provision specifies that “the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.” Defense officials have additionally confirmed that intelligence organizations like the NSA will not have access to OpenAI’s platforms.
Providing services to such agencies would require a separate contractual amendment, Altman indicated.
The Anthropic Controversy That Preceded OpenAI’s Deal
These developments emerged from failed negotiations between Anthropic and military officials. Anthropic had demanded assurances preventing its technology from being deployed for surveilling Americans domestically or creating fully autonomous weapon systems.
Defense Secretary Pete Hegseth announced Friday that Anthropic would receive a supply-chain threat classification after the talks broke down. Government representatives had reportedly spent months criticizing Anthropic for what they saw as an excessive focus on AI safety.
Public awareness of the conflict grew after revelations that Anthropic’s Claude AI had supported U.S. military operations during a January mission to apprehend Venezuelan president Nicolás Maduro. Anthropic made no public statement opposing that deployment.
Anthropic had actually pioneered AI integration on the Defense Department’s secure classified infrastructure through an agreement established last year.
Altman Advocates for Fair Treatment of Competitor Anthropic
Altman’s statement also addressed the consequences facing Anthropic. He disclosed weekend conversations with government representatives where he challenged the supply-chain risk designation.
“I reiterated that Anthropic should not be designated as a supply chain risk, and that we hope the Department of Defense offers them the same terms we’ve agreed to,” he stated.
Anthropic was established in 2021 by former OpenAI researchers who departed following disputes over the organization’s strategic direction.
The company has cultivated an identity centered on responsible AI development. Pentagon representatives have not issued public commentary regarding Altman’s proposal for equivalent contractual terms.