Key Takeaways
- The U.S. Department of Defense has reached a classified AI agreement with Google, placing it alongside OpenAI and xAI
- Pentagon officials can deploy Google’s AI technology for “any lawful government purpose” under the arrangement
- While safety mechanisms remain in place, Google lacks veto authority over legitimate government operations
- Domestic mass surveillance and fully autonomous weaponry without human control are explicitly prohibited
- The deal follows the Pentagon's designation of Anthropic as a supply-chain risk after Anthropic maintained strict guardrails
According to reporting from The Information released Tuesday, Google has finalized an arrangement with the U.S. Department of Defense to deliver AI capabilities for classified operations.
This partnership places Google in the same category as OpenAI and Elon Musk’s xAI, which already maintain contracts supplying artificial intelligence to the Pentagon’s classified infrastructure.
GOOGL stock rose 1.72% in response to the announcement.
Back in 2025, the Pentagon established agreements valued at up to $200 million apiece with leading AI companies, including Anthropic, OpenAI, and Google.
These classified systems support critical functions such as strategic mission planning and weapons coordination.
Key Terms of the Agreement
The arrangement permits the Pentagon to utilize Google’s artificial intelligence technology for “any lawful government purpose.”
Google must assist in modifying its AI safety protocols and filtering systems when requested by government officials.
The framework explicitly excludes applications involving domestic mass surveillance or weaponry operating autonomously without proper human supervision.
Nevertheless, Google cannot override or block legitimate government operational choices.
A Google representative stated the organization “remains committed to the consensus that AI should not be used for domestic mass surveillance or autonomous weaponry without appropriate human oversight.”
“We believe that providing API access to our commercial models, including on Google infrastructure, with industry-standard practices and terms, represents a responsible approach to supporting national security,” the spokesperson added.
The Anthropic Precedent
This development follows a notable dispute between Anthropic and the DoD that occurred earlier this year.
Anthropic declined to eliminate safety restrictions that prevented its AI systems from supporting autonomous weapons or domestic surveillance activities.
In response, the Pentagon designated Anthropic a supply-chain risk — sending an unmistakable message to competing AI companies about the consequences of non-cooperation.
Google’s arrangement suggests a more accommodating position regarding these protective measures.
While Pentagon officials have publicly declared no intention to conduct mass surveillance on American citizens or deploy weapons lacking human involvement, they’ve insisted on enabling “any lawful use” of AI across their networks.
The U.S. Department of Defense — recently rebranded as the Department of War by President Donald Trump — has not yet provided comment on the matter. Reuters was unable to independently confirm the reporting.
Google acknowledged its ongoing support for government organizations across both classified and unclassified initiatives.
Reporting from the Washington Post on Monday disclosed that hundreds of Google staff members had signed a petition addressed to CEO Sundar Pichai, calling on the company to decline classified AI collaborations with the Pentagon.