Google has entered into a classified agreement with the U.S. Department of Defense, granting the Pentagon unfettered access to its artificial intelligence models for "any lawful government purpose." The deal, first reported by The Information, covers commercial AI systems hosted on Google’s infrastructure, though the full terms of the contract remain classified.
According to an anonymous source within the company, the agreement includes a non-negotiable clause: Google retains no control over how the Pentagon deploys the technology. While the search giant asserts it will not enable domestic mass surveillance or autonomous weapons without "appropriate human oversight," critics argue the contract lacks enforceable safeguards. A Google spokesperson reiterated the company’s stance, stating:
> We believe providing API access to our commercial models—with industry-standard safeguards—represents a responsible approach to supporting national security.
At the same time, the spokesperson emphasized that Google opposes the use of its AI for mass surveillance or autonomous weaponry, even where human oversight is in place.
Internal backlash grows over military AI partnerships
The Pentagon’s classified AI access has drawn sharp criticism from Google employees. Over 560 workers signed an open letter to CEO Sundar Pichai, urging the company to reject military applications of its technology. The letter warns that AI systems, as currently designed, can centralize power and perpetuate mistakes with severe consequences:
- Civil liberties at home and abroad are already at risk due to misuses of AI.
- The tech is being deployed in ways that could cause "inhumane or extremely harmful" outcomes.
- Employees argue that Google’s involvement in military AI undermines ethical responsibility.
"Human lives are already being lost," the letter states, "and we cannot ignore our role in enabling systems that may harm people."
Rival AI firms navigate Pentagon partnerships differently
Google joins OpenAI and Elon Musk’s xAI in providing the Pentagon with classified AI access. OpenAI’s deal was reported earlier this year, while xAI’s Grok model reportedly powers classified military systems. These agreements contrast sharply with Anthropic’s approach. The company previously refused Pentagon demands to remove safety restrictions on weapon and surveillance-related AI applications.
The refusal reportedly angered President Trump, leading to a federal blacklist of Anthropic’s services. The move underscores a growing divide between AI developers prioritizing ethical constraints and the Pentagon’s expanding appetite for unfiltered access to advanced models. A Google spokesperson did not provide further clarification when contacted for additional details.
What’s next for AI in military applications?
The classified nature of these contracts obscures critical details, leaving open questions about oversight mechanisms and long-term implications. While Google frames its deal as a responsible contribution to national security, critics warn that the absence of enforceable controls risks normalizing unchecked AI deployment in high-stakes military contexts. As AI models grow more powerful, the debate over their ethical use in defense operations will likely intensify—both inside tech companies and across regulatory bodies.
AI summary
Google’s classified AI agreement with the U.S. Department of Defense expands the company’s role in national security, while its employees and outside experts voice ethical concerns.