Google has inked a contract with the U.S. Department of Defense, granting the Pentagon access to the company's AI models for classified operations. The deal, reported by The Information, allows the military to use the technology for "any lawful government purpose."
The signing coincided with an open letter from over 600 Google employees, many from its DeepMind AI lab, urging CEO Sundar Pichai to reject any classified military collaboration. "We want to see AI benefit humanity; not to see it being used in inhumane or extremely harmful ways," the employees wrote, as cited by the Washington Post.
Their key concern is that classified contracts prevent Google's own representatives from understanding how the technology is deployed. "The only way to guarantee that Google does not become associated with such harms is to reject any classified workloads," the letter states.
A Google Public Sector spokesperson described the new contract as an extension of a November agreement, emphasizing the company's commitment to AI usage that avoids "domestic mass surveillance or autonomous weaponry without appropriate human oversight."
Legal Loopholes Undermine Safety Pledges
The contract includes a clause stating the AI "is not intended for domestic mass surveillance or autonomous weapons without appropriate human oversight." However, it also notes: "This Agreement does not confer any right to control or veto lawful Government operational decision-making."
Legal experts argue the clause carries no binding force. Charlie Bullock, a lawyer and Senior Research Fellow at the Institute for Law and AI, told The Information that the phrasing "is not intended for, and should not be used for" merely expresses a preference rather than imposing an obligation, so using the AI that way would not constitute a breach. Amos Toh of NYU's Brennan Center added that "appropriate human oversight" does not necessarily require a human between target identification and a strike order. The Pentagon has not ruled out fully autonomous weapons.
Google's deal appears more permissive than similar arrangements. OpenAI retained full control over its "Safety Stack" in its February Pentagon contract, while Google committed to adjusting its safety filters upon government request. Elon Musk's xAI also holds a classified AI contract with the Pentagon.
Earlier this year, Anthropic was excluded from a Pentagon deal after demanding contractual guarantees against mass surveillance and autonomous weapons, an exclusion the company is now challenging in court. In 2018, Google withdrew from Project Maven after employee protests, pledging to avoid AI for weapons or surveillance. It quietly dropped those restrictions last year.
Project Maven, now operated by Palantir, has been used for target selection in the Iran conflict, with support from Anthropic's Claude model.