The US Department of War has signed agreements with eight major technology companies to deploy artificial intelligence across classified military networks. The partnerships include SpaceX, OpenAI, Google, Nvidia, Reflection, Microsoft, Amazon Web Services, and Oracle. The stated goal is to "accelerate the transformation toward establishing the United States military as an AI-first fighting force" and strengthen "decision superiority across all domains of warfare."
The tools are cleared for "lawful operational use," wording that has drawn controversy. Anthropic, notably absent from the list, previously pushed back against similar language, with CEO Dario Amodei arguing that current laws leave loopholes for activities such as mass surveillance built on commercial data sets. In response, the Pentagon labeled Anthropic a supply chain risk, and the Trump administration ordered federal agencies to stop using its technology. Anthropic has since filed a lawsuit.
In a leaked memo, Amodei dismissed OpenAI's Pentagon contract as "80% safety theater." OpenAI, for its part, has set three red lines: no domestic mass surveillance, no autonomous weapons, and no automated high-risk decisions. Legal experts, however, question whether these commitments are enforceable without explicit contractual carve-outs.