Google has signed a new artificial intelligence agreement with the U.S. Department of Defense, raising concerns over the scope of military applications. The deal, described as covering "any lawful" use of Google's AI technologies, has sparked debate about the ethical boundaries of corporate involvement in defense projects.
The contract allows the Pentagon to leverage Google's AI tools for unspecified purposes, provided they comply with U.S. and international law. Critics argue that the broad language leaves room for interpretation and could lead to the weaponization of AI, despite Google's previous pledges to avoid such applications.
This marks a significant shift from 2018, when employee backlash over Project Maven, a military program for analyzing drone imagery, led Google to adopt AI principles that prohibited the use of its technology in weapons systems. Recent moves, however, suggest the company has relaxed those restrictions as it seeks to expand its government contracts.
No details have been released on the financial terms of the deal or the specific AI systems involved. The announcement has reignited debate among tech ethics advocates over the role of major technology firms in national security.