AI powerhouse Anthropic has secured a significant preliminary victory in its high-stakes legal battle against the United States Department of Defense.
On Thursday, US District Judge Rita Lin temporarily halted orders from President Donald Trump and Defense Secretary Pete Hegseth that would have effectively banned government agencies from using Anthropic's software.
In a pointed ruling, Judge Lin determined that the administration's actions appeared to be an effort to "cripple Anthropic" and suppress public discourse. The judge characterized the sudden ban as a likely case of "classic First Amendment retaliation," stemming from the AI firm's vocal reservations regarding the military's deployment of its technology.
The injunction ensures that government agencies and military contractors can continue using Anthropic's popular AI assistant, Claude, while the broader lawsuit makes its way through the courts.
The Pentagon did not issue a formal statement, but Emil Michael, the US Undersecretary of Defense, took to the social media platform X to condemn the decision. Michael described the ruling as a "disgrace" and alleged it was riddled with "dozens of factual errors," though he declined to provide specific examples. The White House has similarly remained silent on the matter.
Meanwhile, a spokesperson for Anthropic expressed satisfaction with the California federal court's decision, emphasizing that the company's ultimate goal is to collaborate with the government to provide "safe, reliable AI" for the American public.
The legal dispute erupted earlier this month after the administration aggressively targeted the tech firm. Following public disparagement by President Trump, Hegseth officially designated Anthropic a "supply chain risk," a severe label historically reserved for hostile foreign entities. It was the first time a domestic company had faced such a classification.
The conflict has its roots in a stalled $200 million contract expansion. The Pentagon insisted on a clause allowing "any lawful use" of Anthropic's software. However, Anthropic CEO Dario Amodei and his team pushed back, harboring deep concerns that such broad language could pave the way for their AI to be integrated into lethal autonomous weapons or mass surveillance programs targeting American citizens.
While the Defense Department argued in court that Anthropic's refusal to accept the new contract terms justified the "supply chain risk" label, Judge Lin was unpersuaded. She highlighted that public statements by Trump and Hegseth attacked the company as "woke" and run by "left-wing nut jobs," rather than addressing legitimate cybersecurity threats.
"If this were merely a contracting impasse, DoW would presumably have just stopped using Claude," Judge Lin wrote in her order. "The challenged actions, however, far exceed the scope of what could reasonably address such a national security interest."