U.S. District Judge Rita Lin of the Northern District of California in San Francisco granted Anthropic's request for a preliminary injunction in its legal battle against the Trump administration, calling the government's actions "illegal First Amendment retaliation."

This decision temporarily halts the government’s actions to blacklist the AI company and prevents the enforcement of a directive from President Donald Trump that bans federal agencies from using Anthropic’s Claude models.

“These broad measures do not appear to be directed at the government’s stated national security interests. If the concern is the integrity of the operational chain of command, the Department of War [Defense] could just stop using Claude. Instead, these measures appear designed to punish Anthropic,” the judge wrote in a 42-page ruling.

The judge found that the government’s designation of Anthropic as a “supply chain risk” was both contrary to law and arbitrary and capricious.

“Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government,” the ruling stated.

The conflict began when the Department of Defense declared Anthropic a supply chain risk, a designation typically reserved for foreign entities. This label requires defense contractors to avoid using Anthropic’s technology. The AI company, in response, filed a lawsuit to challenge this designation, arguing that these actions could cause significant harm to its business.

The court held a virtual hearing earlier this week at which Anthropic and the Department of Defense each presented their cases. During the hearing, Lin questioned the U.S. government about its motives for labeling Anthropic a national security threat.

While the preliminary injunction provides temporary relief, the final resolution of the case may take several months.

“We’re grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits. While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI,” an Anthropic spokesperson told Benzinga in an email.

The Department of Defense did not provide a direct statement but referred to a post on X by Under Secretary of War Emil Michaels.

“There are dozens of factual errors in the 42 page judgment rushed out in 48 hours DURING A TIME OF CONFLICT that seeks to upend @POTUS’ role as Commander In Chief and disrupt the @SecWar’s full ability to conduct military operations with the partners it chooses. A disgrace,” Michaels wrote.

Photo: Shutterstock