Federal District Judge Rita Lin in San Francisco has given Anthropic until 6 p.m. PST Tuesday to provide a declaration stating that multiple government agencies have terminated or ceased use of Anthropic’s Claude model following the U.S. Department of Defense’s designation of the company as a national security risk.
During the live virtual court hearing, it was noted that the Office of Personnel Management, the Nuclear Regulatory Commission and several other agencies had terminated their use of Anthropic technology.
Attorneys for the DOD have until 6 p.m. tomorrow to provide counter-evidence to this claim.
Anthropic asked Lin to temporarily halt the DOD decision to blacklist its AI model, Claude, as well as President Donald Trump’s order prohibiting federal agencies from using the technology, CNBC reported.
The artificial intelligence company stated that the blacklist is causing “severe, immediate and irreparable financial and reputational harm to the company.”
The DOD continues to argue that Anthropic poses a national security supply chain risk. The artificial intelligence company filed a lawsuit against the DOD earlier this month, escalating its ongoing clash with the Trump administration.
Last week, in the ongoing lawsuit, the DOD flagged new national security risks tied to Anthropic’s hiring of foreign personnel, including workers from China.
Anthropic employs “a large number of foreign nationals to build and support its LLM products, including many from the People’s Republic of China (PRC), which increases the degree of adversarial risk should those employees comply with the PRC’s National Intelligence Law,” the court filing stated.
More than 30 employees from Google, OpenAI and Google DeepMind filed an amicus brief supporting Anthropic’s lawsuit against the DOD.
The DOD’s attorneys have asked the court to deny the preliminary injunction on the basis that a private company should not be allowed to make any decisions regarding how military missions are conducted.
“Anthropic revealed itself to be an untrustworthy and unreliable partner in recent negotiations,” said Eric Hamilton, an attorney representing the DOD in the case.
Hamilton further argued that if the court grants injunctive relief to Anthropic, it should immediately pause that order while an appeal is pursued. At a minimum, he requested a seven-day temporary stay to allow time to seek relief from a higher court, adding that Anthropic does not oppose this.
Hamilton also urged the court to rule on the stay request at the same time it issues any injunction, and emphasized that any order should make clear the government is not obligated to continue using Anthropic’s services and may end the relationship.
Anthropic is requesting to return to the status quo of the morning of Feb. 27 and is asking the court to enter a preliminary injunction. (The date is significant because it marks the deadline from which the lawsuit stems, when Anthropic was required to remove safety restrictions on its AI model, Claude, for use by the DOD in autonomous weapons and domestic surveillance.)
“There’s a range of lawful actions that the defendants could have taken. What they can’t do is engage in unconstitutional retaliation for our protected speech. They can’t impose an immediate prospective debarment of Anthropic for all future government contracting that is not supported by any lawful executive authority,” the attorney for Anthropic stated.
Lin stated that she is taking the matter under submission and expects to issue a ruling in the next few days.
Anthropic did not respond to a request for comment. The DOD stated that it does “not comment on pending litigation.”