More than 30 employees from Google, OpenAI and Google DeepMind have filed an amicus brief supporting Anthropic's lawsuit against the U.S. Department of Defense (DoD).
Participants in the brief include senior research engineer Zhengdong Wang, research scientist Alexander Matt Turner and senior research engineer Noah Siegel from Google DeepMind, along with technical staff member Gabriel Wu, researcher Pamela Mishkin, and research scientist Roman Novak from OpenAI, among several others. All signed in their personal capacities, with the brief noting that their views do not necessarily reflect those of their employers.
Anthropic has come under fire over the Pentagon's use of its AI models. Tensions stem from national security concerns and from the DoD under Trump seeking unrestricted military use of AI.
The amicus brief lists four arguments:
- The “Supply Chain Risk” designation is improper retaliation that harms the public interest. The brief argues that the DoD’s “Supply Chain Risk” designation is retaliatory and destabilizes the AI sector, “undermining American innovation and competitiveness.” The researchers contend that, if dissatisfied, the Pentagon could have simply purchased AI services elsewhere.
- The concerns underlying Anthropic’s “Red Lines” are real and require a response. The brief emphasizes that the U.S. lacks federal laws regulating military or intelligence use of AI domestically. “In the absence of public law, the contractual and technological requirements that AI developers impose on the use of their systems represent a vital safeguard against their catastrophic misuse,” the brief stated.
- Mass domestic surveillance powered by AI poses profound risks to democratic governance, even in responsible hands. “AI-enabled mass domestic surveillance, deployed without transparent legal constraints and independent oversight, removes those structural protections in ways that no amount of good faith can replace,” the researchers warned.
- Fully autonomous lethal weapon systems present risks that also must be addressed. The brief stressed that current AI models are “not reliable enough to bear the responsibility of making lethal targeting decisions entirely alone.” The researchers described this as a “technical judgement, not a political one.”
Amicus briefs — “friend of the court” filings — allow non-parties with a strong interest in a case’s outcome to provide additional expertise or arguments for judicial consideration.
The brief urges the court to grant Anthropic’s requested relief and concludes: “Until a legal framework exists to contain the risk of deploying frontier AI systems, the ethical commitments of AI developers and their willingness to defend those commitments publicly are not obstacles to good governance or innovation. They are contributions to it. The Court should say so.”