On Saturday, Caitlin Kalinowski said she resigned from OpenAI, arguing that potential uses of AI for warrantless monitoring of Americans and weapon systems operating without a human decision demanded more careful debate than they received. Her exit lands as OpenAI expands into classified Pentagon projects under an arrangement that kept two stated limits in place: no domestic mass surveillance and a requirement for human control over any use of force.

In a post on X, Kalinowski said she still believes AI can matter for national security, but drew hard boundaries around domestic spying without court oversight and lethal autonomy without human authorization.

She also framed the resignation as a values call rather than a personal dispute, while saying she respects Sam Altman and remains proud of what her robotics group built.

“I resigned from OpenAI. I care deeply about the Robotics team and the work we built together. This wasn’t an easy call. AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got. This was about principle, not people. I have deep respect for Sam and the team, and I’m proud of what we built together,” she wrote in the post.

She shared the same message on LinkedIn as well.

Caitlin Kalinowski's Departure Sparks Debate

Kalinowski’s concerns echo the same two fault lines now shaping how top AI labs negotiate with the U.S. national security apparatus: surveillance at home and autonomy in the use of force. In her post, she said those issues were not weighed with the level of deliberation she expected.

At the same time, Altman has described OpenAI's posture as shifting from avoiding classified engagements to taking them on with the Department of War, calling the shift urgent and more complex than earlier work. He also said OpenAI had previously passed on classified opportunities that rival lab Anthropic accepted.

OpenAI’s Pentagon arrangement, as described alongside Altman’s comments, kept two guardrails intact while adding operational measures such as putting OpenAI engineers on-site to watch model behavior and safety. Altman also said the company would build technical constraints meant to keep systems operating within expected limits, and that the Department of War wanted those protections as well.

AI Safety Negotiations Amid Pentagon Deal

The resignation comes in the wake of OpenAI's recent Pentagon deal, struck just hours after the Trump administration blacklisted rival Anthropic for refusing to remove similar safety clauses from its own agreement. Altman emphasized that the Department of War accepted two core principles: a prohibition on domestic mass surveillance and a requirement for human oversight in the use of force.

The episode highlights the two companies' contrasting approaches: Anthropic's refusal to adjust its terms led to a "supply chain risk" designation, while OpenAI negotiated terms it says align with existing U.S. law and policy. That split raises questions about the ethical implications of AI in national security, particularly around surveillance and autonomous weaponry.

The Pentagon Deal: A Critical Turning Point

When Anthropic refused to change its position, Defense Secretary Pete Hegseth labeled the company a supply chain risk, and President Donald Trump directed agencies and military contractors to cut ties. Anthropic said on Friday it was “deeply saddened,” called the designation “legally unsound,” and warned it would “set a dangerous precedent for any American company that negotiates with the government.”

Altman also said OpenAI negotiated so that comparable terms would be available to other AI developers, not only his own firm. Even so, the split outcome remains stark: OpenAI says it won acceptance of the two guardrails, while Anthropic ended up blacklisted despite describing similar red lines.

Altman said the Department of War viewed the principles as consistent with existing U.S. law and policy, and he cast OpenAI’s quick move as an attempt to avoid what he viewed as a dangerous competitive trajectory among AI labs. Kalinowski’s resignation, by contrast, spotlights how internal talent may react when the same boundaries are perceived as insufficiently examined.