On Sunday, Sam Altman said OpenAI has moved beyond limiting itself to unclassified projects and is now willing to take on classified work with the Department of War, describing the shift as urgent and far more complicated than earlier efforts.

The change comes after OpenAI reached an arrangement with the Pentagon that kept two guardrails in place: no domestic mass surveillance, and human control over any use of force.

In a post on X, Altman said the company had long planned to stick to non-classified engagements. He also said OpenAI had previously declined classified opportunities that Anthropic accepted.

OpenAI’s Bold Shift Towards Classified Projects

Altman said talks with the Department of War on non-classified work had been underway for many months, but that the classified track accelerated sharply during the week. He framed the decision as support for a mission he called critical, while arguing the government should not be outmuscled by private executives.

The Pentagon arrangement described alongside the announcement includes practical steps beyond policy language, including placing OpenAI engineers on-site to monitor model behavior and safety. Altman also said OpenAI will build technical controls intended to keep systems operating within expected bounds, and that the Department of War wanted those protections as well.

The timing matters because it landed within hours of a major break between Washington and a rival lab. The Trump administration blacklisted Anthropic after a dispute tied to the same two restrictions that OpenAI says the Pentagon accepted in its own deal.

How Will This Impact AI Competition?

Anthropic’s Claude had already reached classified military networks under a contract that could run up to $200 million, but the relationship deteriorated when the Pentagon pushed to delete contractual limits tied to surveillance of Americans and autonomous weapons use. The department said it needed freedom to deploy the system for all lawful uses, even while stating it had not sought those contested applications.

When Anthropic wouldn’t budge, Defense Secretary Pete Hegseth tagged the company as a supply chain risk, and President Donald Trump directed federal agencies and military contractors to sever ties with it. Anthropic responded Friday that it was “deeply saddened” and said it would fight the designation in court, calling it “legally unsound” and warning it would “set a dangerous precedent for any American company that negotiates with the government.”

Altman said the rush on OpenAI’s side was meant to defuse what he viewed as a dangerous trajectory for Anthropic, for competition among AI labs, and for the U.S. as a whole. In the same post, he also said OpenAI negotiated so that comparable terms would be available to other AI developers, not just his company.

Gen Z’s Entrepreneurial Shift Disrupts Tech Landscape

The announcement also follows Altman’s earlier comments admiring the entrepreneurial spirit of Gen Z college dropouts; at the DevDay conference he said he is “envious of the current generation of 20-year-old dropouts.” His remarks point to a growing trend of young entrepreneurs prioritizing innovation and startup culture over traditional education paths, a sentiment echoed by other tech leaders such as Mark Zuckerberg.

That shift in mindset could shape the competitive landscape for AI, as more emerging companies focus on practical applications and rapid iteration, reflecting a broader change in how talent is developed and deployed in the tech industry.

Two Key Safety Conditions Behind Pentagon Deal

The two conditions at the center of OpenAI’s Pentagon work mirror the lines Anthropic says it drew: a ban on domestic mass surveillance and a requirement that humans retain control over decisions involving force, including autonomous weapons systems. Altman also said the Department of War viewed those principles as consistent with existing U.S. law and policy.

Even with similar stated red lines, the outcomes diverged sharply—OpenAI says it secured acceptance of the guardrails, while Anthropic ended up blacklisted. The unresolved question is what OpenAI agreed to that Anthropic didn’t, given that both sides publicly described nearly identical constraints.