On Friday, OpenAI said it had uncovered a security problem tied to Axios, a third-party developer library, and moved to tighten how its macOS apps are verified so that impostor software can't masquerade as official releases.

The disclosure lands as physical security risks have also escalated around the company: allegations of a Molotov cocktail attack led San Francisco police to arrest a 20-year-old suspect after an early-morning incident near OpenAI CEO Sam Altman's home, and threats were reported near the firm's headquarters.

Reuters reported that OpenAI said it did not find signs that customer information was accessed, that its internal environment or intellectual property was breached, or that its codebase was modified.

In the San Francisco case, police said officers were called around 4:12 a.m. to a report of an incendiary device thrown at a residence. The suspect fled and was detained about an hour later, after another call reported a person threatening to set fire to a separate building.

What OpenAI’s Security Breach Reveals

As per the report, OpenAI is updating its signing credentials and requiring Mac users to upgrade to the latest application releases.

The company also set a deadline: starting May 8, older builds of its macOS desktop software are slated to lose updates and support, and could stop working.

That software-hardening push comes as OpenAI has been navigating criticism tied to a reported deal involving U.S. government use of its tools in classified military settings.

Altman, writing in a blog post after the firebomb allegation, said, “A lot of the criticism of our industry comes from sincere concern about the incredibly high stakes of this technology.”

How A Supply-Chain Attack Unfolded

OpenAI said Axios was tampered with on March 31 as part of a wider software supply-chain campaign that the company believes traces back to North Korea-linked actors.

The company said the compromise caused a GitHub Actions workflow to pull and run a malicious version of Axios, and that the workflow had access to the certificate and notarization materials used to sign its macOS apps.
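The attack pattern described, a CI workflow silently pulling in a tampered version of a dependency, is the kind of exposure teams typically reduce by pinning actions and packages to exact, immutable versions. The workflow below is a hypothetical sketch of that hardening, not OpenAI's actual configuration:

```yaml
# Hypothetical GitHub Actions workflow illustrating supply-chain hardening.
name: build
on: [push]

permissions:
  contents: read   # least privilege: the job gets no write or secrets access by default

jobs:
  build:
    runs-on: macos-latest
    steps:
      # Pin third-party actions to a full commit SHA rather than a mutable tag,
      # so a retagged release cannot change what the workflow runs.
      - uses: actions/checkout@<full-commit-sha>  # placeholder; pin to a real SHA
      - uses: actions/setup-node@<full-commit-sha>
        with:
          node-version: '20'
      # 'npm ci' installs exactly what the committed lockfile specifies and fails
      # on any mismatch, so a freshly published malicious package version is not
      # pulled in automatically the way a loose 'npm install' might allow.
      - run: npm ci
      - run: npm test
```

Pinning to a commit SHA matters because tags can be moved after the fact; lockfile-enforced installs close the same gap on the package side.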

The outlet reports that OpenAI's internal probe found the workflow's signing certificate most likely remained intact despite the attack.

OpenAI also said passwords and OpenAI API keys were not impacted.

In the San Francisco arrest, authorities said evidence ties the suspect to both the alleged Molotov incident and the later threats, and police reported no injuries.

Cybersecurity Enhancements Fuel Revenue Aspirations

The security push comes as OpenAI has set ambitious advertising-revenue targets, projecting $2.5 billion this year and aiming for $100 billion by 2030. The projections were presented to investors and highlight the company's strategy of leveraging its AI capabilities for ad matching, in a market dominated by giants like Google and Meta.

Additionally, OpenAI is reportedly finalizing a model with enhanced cybersecurity features under its "Trusted Access for Cyber" program, which it plans to deploy to a select group of companies. That emphasis on security is particularly relevant given the recent incidents surrounding the company.

Why Timely Response Is Crucial For Tech Firms

OpenAI confirmed it is cooperating with law enforcement in the Altman incident. A spokesperson told Reuters, "Thankfully, no one was hurt. We deeply appreciate how quickly SFPD responded and the support from the city in helping keep our employees safe."

Altman also urged a lower temperature in the debate around artificial intelligence, writing, "While we have that debate, we should de-escalate the rhetoric and tactics and try to have fewer explosions in fewer homes, figuratively and literally."

On the product side, OpenAI’s macOS update requirement effectively turns patching into a gatekeeper for app legitimacy, aiming to reduce the odds that a forged build can circulate with credible-looking signing.
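On macOS, the legitimacy of a signed and notarized app can be checked locally with Apple's built-in tools. A quick sketch, with an illustrative application path:

```shell
# Verify the code-signature chain of an installed app (path is illustrative)
codesign --verify --deep --strict --verbose=2 /Applications/ChatGPT.app

# Ask Gatekeeper whether the app passes its notarization/assessment policy
spctl --assess --type execute --verbose /Applications/ChatGPT.app
```

These checks only confirm that a signature chains to a trusted certificate and that notarization is valid, which is exactly why compromised signing infrastructure, as alleged here, is treated so seriously.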

The company framed the move as a preventive step tied to how its macOS apps are certified, rather than a response to confirmed theft of user data.