U.S. officials reportedly warned major banks about a powerful new artificial intelligence system that could expose critical cybersecurity weaknesses.
The alert came during a closed-door meeting in Washington between top regulators and banking executives, The New York Times reports, raising concerns about emerging AI-driven threats.
Government Officials Flag Rising AI Risks
Treasury Secretary Scott Bessent addressed leaders from major banks earlier this week. The meeting included executives from Bank of America Corporation (NYSE:BAC), Citigroup, Inc. (NYSE:C), and Wells Fargo & Company (NYSE:WFC).
Federal Reserve Chair Jerome H. Powell also attended the discussion. Officials focused on growing cyber risks tied to advanced artificial intelligence systems.
Authorities warned that new AI models could uncover software vulnerabilities faster than traditional security methods. This capability could create opportunities for malicious actors.
Anthropic Model Raises Security Concerns
The warnings centered on a new model from Anthropic called Claude Mythos Preview. The company said the system can detect hidden software flaws beyond human capability.
Officials cautioned that such tools could become dangerous if accessed by hackers. They stressed that sensitive financial data could face increased exposure risks.
Anthropic acknowledged these risks and limited access to the model. The company created a restricted initiative called “Project Glasswing” involving around 40 organizations.
Banks Begin Controlled Testing
JPMorgan Chase & Co. (NYSE:JPM) joined the initiative to test the model. The bank said it would evaluate AI tools for defensive cybersecurity applications.
CEO Jamie Dimon did not attend the meeting due to prior commitments. However, the bank remains involved in early-stage testing efforts.
Officials emphasized the urgency of addressing AI-related threats across financial systems. Kevin A. Hassett said, “We’re taking every step we can to make sure that everybody is safe from these potential risks, including Anthropic agreeing to hold back the public release of the model until our officials have figured everything out.”
Policy Tensions Add Complexity
The U.S. government and Anthropic are currently engaged in a legal dispute. The Defense Department labeled the company a “supply chain risk.”
This designation followed disagreements over restrictions on military use of AI technology. The situation highlights growing tensions between innovation and national security priorities.