Powell, Bessent Warn Banks About Security Risks From Anthropic's Mythos AI: Bloomberg
Treasury and Federal Reserve officials reportedly alerted banks to cybersecurity risks tied to Anthropic’s advanced new Mythos AI model.
By Jason Nelson, edited by Andrew Hayward — Apr 10, 2026 — 3 min read
In brief
- U.S. officials warned major banks about cybersecurity risks tied to Anthropic’s Mythos AI model, Bloomberg reports.
- The system can reportedly identify and exploit vulnerabilities in operating systems and browsers.
- Anthropic has limited access to the model while it evaluates potential security risks.
U.S. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell reportedly convened a meeting with Wall Street bank CEOs earlier this week to warn about cybersecurity risks tied to a new artificial intelligence model from Anthropic.
According to a report by Bloomberg, the meeting included executives from Citigroup, Bank of America, Wells Fargo, Morgan Stanley, and Goldman Sachs. Officials discussed Anthropic’s new AI model Mythos, which has recently drawn broad concern over its apparent advanced cybersecurity capabilities.
Officials convened the meeting to ensure banks understand the risks posed by systems capable of identifying and exploiting software vulnerabilities across operating systems and web browsers, and to encourage institutions to strengthen defenses against potential AI-assisted cyberattacks targeting financial infrastructure.
Security researchers have warned that tools capable of automatically discovering vulnerabilities could accelerate both defensive security work and malicious hacking if misused.
Anthropic’s Mythos model first surfaced in March, when draft materials about the system leaked online, revealing what the company described as its most capable AI model yet. In testing, the system reportedly found thousands of previously unknown software vulnerabilities, including zero-day flaws across major operating systems and web browsers.
Anthropic researchers said in a report earlier this week that Mythos Preview’s vulnerability-discovery capabilities were not intentionally trained, but instead emerged from broader improvements in the model’s coding, reasoning, and autonomy.
“The same improvements that make the model substantially more effective at patching vulnerabilities also make it substantially more effective at exploiting them,” the firm wrote.
Because of those capabilities, Anthropic has restricted access to the model, limiting it to a small group of cybersecurity organizations.
“Given the strength of its capabilities, we’re being deliberate about how we release it,” Anthropic said in a statement. “As is standard practice across the industry, we’re working with a small group of early access customers to test the model. We consider this model a step change and the most capable we’ve built to date.”
To address that risk, Anthropic is testing Mythos through Project Glasswing, a collaboration with major technology and cybersecurity companies that uses the model to identify and patch vulnerabilities in critical software before attackers can exploit them.
“Project Glasswing is a starting point. No one organization can solve these cybersecurity problems alone,” the company said in a statement. “Frontier AI developers, other software companies, security researchers, open-source maintainers, and governments across the world all have essential roles to play.”
Anthropic did not immediately respond to Decrypt’s request for comment.