‘Claude Mythos’ so good at finding security cracks, it’s too dangerous to be released publicly
The leaders of some of America’s largest banks were warned by a top US government official this week about a new artificial intelligence model from Anthropic that could lead to heightened risks of cyberattacks, according to three people briefed on the matter.
Treasury Secretary Scott Bessent delivered the stark message on Tuesday morning to a small group of CEOs, including those from Bank of America, Citi and Wells Fargo, in a hastily arranged meeting in Washington, DC.
Bessent cautioned the banks that allowing the new AI software to run on their internal computer systems could pose a serious risk to sensitive customer data, said the sources, who spoke on condition of anonymity because they were not authorised to discuss the issue publicly.
Federal Reserve chairman Jerome Powell, who has spoken publicly in recent weeks about the threat of cyberattacks against the financial system, also attended the meeting.
The warnings relate to a new artificial intelligence model that Anthropic has named Claude Mythos Preview. The company has said the model is particularly good at identifying security vulnerabilities in software that human developers could not find.
At Tuesday’s meeting, the people briefed on the matter said, the bank executives were told that the new model might be so effective at finding security weaknesses inside banks that hackers or other third-party bad actors could get their hands on the information and exploit it.
Anthropic itself has warned about the risks. The company said this week that the model’s advancements were so powerful and potentially dangerous that they could not safely be released to the public yet and would instead be contained to a coalition of 40 companies that it called “Project Glasswing”.
That group includes at least one bank, JPMorgan Chase, the nation’s largest, which earlier said it would use the software “to evaluate next-generation AI tools for defensive cybersecurity across critical infrastructure”.
Jamie Dimon, the CEO of JPMorgan, was invited to Tuesday’s briefing but skipped it for previously arranged travel plans, according to a person familiar with the matter.
The Trump administration and Anthropic are locked in a legal battle over the recent move by the Department of Defense to designate the company a “supply chain risk”. The government issued that designation after Anthropic insisted on putting limits on the use of its AI technology in war.
In a statement, a Treasury spokesperson said, “This week’s meeting was convened by Secretary Bessent to initiate a process for planning and coordination of our approach to the rapid developments taking place in AI.”
The existence of the meeting was reported earlier by Bloomberg News. The Federal Reserve declined to comment.
“We’re taking every step we can to make sure that everybody is safe from these potential risks, including Anthropic agreeing to hold back the public release of the model until our officials have figured everything out,” Kevin Hassett, director of the National Economic Council, told Fox News on Friday. “There’s definitely a sense of urgency.”
Logan Graham, an Anthropic executive, said in a statement that the new technology would help “secure infrastructure that is critical for global security and economic stability”.
- This article originally appeared in The New York Times