Sometimes the best way to understand technology companies is to watch what they fight over and who suddenly wants it.
Anthropic just announced Mythos, a new AI model designed to spot security vulnerabilities. Nothing unusual there. Except that, according to Bloomberg's reporting, Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell called a meeting with bank executives this week specifically to encourage them to use it. JPMorgan Chase was the initial partner, but Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley are now reportedly testing it too.
Here’s where it gets interesting. Anthropic is actively limiting access to Mythos. Not because it’s weak or unreliable, but allegedly because it’s too good at finding security holes. Anthropic even trained the model for purposes beyond cybersecurity, and that broader training somehow made it better at catching vulnerabilities anyway. The company itself acknowledged this paradox in its announcement. Some observers have suggested the scarcity angle is hype or, more cynically, a smart play to drive enterprise demand.
Either way, the optics matter here.
The Legal Elephant in the Room
Anthropic is currently in court with the Trump administration. The Department of Defense has designated the company a supply-chain risk, a move that came after the two sides couldn’t reach agreement on how Anthropic’s models could be used by the government. Anthropic wanted guardrails; the administration apparently wanted broader access.
So now we have a situation where top government officials are actively recruiting financial institutions to adopt an Anthropic product while the same administration is suing the company over access restrictions. That’s not just a mixed signal. That’s a contradiction playing out in real time.
The Financial Times adds another wrinkle: U.K. financial regulators are also concerned about the risks Mythos poses, suggesting this isn’t just a U.S. business story but a broader governance question about what happens when powerful security tools proliferate.
What’s Actually Happening Here
The most charitable read is that different branches of government are acting independently. Treasury and the Fed care about financial system security. The DoD cares about government procurement and security protocols. They might just have different priorities.
The less charitable read is that there’s pressure on Anthropic to be cooperative while litigation plays out, and access to a genuinely useful security tool is being used as leverage or incentive.
What we can say for certain: Anthropic built something powerful enough that senior government officials felt compelled to personally encourage its adoption. And the company built guardrails strict enough that those same officials apparently can’t get what they want without negotiation.
The real question is whether limiting access to AI security tools in the name of safety serves anyone well when the tools end up in private bank servers anyway. That’s the conversation nobody seems willing to have directly.