Anthropic is having a very strange week. The Pentagon hit it with a “supply-chain risk” designation typically reserved for foreign adversaries, one that could essentially lock the company out of government contracts. But then the Treasury Secretary and the White House Chief of Staff met with Anthropic CEO Dario Amodei, described the meeting as “productive and constructive,” and talked about collaboration.
So what’s actually going on here?
According to reporting from Axios, the Pentagon and the rest of the Trump administration appear to be in direct conflict over whether Anthropic should be trusted. That’s an important distinction because it suggests this isn’t a unified government position. It’s a fight within the government itself.
The Pentagon vs. Everyone Else
The original dispute centered on something fairly principled on Anthropic’s part. The company wanted to maintain safeguards around its technology, specifically pushing back against its use in fully autonomous weapons and in mass domestic surveillance. When negotiations failed, the Pentagon responded with the supply-chain risk label.
That move was aggressive enough that Anthropic is now challenging it in court. But here’s where the story gets interesting: an administration source told Axios that “every agency” except the Department of Defense wants to use Anthropic’s technology. Not some agencies. Not most agencies. Every other one.
This is either a sign that the Pentagon is being unnecessarily restrictive, or a symptom of a deeper strategic disagreement about how the administration should approach AI companies. Probably both are true at once.
The Quiet Thawing
Before the Friday meeting, there were already signs that not everyone in the administration wanted to freeze Anthropic out. Reports suggested Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell had been encouraging major banks to test Anthropic’s new Mythos model. That’s not the action of people who want the company isolated.
Anthropic co-founder Jack Clark also signaled that the company doesn’t see the Pentagon dispute as existential. He called it a “narrow contracting dispute” that wouldn’t prevent Anthropic from briefing the government about its latest models. Translation: we haven’t burned our bridges with you, and we don’t think you want to burn yours with us.
The Friday meeting seemed to confirm Clark’s reading. The White House statement specifically mentioned “shared approaches and protocols to address the challenges associated with scaling this technology,” which is diplomatic language for “we want to work with you on this.” Anthropic’s own statement doubled down on collaboration, mentioning cybersecurity, America’s lead in the AI race, and AI safety as shared priorities.
What This Really Means
The optics here are fascinating. Anthropic took a stand on limiting military applications of its technology, got punished for it by the Pentagon, and is now building relationships with nearly every other powerful institution in government. That’s not a loss. That’s leverage of a different kind.
The company isn’t winning over the Defense Department, at least not yet. But it’s winning over Treasury, the White House, and seemingly the entire rest of the executive branch. That’s arguably better for a company that wanted safeguards in the first place. The Pentagon can’t credibly isolate Anthropic if nobody else in government thinks that’s acceptable.
This also creates pressure on the Pentagon to reconsider. If every other agency wants to use the technology and the White House is publicly discussing collaboration, the military’s supply-chain risk label starts to look like an outlier position rather than serious policy.
The question now is whether Anthropic and the administration can find a middle ground that lets the company operate with the safeguards it wants while still serving government priorities, or whether this turns into something messier. For a company that just wanted to say “we don’t want to build autonomous weapons,” Anthropic is surprisingly close to getting what it asked for.