OpenAI Just Sided With the Pentagon While Anthropic Took a Stand

There’s a moment happening right now in Silicon Valley that feels pretty pivotal. Sam Altman just announced that OpenAI has reached a deal with the Department of Defense to let the military use its AI models on classified networks. Sounds straightforward enough, right? Except it’s not. Because this announcement came right after a pretty dramatic standoff between the Pentagon and Anthropic, and the fallout tells you everything you need to know about how divided the AI industry actually is.

The Anthropic Pushback

Anthropic CEO Dario Amodei drew a line in the sand. His company wasn’t going to sign off on its AI being used for “all lawful purposes,” the blanket terms the Pentagon wanted. Instead, Anthropic pushed back on two things specifically: mass domestic surveillance and fully autonomous weapons systems. Pretty specific boundaries, right?

That position got support. Over 60 OpenAI employees and 300 Google employees signed an open letter backing Anthropic’s stance. Which is wild when you think about it. These are people working inside the very companies that might benefit from these military contracts, and they’re still saying “no thanks.”

But here’s where it gets ugly. Trump immediately fired back with a social media post calling Anthropic employees “Leftwing nut jobs.” Secretary of Defense Pete Hegseth went further, designating Anthropic as a supply-chain risk and effectively blacklisting the company from doing business with any military contractor. Six-month phase-out. Done.

OpenAI’s Different Path

Then Altman dropped his announcement about OpenAI’s deal. And here’s the thing that’s actually interesting: Altman claims OpenAI negotiated the same protections Anthropic was fighting for. He says the agreement includes prohibitions on domestic mass surveillance and maintains “human responsibility for the use of force, including for autonomous weapon systems.”

So why did it work for OpenAI and not Anthropic? The answer probably lives somewhere between diplomacy and business calculus. Altman told OpenAI employees that the government agreed to let the company build its own “safety stack” to prevent misuse. If the model refuses to do something, the government won’t force OpenAI to override that refusal.

That’s genuinely different from what the Pentagon was demanding earlier. Someone at OpenAI figured out how to say yes while maintaining some control over how the technology gets used. Anthropic said no with the same conditions on the table.

The Bigger Picture

What’s fascinating here is that this isn’t really about technology in isolation. It’s about how companies navigate power. Anthropic took a principled stand and got crushed by the federal government for it. OpenAI found a way to collaborate while claiming the same guardrails exist.

Neither approach is obviously right or wrong. Anthropic’s employees probably feel like they made a moral choice. OpenAI’s employees might feel like they found a pragmatic solution. But you have to wonder what happens when the government eventually wants something that contradicts those safety agreements.

The timing is also worth noting. Altman released his statement right before the U.S. and Israel started bombing Iran, with Trump calling for regime change. So we’re sitting here talking about AI safeguards while actual military operations are happening. The hypothetical has a way of becoming very real very quickly.

What started as a debate about principles turned into a business decision that will probably reshape how every other AI company negotiates with the government going forward. And the next CEO who takes Anthropic’s position will know exactly what they’re signing up for.

Written by

Adam Makins
