Sam Altman basically did something we rarely see in tech leadership: he admitted his company screwed up. On Monday, the OpenAI CEO took to X to confess that the Pentagon deal his company struck on Friday was “opportunistic and sloppy.” Which is… not exactly the vibe you want when you’re talking about AI systems that could influence military operations.
But here’s what makes this whole saga actually interesting. It’s not really about OpenAI versus Anthropic, or even about whether AI should be used in warfare. It’s about the fact that nobody actually knows what they’re doing yet, and everyone’s pretending they do.
The Deal Nobody Liked
The Pentagon agreement popped up Friday and immediately felt wrong to people. So wrong, actually, that ChatGPT uninstalls jumped 295% over the weekend. That’s the kind of reaction that makes executives nervous. Meanwhile, Claude surged to the top of Apple’s App Store as users basically said “we’re taking our AI elsewhere, thanks.”
What was the actual problem? OpenAI’s original deal didn’t explicitly ban domestic surveillance of Americans. As written, it placed no restrictions on intelligence agencies like the NSA. It looked like a handshake deal made in a back room, which, to be fair, is kind of what it was.
Altman’s Monday post tried to fix this. The company promised to add language explicitly prohibiting surveillance of US citizens. NSA usage would need additional contract modifications. Basically, OpenAI said “yeah, we should’ve put this in writing the first time.”
When Anthropic Said No
The whole situation grew out of earlier drama between Anthropic and the Defense Department. Anthropic, OpenAI’s rival, had drawn what it called a “red-line” principle: no autonomous weapons. Full stop. The Trump administration didn’t appreciate that stance and reportedly blacklisted Claude from certain military uses.
Except, and this is where it gets weird, Claude ended up being used in the US-Israel war with Iran anyway. Almost immediately after the ban. So much for red lines.
The Pentagon declined to comment on any of this, which tells you everything you need to know about how transparent these arrangements actually are.
The Human in the Loop Problem
Palantir has been doing something related, though not identical. Its software brings together massive amounts of military data: satellite feeds, intelligence reports, and so on. AI systems like Claude then analyze it to help military planners make “faster, more efficient, and ultimately more lethal decisions where that’s appropriate.”
That phrase alone should make your skin crawl a little.
Palantir’s approach includes what they call “human in the loop” oversight. Lieutenant Colonel Amanda Gustave from NATO’s Task Force Maven emphasizes that AI would never make the final call on anything. A human always decides. Palantir doesn’t ban autonomous weapons outright, but says humans need to stay involved.
Here’s the problem, though: large language models hallucinate. They make things up. They’re confident even when they’re wrong. You can have the best “human in the loop” protocol imaginable, but if the AI confidently tells a military analyst that intelligence report X confirms threat Y, and it’s completely fabricated, what happens then?
The Real Issue Nobody’s Talking About
Professor Mariarosaria Taddeo from Oxford pointed out something crucial: “With Anthropic out of the Pentagon, the most safety-conscious actor was now out from the room.” That’s not a small thing. Having companies that actually take ethical constraints seriously involved in these discussions matters.
But what we’re really seeing is a regulatory vacuum. There are no clear rules for how AI should be used in military contexts. There’s no government body with real expertise overseeing these contracts. You’ve got private companies making real-time decisions about what’s acceptable while administrations change and redlines shift.
OpenAI admitted it rushed Friday’s announcement to “de-escalate” and “avoid a much worse outcome.” Translation: they were worried Anthropic was going to get all the defense contracts, so they wanted a seat at the table. Then it blew up in their face because the agreement looked exactly like what it was, a rushed deal without proper safeguards.
The fact that Altman called his own deal “opportunistic and sloppy” is refreshingly honest. But it also highlights something darker. If your first instinct is to move fast and patch problems later when it comes to military AI deployment, you’ve already lost the plot.
None of the big questions are being answered yet. Who actually decides what AI can and can’t do in war? Who holds companies accountable when things go wrong? And maybe most important: should we be letting commercial AI firms make decisions this significant at all, or is this a space where government needs to build its own capabilities?


