OpenAI's Pentagon Deal Backfired Spectacularly, and Now Everyone's Switching to Claude

There’s a special kind of irony when the world’s most valuable AI company moves at lightning speed in all the wrong directions. OpenAI just learned that lesson the hard way after Sam Altman announced a Pentagon partnership that somehow managed to unite employees, consumers, and activists against the company at the exact same moment.

The deal itself wasn’t inherently controversial. Giving the Defense Department access to AI models for national security purposes sounds reasonable on paper. The problem wasn’t the idea; it was the execution and the optics. Altman rushed it through without the kind of deliberation a decision this loaded actually deserves.

When Your Competitor Does It Better by Saying No

Anthropic’s CEO Dario Amodei took a different path. When the Pentagon came knocking with similar requests, he said no. Not because he’s anti-military or anti-security, but because his company wanted assurances that the technology wouldn’t enable autonomous weapons or mass domestic surveillance. Pretty reasonable red lines, honestly.

“We cannot in good conscience accede to their request,” Amodei said at the time.

That stance aged beautifully. While OpenAI was scrambling to manage fallout, Anthropic looked like the adult in the room. And the market noticed immediately.

The Exodus Starts From Inside

Caitlin Kalinowski, a hardware executive who’d just joined OpenAI from Meta in 2024 to lead its robotics division, announced her resignation publicly on Saturday. She didn’t mince words. In a post on X, she wrote that surveillance without judicial oversight and lethal autonomy without human authorization “are lines that deserved more deliberation than they got.”

She wasn’t alone. Research scientist Aidan McLaughlin also went public, writing “i personally don’t think this deal was worth it.” Other employees told CNN they “really respect” what Anthropic did. Technical staff started asking for transparency, with some saying they’d push to terminate the contract if the safeguards turned out to be theater.

When nearly 900 current and former employees of OpenAI and Google sign a petition supporting your competitor and opposing your company’s direction, you’ve got a real problem on your hands.

The Users Vote With Their Thumbs

The rebellion wasn’t just internal. Users flooded Reddit and X with calls to “cancel ChatGPT,” and the movement picked up momentum fast. Uninstalls of ChatGPT spiked by more than 295% on February 28, the day after the deal dropped.

Within days, Claude had climbed to number one on the US Apple App Store’s most-downloaded free apps list. It stayed there. Claude also topped the productivity apps category, relegating ChatGPT to second place. Google’s Gemini and xAI’s Grok weren’t far behind, but the message was crystal clear: users were willing to switch.

Activists even showed up outside OpenAI’s Mission Bay headquarters in San Francisco calling for a “QuitGPT” movement. The anger went beyond just the Pentagon deal too, with some protesters voicing broader concerns about business practices and political connections.

Sam Altman Goes Into Damage Control

Altman recognized the hole he’d dug pretty quickly. On X, he acknowledged that the process “was definitely rushed, and the optics don’t look good.” That’s corporate speak for “we messed up the messaging and people are furious.”

He followed up with an internal memo on March 2 that got shared publicly. In it, he said OpenAI would revise the contract to include clearer safeguards preventing domestic surveillance. The original terms didn’t explicitly cover “commercially acquired” data, which was a glaring oversight. He admitted he “shouldn’t have rushed” the deal, saying “it just looked opportunistic and sloppy.”

Which, fair assessment.

The Political Response Was Predictable

California Democratic Rep. Sam Liccardo introduced an amendment to the Defense Production Act that would prohibit the Pentagon from retaliating against developers who institute safeguards on risky technology. It failed 16-25 in committee, but the message mattered: some lawmakers actually get why refusing a bad deal shouldn’t come with consequences.

Liccardo made the point brilliantly: “When the company that designs and builds the jet fighter tells us when to use the brakes, we should listen.” The Pentagon apparently disagreed. Democratic Sen. Brian Schatz of Hawaii even jumped on X to announce he’d downloaded Claude, which is a subtle but pointed way for a politician to take a side.

What This Actually Reveals

The real story here isn’t about whether AI should play a role in national security. It’s about how quickly a company can lose institutional trust by prioritizing speed over principle. OpenAI had employees, customers, and even political allies ready to defend meaningful national security applications of AI. What they wouldn’t defend was a rushed, opaque process that looked like profit-seeking dressed up in patriotism.

Anthropic didn’t win this round because they’re perfect. They won because they slowed down and drew clear lines. And then they actually stuck to them publicly.

Altman’s frantic corrections after the fact probably helped contain the damage, but the clock doesn’t rewind. Users who switched to Claude might stay there. Employees who watched their CEO move fast and break ethical guardrails have seen something they can’t unsee. Business momentum, it turns out, is easier to lose than to build.

The question now isn’t whether OpenAI will recover from this specific controversy. It’s whether the market will ever quite trust the company to slow down and think before the next major decision comes along.

Written by

Adam Makins