When Sam Altman announced OpenAI’s Pentagon deal on February 28, he probably thought he was securing a major win for the company. A high-profile government contract with the U.S. Department of Defense. A sign of legitimacy and trust. Instead, he walked straight into one of the messiest PR disasters in recent business history.
The backlash started immediately. Not just from activists or internet trolls, but from OpenAI’s own employees. From millions of users. From the company’s direct competitor, which had already said no.
When Your Rival Does the Right Thing First
Anthropic CEO Dario Amodei had rejected the exact same Pentagon deal days earlier. His reasoning was simple and direct: no autonomous weapons, no mass surveillance of Americans without judicial oversight. “We cannot in good conscience accede to their request,” he said.
That’s a powerful statement. Especially when your biggest competitor then turns around and takes the deal.
Caitlin Kalinowski, OpenAI’s newly hired hardware executive from Meta, resigned over the announcement. “AI has an important role in national security,” she wrote on X. “But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.”
She wasn’t alone. Aidan McLaughlin, a research scientist at OpenAI, posted that he personally didn’t think the deal was worth it. Another employee told CNN that many staffers “really respect” Anthropic for taking the harder path. Even before the deal, nearly 900 current and former employees from OpenAI and Google had signed a petition supporting Anthropic’s resistance to weaponized AI.
The message was clear: your own people don’t believe in what you just did.
The App Store Tells the Real Story
Here’s where it gets brutal. On February 28, ChatGPT uninstalls spiked by over 295%. Users flooded Reddit, Twitter, and every corner of the internet urging others to “cancel ChatGPT.” They didn’t just switch to another AI. They switched to Claude, Anthropic’s product.
By Monday, Claude hit number one on the U.S. Apple App Store’s free apps ranking. It stayed there through Saturday. It’s now also the top downloaded productivity app on the App Store, with ChatGPT and Google’s Gemini trailing behind.
That’s not just market movement. That’s a statement. Users voted with their downloads.
Meanwhile, activists gathered outside OpenAI’s Mission Bay headquarters in San Francisco calling for a “QuitGPT” movement. Some of the criticism went beyond the Pentagon deal itself, touching on broader concerns about AI companies’ relationships with power and politics.
Sam Altman’s Damage Control
To his credit, Altman didn’t ignore the problem. He fielded questions publicly on X the next day and admitted something important: “the process was definitely rushed, and the optics don’t look good.”
He followed up with an internal memo on March 2 (later shared publicly) acknowledging the mistakes. He said OpenAI would revise the contract to include clearer safeguards against mass domestic surveillance, and would add an explicit prohibition on using its technology on commercially acquired data, a gap the original terms left open.
“I shouldn’t have rushed to get the deal out,” Altman wrote. “It just looked opportunistic and sloppy.”
That’s a real admission. Most executives wouldn’t say that.
But here’s the thing about rushing and then apologizing: the damage was already done. You can’t un-ring that bell. The employees who resigned aren’t coming back because of a revised contract. The users who switched to Claude made their choice. The question now is whether OpenAI can rebuild trust or whether it has ceded the high ground to a competitor that was patient enough to say no.
The Political Angle
Rep. Sam Liccardo, a California Democrat, actually introduced an amendment to the Defense Production Act that would prohibit the Pentagon from retaliating against AI developers for instituting safeguards on risky technology. It failed 16-25 in the House Financial Services Committee. But his metaphor was perfect: “When the company that designs and builds the jet fighter tells us when to use the brakes, we should listen.”
The Pentagon didn’t listen. OpenAI, initially, didn’t either. Anthropic did.
What’s fascinating is that this wasn’t some abstract ethical debate. It was real people making real choices. Employees resigning. Users uninstalling apps. Politicians arguing for legal protections. A competitor suddenly leading the market not because they had better technology, but because they made a principled decision earlier.
The Pentagon’s negotiators probably thought they were just doing their job. OpenAI’s leadership probably thought this was a natural next step for a company that wanted to be taken seriously by government. But they misread the room. They misread their own employees. They definitely misread their users.
In a market where trust matters and where people have actual alternatives, being seen as “opportunistic and sloppy” about weapons and surveillance isn’t a small problem. It might be the kind of problem that takes years to recover from, if you can recover at all.


