---
layout: post
title: "Anthropic vs OpenAI: The AI Safety Showdown Nobody Expected"
description: "Anthropic's CEO slams OpenAI's defense deal as 'safety theater' while the public votes with their feet—uninstalling ChatGPT en masse."
date: 2026-03-04 06:00:21 +0530
author: adam
image: 'https://images.unsplash.com/photo-1765707886613-f4961bbd07dd?q=80&w=988'
video_embed:
tags: [news, tech]
tags_color: '#00ba65'
---

There's nothing quite like watching two AI powerhouses throw down over military contracts and ethics in a memo that somehow ends up all over tech Twitter. Anthropic's Dario Amodei just went nuclear on OpenAI's Sam Altman, calling their Department of Defense deal "safety theater" and accusing Altman of straight-up lying about what the contract actually covers.

Let's back up. Last week, Anthropic and the DoD couldn't come to terms. Anthropic, which already had a $200 million military contract, wanted the Pentagon to promise not to use its AI for domestic mass surveillance or autonomous weapons. Pretty reasonable asks, honestly. But the DoD wanted unrestricted access to basically anything marked as "lawful use."

So what did OpenAI do? They said yes.

## The Fine Print Nobody Trusts

Here's where it gets spicy. OpenAI claims their contract actually includes the same protections Anthropic wanted. Altman basically said, "Don't worry, the law currently says mass surveillance is illegal, so we made sure that's excluded." Which sounds great until you remember that laws change. What's illegal today could theoretically be legal tomorrow.

Amodei was not impressed. In his memo to staff, he described OpenAI's messaging as gaslighting and called out how <a href="https://infeeds.com/tags/?tag=technology">technology</a> companies keep trying to have it both ways, appealing to both ethics-minded employees and profit-hungry shareholders simultaneously.

"The main reason they accepted this and we did not is that they cared about placating employees, and we actually cared about preventing abuses," Amodei wrote. You can feel the frustration radiating off those sentences.

## The Public Is Picking a Side

Here's what's wild: regular people actually care about this stuff. ChatGPT uninstalls jumped 295% after OpenAI made its defense deal public. That's not a rounding error or bot activity. That's real people making a choice.

Amodei noticed. In his staff memo, he seemed genuinely surprised that the attempted spin wasn't working. "I think this attempted spin/gaslighting is not working very well on the general public or the media, where people mostly see OpenAI's deal as sketchy or suspicious, and see us as the heroes," he wrote. Anthropic is now number two in the App Store.

The whole thing reveals something uncomfortable about <a href="https://infeeds.com/tags/?tag=business">business</a> in the AI era: you can't actually separate safety from PR anymore. Every decision gets parsed by everyone. Every line in a contract gets scrutinized. Every memo to staff eventually becomes public.

## What This Actually Means

Anthropic's stand might feel virtuous, but let's be real: it was also good business. They got to look like the principled player while OpenAI took the heat. Meanwhile, the Pentagon gets access to cutting-edge AI either way.

But here's the thing that actually matters: this blew up publicly, people voted with their wallets, and a CEO felt compelled to write a scorching internal memo. All of it suggests that the AI industry's relationship with military applications is still unsettled. Nobody's comfortable yet.

Which means this fight isn't over. It's just getting started.
