It’s almost poetic in its cruelty. Anthropic, the company that built its entire brand on being safety-first, finds itself blacklisted by the Trump administration for refusing to build surveillance tools and autonomous killing machines. The irony isn’t lost on anyone paying attention. But here’s the part that should keep AI executives up at night: this was entirely preventable.
Max Tegmark, the MIT physicist who founded the Future of Life Institute, laid it all out in a recent interview. His diagnosis is blunt and uncomfortable. The AI industry didn’t fail because it was too cautious or too idealistic. It failed because it did the opposite. These companies spent years making beautiful promises about safety and responsibility while actively lobbying against the very regulations that might have protected them.
“The road to hell is paved with good intentions,” Tegmark said, and he’s right. A decade ago, everyone was excited about AI curing cancer and strengthening America. Now we’re watching the government tear apart a company for not wanting its technology used to spy on Americans or power autonomous drones.
The Self-Inflicted Wound
Anthropic’s defense contract was worth up to $200 million. That’s real money, the kind that changes a company’s trajectory. But Dario Amodei, Anthropic’s co-founder and CEO, drew a line. No mass surveillance. No autonomous weapons without human control. For that principled stance, the company got blacklisted, and the Pentagon contract was severed in an instant.
The government could do this precisely because there are no laws stopping it from asking in the first place.
Think about that for a second. We have more regulation around opening a sandwich shop than we do around building artificial general intelligence. Tegmark’s sandwich shop analogy is worth revisiting: “If the health inspector finds 15 rats in the kitchen, he won’t let you sell any sandwiches until you fix it. But if you say ‘I’m going to release superintelligence which might overthrow the U.S. government, but I have a good feeling about mine,’ the inspector has to say ‘Fine, go ahead.’”
The absurdity is staggering. Yet this didn’t happen by accident. Anthropic, OpenAI, Google DeepMind, and xAI all made the same strategic choice: resist regulation, promise self-governance, and lobby for a light-touch approach. They had an opportunity years ago to convert their voluntary commitments into actual law. They didn’t take it.
The Hypocrisy Nobody Can Ignore
Here’s where it gets uncomfortable for everyone involved. These companies have spent years marketing themselves as responsible actors. Google dropped “Don’t be evil.” OpenAI removed “safety” from its mission statement. xAI shut down its entire safety team. And Anthropic, just this week, abandoned its core safety promise not to release powerful AI systems until it was confident they wouldn’t cause harm.
The pattern is too obvious to ignore. When safety actually costs something, when it means saying no to a lucrative government contract or a new funding round, the rhetoric evaporates. The marketing holds up for a while, but eventually something gives.
Tegmark calls it “sowing the seeds of their own predicament.” He’s not wrong. These companies resisted regulation specifically so they wouldn’t face external constraints. Now that the government has decided to constrain them anyway, they have absolutely nothing to stand on. No legal protections. No industry standards. No agreed-upon boundaries embedded in law.
What Happens When Nobody’s in Charge
This is where technology policy gets dangerous. Without clear rules, power fills the vacuum, and right now the Trump administration is demonstrating exactly how much power a government can wield over a tech company when there’s no regulatory framework to push back against.
Sam Altman at OpenAI came out publicly supporting Anthropic’s position, which took some courage. He said OpenAI has the same red lines. But Google stayed silent. xAI stayed silent. That silence is deafening, and it tells you everything about where corporate interests actually lie when tested.
The “China threat” argument has been the go-to excuse for this entire mess. Whenever someone suggests regulation, the business lobby immediately responds: but what about Beijing? We’ll fall behind. We need to move fast. Tegmark’s pushback here is worth taking seriously. China is actually banning AI girlfriends. Xi Jinping, a man nobody would describe as soft on corporate overreach, is not going to tolerate an AI company building something that could threaten his government.
The real threat isn’t losing a race to China. It’s building something nobody knows how to control and hoping it doesn’t decide humans are expendable.
How Close Are We, Really?
Six years ago, most AI experts thought human-level language AI was decades away. They were wrong by decades; it arrived within a few years. GPT-4 is already at 27 percent toward AGI by the rigorous definition Tegmark helped create. GPT-5 jumped to 57 percent. The trajectory is non-linear and accelerating.
Tegmark told his MIT students yesterday that if AGI arrives in four years, they probably won’t have jobs to graduate into. He’s not trying to be alarmist. He’s looking at the data and telling people to prepare.
The window for getting this right is closing. And it’s not because the technology is outpacing regulation anymore. It’s because the companies that should have been advocating for smart regulation were too busy lobbying for the opposite.
The Path Not Taken
There’s actually a version of this story where things work out fine. If AI companies had taken their own safety rhetoric seriously, collaborated with one another, and asked the government to turn voluntary commitments into law, they’d be in a completely different position right now. They could still build amazing things. But they’d have to prove they understood how to control them first, much as clinical trials require for medicine.
Instead, we have a corporate amnesia that makes the tobacco and asbestos industries look like paragons of virtue. And now the bill is coming due in real time.
Anthropic’s blacklisting wasn’t about government overreach destroying a good company. It was the inevitable consequence of an entire industry choosing short-term freedom over long-term stability. The government didn’t create this mess. The companies did. They just spent years denying it while building something that might actually be worth regulating.
So the question everyone should be asking isn’t what happens to Anthropic next. It’s which other companies will follow them into the void now that the government knows exactly where the leverage points are.