AI Giants Are Hiring Weapons Experts. Should We Be Terrified?

Anthropic just posted a job listing that sounds like it belongs in a spy thriller, not a tech company’s careers page. They’re hunting for someone with at least five years of experience in chemical weapons and explosives defense. Oh, and knowledge of dirty bombs would be nice too.

The official reason? They want to prevent their AI from being weaponized. Makes sense on the surface. But here’s where it gets weird.

The Job Nobody Wants to Think About

OpenAI is doing the exact same thing, except they’re dangling a salary of up to $455,000 to lure someone in. That’s nearly double what Anthropic is offering. This isn’t some fringe operation either. Both companies are treating this like any other specialized hire.

The logic is straightforward enough: if you’re going to build guardrails around AI systems that could theoretically explain how to build a nuclear weapon, you probably need someone who actually understands nuclear weapons. Someone who can look at a response and think, “Yep, that would actually work,” and then make sure the AI never says it again.

But here’s the problem nobody seems comfortable discussing in polite tech circles.

The Catch That Keeps Experts Up at Night

Dr. Stephanie Hare, a technology researcher speaking on the BBC’s AI Decoded program, raised the question everyone’s avoiding: “Is it ever safe to use AI systems to handle sensitive chemicals and explosives information, including dirty bombs and other radiological weapons?”

Think about that for a second. You’re feeding weapons information into an AI system specifically so it can learn to recognize and block weapons information. The AI is literally absorbing this data. Even with the best intentions and the most robust guardrails, you’re creating a system that knows how to make terrible things.

It’s like hiring a burglar to teach your security team how burglars work, then hoping they forget everything on the way out.

The Government Got Involved, and Things Got Messy

This is where technology policy stops being abstract and starts getting uncomfortable. The US Department of Defense labeled Anthropic a “supply chain risk” after the company refused to let its systems be used for autonomous weapons or mass surveillance.

That’s the same designation they gave to Huawei. Think about that one for a minute.

Anthropic co-founder Dario Amodei wrote back in February that the technology simply isn’t ready for military applications. Fair point. But then the White House responded, essentially saying it doesn’t care what tech companies think: the military will do what it wants anyway.

Meanwhile, OpenAI played it differently. They said they agreed with Anthropic’s position in principle, then went ahead and negotiated their own government contract anyway. The contract hasn’t started yet, but it’s there, waiting.

The Irony Nobody’s Talking About

Claude, Anthropic’s AI assistant, is already embedded in Palantir systems and being deployed by the US military in the Iran conflict. So here’s a company fighting the government over military use of its AI while its AI is actively being used by the military.

This isn’t some theoretical debate happening in academic papers. It’s happening right now, deployed in active military operations, and there’s no international treaty or regulation governing any of it. Just companies making judgment calls and governments doing what they want anyway.

The tech industry has spent years warning about existential risks from AI. Meanwhile, it’s building systems that can describe how to make weapons, hiring experts to make sure those systems can recognize weapons information, and then watching as governments deploy them in actual wars.

There’s a question buried in all this that matters more than the hiring decisions or the government contracts: if we’re at the point where we need weapons experts embedded in AI companies to stop our AI from weaponizing itself, have we already crossed a line we shouldn’t have crossed in the first place?
