The Pro-Human Declaration Takes On AI Without Waiting for Washington

Something genuinely strange happened last week. Steve Bannon and Susan Rice signed the same document. Mike Mullen put his name on it too. So did hundreds of other experts, former officials, and people who normally can’t agree on basically anything. The thing they all agreed on? That humanity needs a real plan for AI before it’s too late.

The Pro-Human Declaration landed right as the Pentagon was designating Anthropic a “supply chain risk” for refusing to hand over unlimited access to its technology. Hours later, OpenAI rushed into a defense deal with terms that legal experts quietly admitted were probably unenforceable. It was the kind of chaos that makes you realize Congress has been completely asleep at the wheel.

“There’s something quite remarkable that has happened in America just in the last four months,” Max Tegmark, the MIT physicist who helped organize the effort, told me. His data backs it up: 95% of Americans now oppose an unregulated race to superintelligence. That’s not usually how Americans feel about anything.

What the Declaration Actually Says

The document opens with a stark observation: humanity is at a fork in the road. One path leads to what they call “the race to replace,” where humans get pushed out first as workers, then as decision-makers, and power consolidates into the hands of unaccountable machines and the institutions running them. The other path builds AI that actually expands what humans can do.

Getting to that better future depends on five pillars. Keep humans in charge. Don’t concentrate power. Protect what makes human experience worth having. Preserve individual liberty. And hold AI companies legally accountable when things go wrong.

The teeth come in the provisions. An outright ban on superintelligence development until there’s actual scientific consensus it can be done safely and real democratic agreement from the public. Mandatory off-switches on powerful systems. A prohibition on architectures designed for self-replication, autonomous self-improvement, or resistance to shutdown.

These aren’t suggestions. They’re written as requirements.

Why Now, Why Child Safety

Tegmark reached for an analogy that stuck with me. “You never have to worry that some drug company is going to release some drug that causes massive harm before people have figured out how to make it safe,” he said, “because the FDA won’t allow them to release anything until it’s safe enough.”

We have that system for pharmaceuticals. We don’t have anything like it for technology that’s increasingly woven into daily life. The gap is starting to feel less like an oversight and more like a policy failure.

Tegmark thinks the crack in the ice comes through child safety. The declaration specifically calls for mandatory pre-deployment testing of AI products aimed at younger users. Testing for increased suicidal ideation. Testing for exacerbation of mental health conditions. Testing for emotional manipulation.

“If some creepy old man is texting an 11-year-old pretending to be a young girl and trying to persuade this boy to commit suicide, the guy can go to jail for that,” Tegmark said. “We already have laws. It’s illegal. So why is it different if a machine does it?”

It’s actually a smart pressure point. Child protection has historically been the one area where political coalitions hold. Once you establish the principle of pre-release testing for kids’ products, the scope tends to expand almost inevitably.

“People will come along and be like, let’s add a few other requirements. Maybe we should also test that this can’t help terrorists make bioweapons. Maybe we should test to make sure that superintelligence doesn’t have the ability to overthrow the U.S. government,” Tegmark explained.

The Real Collision

What makes this moment remarkable isn’t just the declaration itself. It’s the timing. The Pentagon standoff with Anthropic exposed something nobody was really talking about before: we have no coherent rules for AI governance. None. And the vacuum is being filled by corporate dealmaking and military pressure.

When Defense Secretary Pete Hegseth labeled an AI company a national security risk because it wouldn’t hand over unlimited access to its technology, it became clear this wasn’t about a contract dispute anymore. It was about control. Who gets to decide how powerful AI systems are deployed. Who controls the infrastructure. Who bears responsibility when things break.

OpenAI’s response, cutting its own deal with the Defense Department on terms that experts admitted would be hard to enforce, showed how quickly the race accelerates when government pressure arrives. That’s not governance. That’s panic.

The declaration exists in that gap. It’s what Congress should have produced months ago but didn’t. Now hundreds of people across the political spectrum have decided to do the work anyway.

“If it’s going to come down to whether we want a future for humans or a future for machines, of course they’re going to be on the same side,” Tegmark said about the unlikely coalition.

Which raises the uncomfortable question: how much longer can we pretend this is something Congress will handle on its own?

Written by

Adam Makins
