There’s a particular type of viral video doing the rounds on TikTok and Instagram right now that doesn’t show anything real. A water park in Croydon with a filthy pool. A zoo that doesn’t exist. An aquarium that never was. They’re all AI-generated, all designed to look just real enough that people actually believe them.
The scary part? They’re working. Millions of views. Thousands of shares. And most importantly, they’re making people angry. The kind of angry that gets weaponized into actual political backlash.
Welcome to “decline porn”: the newest frontier of online misinformation, where AI tools make it possible for anyone with a laptop to create convincing fakes of Western cities supposedly falling apart due to immigration and crime. It’s a trend that’s caught the attention of the BBC, and what they uncovered reveals something deeply troubling about how easily technology can be turned into a tool for spreading divisive narratives.
The Artist Behind the Chaos
The videos started with someone using the handle RadialB, a guy in his 20s from north-west England who claims he’s just trying to be funny. He’s never even been to Croydon, yet his videos about the neighbourhood have racked up millions of views.
When asked about his intentions, RadialB was remarkably candid. He wanted people to believe the videos were real, at least at first. That was the hook. The selling point of AI, he explained, is that it looks real. If people immediately knew it was fake, they’d just scroll past. So he made them graphic, made them absurd, made them believable enough that you’d stop.
“I don’t deny it,” he said when confronted about the racism in the comments. He noted that the platforms filter out the worst of it, but that doesn’t erase the fact that his content generates racist engagement. When pressed on why these scenarios always feature “roadmen” (a term he describes as a cultural archetype), he argued he was just using prompts about puffer jackets and balaclavas because it made things “funny.” The racial element, he insisted, wasn’t intentional.
But intent and impact aren’t the same thing, are they?
When Satire Becomes Ammunition
Here’s where it gets complicated. RadialB claims some of his content is satirical, that he’s making fun of “English politics,” which he describes as a “parasitic cesspit.” Some of the comments praising his videos, or raging at them, might even be ironic. But when you’re talking about millions of views, the distinction between satire and sincere anger blurs into irrelevance.
Real people in Croydon noticed. A TikTok user called C.Tino pushed back, saying the trend was falsely portraying his neighbourhood as “ghetto” and that people were actually starting to believe this was real life. He wasn’t wrong. In January, YouGov found that a majority of Britons now think London is unsafe, even though only a third of the Londoners it surveyed agreed with that assessment.
The math doesn’t add up unless you factor in the echo-chamber effect of viral videos, fake news, and deepfakes, which together blur the line between reality and fiction so thoroughly that many people can no longer tell the difference.
The Infrastructure of Misinformation
What’s remarkable is how quickly this trend scaled. RadialB didn’t invent decline porn; he just turbocharged it. Other creators saw the engagement numbers and started making their own versions. Some were in Israel, others in Brazil. Several accounts posed as British news outlets, sharing only AI videos or other negative content about Western cities.
YouTuber Kurt Caz, who has over four million subscribers, got caught doctoring a thumbnail to make London look more dangerous. It showed a man in a balaclava in front of Arabic shop signs, but the actual video showed English signs and a friendly interaction. When called out, Caz dismissed it as “clickbait” and suggested critics weren’t being thorough enough in their “hit piece.”
This is technology at a crossroads. The tools that allow creators to make compelling content also allow them to manufacture false realities at scale. The barrier to entry for creating convincing deepfakes has collapsed. AI tools are cheap, accessible, and getting better by the week.
The Bigger Picture
None of this happens in a vacuum. These videos fit neatly into an existing ecosystem of content pushing the narrative that Western cities are in decline due to immigration and crime. Sometimes these videos show real problems like homelessness or graffiti, but strip away context. Increasingly, though, they just fabricate the evidence entirely.
And these narratives have been picked up by some genuinely influential people. Elon Musk, who owns X and has 230 million followers, regularly posts about “uncontrolled migration” destroying Britain. He spoke at Tommy Robinson’s Unite the Kingdom rally, lending his megaphone to ideas that live in the space between concern about legitimate policy issues and outright xenophobia.
When Elon posts about decline, millions see it. When RadialB posts an AI video, millions see it. When thousands of copycats remix and reshare, tens of millions see it. The sheer volume creates its own truth, regardless of whether any of it is real.
What Now?
RadialB’s original TikTok account got banned for inappropriate content, but he just started a new one. OpenAI said his activity didn’t meet the threshold to report to authorities. The platforms have rules against racist abuse and requirements to label AI-generated content, but enforcement remains scattered and reactive. You can flag a video today and see ten more tomorrow.
The question isn’t really whether these videos will disappear. They won’t. The question is whether we’ve built a society where we can still distinguish between what’s real and what’s manufactured for engagement, or whether we’ve already lost that ability.