There’s a new trend sweeping TikTok and Instagram that should genuinely concern us. It’s not a dance, a filter, or even a meme. It’s something far more insidious: AI-generated videos designed to look like real footage of Western cities crumbling under the weight of immigration and crime. These deepfakes are racking up millions of views, spawning copycat accounts, and sparking racist backlash from viewers who can’t tell the fake from the real.
Welcome to “decline porn.”
The trend exploded with a video of a crowd of young Black men in balaclavas sliding down a filthy water slide in Croydon, supposedly a “taxpayer-funded water park.” It’s absurd. It’s meant to be. But that’s exactly the problem. By wrapping fictional dystopias in just enough plausibility, these videos bypass our critical thinking and go straight into our feeds as supposed evidence of societal collapse.
Who’s Behind This Mess?
The BBC tracked down the creator behind the Croydon videos, a TikTok user called RadialB. He’s in his 20s, from the northwest of England, and has never even been to Croydon. That detail alone tells you everything you need to know about the authenticity of what’s being spread across the internet.
When asked about his motivations, RadialB is remarkably candid. He doesn’t deny that his content provokes racist reactions in the comments. He just shrugs it off. The platforms filter out the worst of it, he says, so it’s not really his problem. He frames the videos as satire, as jokes designed to game the recommendation algorithm. He wants people to think they’re real, even if they’re not, because “if people saw it and they immediately knew it was fake, then they would just scroll.”
That’s the implicit bargain of modern content creation: virality requires deception, and deception requires blurring the line between satire and reality until nobody knows where the joke ends and the actual harm begins.
RadialB claims he didn’t intend for his “roadmen” characters to be a particular race. He just used prompts about puffer jackets and balaclavas because, in his view, that makes them the “funniest” characters. The fact that AI models trained on real-world data consistently generated images of young Black men? That’s apparently just a coincidence. Or maybe it’s not. Maybe it’s the point, and he’s just unwilling to admit it.
The Algorithmic Wildfire
Here’s where things get really worrying. Once RadialB’s videos started getting traction, copycat accounts popped up everywhere. Users from Israel to Brazil started re-sharing them, sometimes translating them into Arabic and other languages. Some accounts even posed as British news outlets, sharing nothing but these AI-generated videos and other negative content about UK and US cities.
The barrier to entry for this kind of technology is lower than ever. AI video generators have become so good, so accessible, and so easy to use that anyone with a laptop can manufacture convincing fake footage in minutes. RadialB himself acknowledges this: “It hugely lowers the barrier for entry for anyone who wants to make fake stuff.”
What started as one person’s idea has become a full ecosystem of grifters, engagement farmers, and international accounts all profiting off the same divisive narratives. They’re not creating the decline porn trend. They’re scaling it. They’re monetizing it. They’re weaponizing it.
The Real Problem Isn’t New, But It’s Getting Sharper
The narrative that Western cities are falling apart because of immigrants and crime isn’t new. What’s changed is the delivery mechanism. You used to need credentials to spread these ideas. You needed to be a journalist, a politician, or have some other platform. Now you just need an AI tool and a willingness to lie.
Look at Kurt Caz, a South African YouTuber with four million subscribers who posts travel videos with sensationalized titles like “Attacked by thieves in Barcelona!” He recently got called out for allegedly using AI to doctor a thumbnail showing a man in a balaclava in front of Arabic shop signs in Croydon. Watch the actual video, though, and the shop signs are in English, the man (a passing cyclist) isn’t wearing a balaclava, and Caz gives him a thumbs-up after a friendly conversation. The thumbnail was fabricated to match the narrative. When criticized, Caz dismissed it as “clickbait.”
Even high-profile figures are playing this game. Elon Musk, who owns X and has over 230 million followers there, regularly posts about “massive uncontrolled migration” destroying Britain. He spoke at a far-right activist’s rally about the “destruction of Britain.” His influence legitimizes the same narratives that deepfake creators are pumping into the algorithm.
The Data Doesn’t Match the Narrative
Here’s the thing that really gets you: the actual data doesn’t support these narratives. YouGov polling shows that a majority of Britons believe London is unsafe. But when they asked actual Londoners? Only a third agreed. And 81% of Londoners said their own local area felt safe.
So the fear is real. The belief in decline is widespread. But it’s largely divorced from lived experience. It’s been constructed by curated videos, selective framing, and now outright fabrication.
RadialB says he never intended to become a “decline porn” influencer. He just wanted to make people laugh by gaming the algorithm. He appears genuinely unbothered by how his content gets weaponized for political narratives. His original TikTok account was banned for graphic content, so he simply set up a new one doing the exact same thing.
OpenAI, meanwhile, investigated his account activity and decided it didn’t meet the threshold for flagging to authorities. So the content continues. The copycats continue. The engagement continues. The narrative spreads.
What Happens When Fiction Becomes Evidence?
The real damage here isn’t from individual videos. It’s from the cumulative effect of a constant stream of these narratives poisoning public discourse. Real people are making decisions based on fake videos. They’re forming opinions about neighborhoods they’ve never visited based on AI-generated hallucinations. They’re voting, arguing, and taking action based on digital fantasies presented as reality.
One Black TikTok user from Croydon, C.Tino, tried to push back. He pointed out that these videos falsely portray the area as a “ghetto,” and warned that people are starting to believe this is real life. But his voice is just one against millions of views.
The shame around posting fakes has completely evaporated. There’s no social cost anymore. RadialB can cheerfully admit his intent to deceive while claiming he bears no responsibility for the consequences. Creators can spread lies to millions while brushing off criticism as people not understanding the joke. Influencers can doctor footage and call it clickbait.
What happens when our information ecosystem becomes so flooded with convincing fake content that we can’t tell the difference anymore?