There’s a video making the rounds on TikTok and Instagram that shows young men in balaclavas sliding into a filthy swimming pool, rubbish floating on the surface. The caption claims it’s a taxpayer-funded water park in Croydon, South London. It looks convincing enough that thousands of people believe it actually exists. There’s just one problem: it’s completely fake, generated entirely by artificial intelligence.
This video is just one example of a disturbing trend. Welcome to the world of “decline porn” – a new genre of fabricated content that portrays Western cities as crumbling, crime-ridden, and overrun with immigrants. These videos are racking up millions of views across social platforms, and they’re doing real damage to how people perceive their own neighbourhoods.
The Architect Behind the Deepfakes
The person behind many of these Croydon videos goes by RadialB online. He’s in his twenties, from the north-west of England, and has never actually set foot in Croydon. When pressed about his intentions, he’s surprisingly candid.
“If people saw it and they immediately knew it was fake, then they would just scroll,” he told the BBC. He’s essentially admitting that the entire point is deception. The goal isn’t humour – or rather, it’s humour that requires people to believe a lie first.
RadialB frames these videos as absurdist art, something meant to be funny and provocative. But here’s where things get murky. He deliberately chooses prompts like “roadmen wearing puffer jackets, tracksuits, and balaclavas” because they’re the “funniest” characters to generate. The AI then produces videos almost exclusively featuring young Black men in stereotypical situations. He claims this wasn’t intentional, that he wasn’t trying to make the people depicted a certain race or ethnicity. Yet the results speak for themselves.
When confronted about the racist comments these videos inspire, he shrugs it off. Social media platforms filter out the worst abuse, he says, so it’s not really his problem. This is classic responsibility laundering – strike the match, walk away, and insist the flames aren’t your doing.
How a Single Video Spawns an Industry
What’s most concerning isn’t RadialB’s content itself. It’s what happened next. Dozens of copycat accounts started producing similar videos, collectively accumulating millions of views. Some are run by users in Israel, Brazil, and the Middle East – places with no connection to Croydon or even the UK. They’re sharing these videos purely for engagement and the chance to monetise them on platforms like Facebook.
The technology has made this astonishingly easy. AI image and video generation tools have improved so dramatically that anyone can now create convincing deepfakes in their bedroom. RadialB himself described the latest tools as a “huge jump” in quality and availability that “hugely lowers the barrier for entry” for anyone wanting to make fake content.
What we’re witnessing is the democratisation of misinformation. You don’t need filming equipment, actors, or locations anymore. You just need a computer and the willingness to lie.
The Broader Narrative
This trend didn’t emerge in a vacuum. It’s part of a much larger narrative ecosystem that’s been building for years. YouTubers like Kurt Caz have built audiences of millions by posting travel videos with alarmist titles like “Attacked by thieves in Barcelona!” and “Threatened in the UK’s worst town!” Some of these creators have been caught doctoring thumbnails with AI to make cities look worse than they actually are. One thumbnail showed a man wearing a balaclava in front of Arabic shop signs, yet in the actual video, the signs were in English and he wasn’t wearing a balaclava at all.
These narratives have also been amplified by high-profile figures with massive platforms. Elon Musk, with over 230 million followers on X, regularly posts about what he calls “a destruction of Britain” due to “massive uncontrolled migration.” Whether intentionally or not, creators like RadialB are feeding into a broader political project that’s already being pushed by influential billionaires and far-right activists.
The Reality Check Nobody’s Reading
Here’s what the data actually shows about London. A YouGov poll found that while a majority of Britons believe London is unsafe, only a third of people surveyed in the capital agreed. Even more telling: 81% of Londoners said their own local area was safe. The perception of urban decay far outpaces the reality.
But when people are scrolling through their feeds and seeing dozens of videos showing degraded infrastructure, filthy public facilities, and dangerous streets, how can they not be swayed? The videos look real because the technology is sophisticated enough to make them look real. Comments sections fill with outrage. Political talking points about immigration and crime get reinforced. Actual residents of these neighbourhoods, like one TikToker from Croydon called C.Tino, find themselves having to defend their homes against a fiction.
“These videos are making people think this is real life. It’s becoming out of hand now,” C.Tino said.
The Accountability Vacuum
What’s stunning about RadialB’s approach is how casually he dismisses the consequences of his work. His original account got banned for graphic content, but he just set up a new one posting the same material. OpenAI said his account activity didn’t meet the threshold to report to authorities. The platforms have policies against racist content and deepfakes, yet the videos remain live and continue spreading.
The creator, meanwhile, is clearly aware of the political reactions his content provokes. He sees 50- and 60-year-olds in the comments “raging and saying all this political stuff”, yet frames the reaction as irony or misunderstanding. It’s a convenient way to have your cake and eat it too – create deliberately deceptive content designed to look real and provoke outrage, then claim you were only joking when called out.
What Comes Next
The question now is whether the platforms and the AI companies behind these tools are going to do anything meaningful about this. So far, the answer seems to be no. RadialB is still creating content. Copycat accounts are multiplying. The videos are still racking up millions of views.
This is a critical moment. The technology exists now. The templates for using it to spread disinformation exist now. The audiences primed to believe these narratives exist now. If nothing changes, we’re looking at a future where the line between reality and fabrication becomes so blurred that we can’t trust what we see online anymore. And if that happens, the real cities and real people living in them will continue to suffer the consequences of fictions that most people can’t quite identify as fakes.


