Starting Monday, the UK government is asking a pretty big question: should we just ban social media for anyone under 16? It sounds radical, but honestly, it’s not coming out of nowhere. This is the kind of conversation that’s been building for years, and now it’s finally hitting the mainstream in a serious way.
Technology Secretary Liz Kendall is framing this as a chance to help young people “thrive in an age of rapid technological change.” But let’s be real – this consultation isn’t just about letting the government stroke its chin thoughtfully. It’s a response to mounting pressure from politicians, charities, and the public who are genuinely worried about what social media is doing to kids.
Why Now? Australia Got There First
Australia threw down the gauntlet last year when it became the first country to actually ban under-16s from Instagram, Snapchat, YouTube, and TikTok. Spain is planning to follow suit. And suddenly, what seemed impossible a few years ago doesn’t look so fringe anymore.
The momentum is real. Over 60 Labour MPs have backed the ban. The House of Lords already voted in favour. Even Conservative leader Kemi Badenoch has said her party would introduce it if they got into power. This isn’t some niche talking point anymore – it’s got cross-party support.
The push has been fuelled partly by tragic cases like Molly Russell, a 14-year-old who took her own life in 2017 after viewing self-harm and suicide content on Instagram. Her family’s foundation is welcoming this consultation as a crucial opportunity to tighten things up.
But Not Everyone’s on Board
Here’s where it gets complicated. While some charities like the Molly Rose Foundation see this as essential, others aren’t convinced a blanket ban is the answer. The NSPCC and several other organisations have warned that banning social media could actually create “unintended consequences.”
Their concern? That pushing kids off mainstream platforms could just drive them to less regulated corners of the internet, where there’s even less oversight. Rather than going nuclear with a total ban, they’re pushing for stronger enforcement of existing safety rules.
Sonia Livingstone, a professor of social psychology at the London School of Economics, put it well: “What everyone wants to see is better safety from Big Tech companies, and then children could express themselves and connect online as they want to.” That’s the tension right there – safety versus freedom.
What’s Actually Being Consulted On?
The government isn’t just asking “should we ban it?” They want input on less dramatic interventions too. There’ll be pilots running alongside this to test different approaches and gather “real-world evidence” about what actually works.
The consultation itself is being tailored – different versions for young people and for parents and carers to make it actually accessible. The government is planning community events with MPs, influencers, and schools to spark a wider public debate. They’re literally inviting anyone with an opinion: parents, teachers, academics, civil society organisations, and industry types.
You’ve got until May 26 to have your say, and the government plans to respond in the summer. An academic panel will also be reviewing the growing evidence, including what’s happening in Australia right now.
The Bigger Picture
This whole debate sits within a broader reckoning with how social media companies operate. In February, the EU told TikTok it needed to ditch its “addictive design” or face heavy fines. Meanwhile, a landmark trial is underway in California examining whether Instagram and YouTube are messing with kids’ mental health.
Technology companies have been given a relatively free pass for years to optimize their platforms for engagement without worrying too much about the psychological impact on young users. That era feels like it’s ending.
The real question isn’t whether we should have this conversation – we absolutely should. It’s whether a ban is actually the solution, or whether the real work lies in forcing platforms to design responsibly and enforcing the rules we already have.
What does protecting children online actually look like when the same tools that connect them to friends and learning can also expose them to harm?