A Los Angeles jury just handed down a verdict that has Silicon Valley scrambling. Instagram and YouTube, two of the world’s most dominant platforms, were found to be deliberately engineered to be addictive. Worse, Meta and Google were ruled negligent in protecting children. The damages? $6 million to a young woman named Kaley, who suffered body dysmorphia, depression, and suicidal thoughts as a result.
This isn’t just another lawsuit. This is what some experts are calling big tech’s “big tobacco” moment.
For years, the tech industry has operated in what feels like a consequence-free zone. Sure, there’s been criticism. Documentaries. Congressional hearings. Angry parents. But actual legal accountability? That’s been rare. Now it’s here, and both Meta and Google are already preparing appeals because they know what this means.
The Era of Impunity Is Ending
Dr Mary Anne Franks of George Washington University summed it up perfectly: “the era of impunity is over.” It’s a bold claim, but she’s onto something. This case doesn’t exist in isolation. It’s part of a wave of litigation that’s about to hit the tech industry hard.
TikTok and Snap both settled before trial. Word in the tech sphere is that they couldn’t afford the legal fight. Meta and Google, meanwhile, racked up eye-watering legal bills defending themselves. That’s telling. If the most powerful companies in the world are sweating this hard, something fundamental is shifting.
The thing is, the defense arguments were weak. Meta claimed a single app can’t be solely responsible for a mental health crisis. Google insisted YouTube isn’t really a social network. The jury essentially said: nice try, but no. Your platforms were designed to hook people, especially kids, and you knew it.
Arturo Bejar, who worked at Instagram, went public with claims that he warned Mark Zuckerberg about these dangers years ago. Meta denied it. But whether you believe him or not, the verdict suggests the jury concluded the internal warnings were real. Someone knew. Someone cared enough to say something. And nothing changed.
What Actually Changes Now?
That’s the million-dollar question. Well, technically the six-million-dollar question.
The most obvious route is regulation modeled on what already exists elsewhere. Australia blocked under-16s from the biggest platforms in December. The UK is considering it. Other countries are watching. This verdict adds serious weight to those arguments. It’s not just concerned parents and campaigners anymore. It’s now a court-backed finding that these platforms are harmful to children.
Then there’s Section 230, the legal shield that has protected tech companies from liability for user-generated content. It’s often described as essential to the tech industry’s survival. But skepticism is growing. The Senate Commerce Committee held a hearing on it just this week. The political winds are shifting.
Could we see health warnings slapped on screens, the way they’re printed on cigarette packets? Restrictions on advertising? It’s possible. But here’s what really terrifies the tech industry: what if platforms are forced to strip out the very features that make them profitable?
The endless scroll. The algorithmic recommendations. The autoplay. The notifications that pull you back in. These aren’t bugs in the system; they’re the entire business model. The success of these platforms depends on keeping people online for as long as possible, generating as much engagement as possible, so they can serve more ads.
That’s it. That’s the whole game.
The Long Game With Kids
Here’s something that should worry us more than it does: in heavily regulated markets like the UK, children contribute relatively little to the advertising machine today. But today’s children are tomorrow’s adults. The ideal scenario for Meta and Google is that kids turn 18 already thoroughly addicted, already accustomed to the platforms, already willing to become ad targets.
Facebook, Meta’s original platform, has become a punchline as the “boomer platform.” Yet 2025 figures suggest nearly half of its worldwide users are aged 18-35. That’s no accident. That generation grew up on it. They were the test subjects.
This is why Kaley’s victory matters so much. There are more cases coming this year. More trials scheduled. More opportunities for courts to establish that yes, tech companies knew what they were doing, and yes, they’re responsible for the harm.
Ellen Roome, a British mother who lost her 14-year-old son Jools to what she believes was an online challenge, has been campaigning hard for change. Her position is simple: just ban it for kids now. Don’t wait for more studies. Don’t commission more reports. We know enough.
The UK Parliament is currently divided on this. The House of Lords and Commons are in “ping pong” over an amendment to the Children’s Wellbeing and Schools Bill that would give ministers a year to decide which platforms to ban for under-16s. A year. Meanwhile, more kids are using these apps every day.
The Real Question
The verdict won’t stop social media. People will keep scrolling. The companies will appeal. They’ll probably lose some, win some. But something has shifted in how courts and regulators view these platforms. Platform design isn’t just a neutral feature set anymore. It’s a series of choices that can carry real legal consequences.
You have to wonder: in twenty years, will we look back at this period and genuinely question why we ever let children loose on social media without any guardrails at all?