Meta's New Parental Spy Tool Won't Fix Its Child Safety Problem

Meta just rolled out a new feature that lets parents spy on what their teenagers ask AI chatbots on Facebook, Instagram, and Messenger. It’s called AI Insights, and the company is marketing it as a win for child safety. But a closer look reveals something more troubling: Meta is essentially handing parents a surveillance tool while simultaneously ducking responsibility for actually moderating its platforms.

The feature, now available in the US, UK, Australia, Canada, and Brazil with global rollout coming soon, shows parents what topics their kids have discussed with Meta AI over the past week. School, entertainment, health, travel, writing, lifestyle. The categories sound innocent enough. There’s even an alert system if teens ask about suicide or self-harm on Instagram, which Meta added back in February.

On the surface, this looks like progress. Parents get visibility. Teens get some guardrails. Everyone wins, right?

Not really.

The Surveillance Shift

What Meta is doing here is clever from a corporate optics standpoint but fundamentally backwards as policy. The company is shifting responsibility for content moderation directly onto parents’ shoulders while framing it as empowerment.

According to reporting from CNET, Ardath Whynacht, an associate professor in sociology at Mount Allison University who specializes in mental health and family violence, put it bluntly: “Parental surveillance is not content moderation. As companies like Meta do less content moderation, they expose children and youth to harm more frequently. It shouldn’t be the parent’s job to make the product less harmful.”

That’s the core issue. Meta has billions of dollars, armies of engineers, and no shortage of resources. Yet instead of building safer systems from the ground up, the company is asking parents to become full-time content monitors for their kids. It’s not a feature. It’s an abdication.

The problem gets worse when you consider who this surveillance actually hurts most.

Who Really Gets Hurt

Queer and trans youth are particularly vulnerable here. Many of these young people turn to digital spaces to find community and support they can’t access in their physical environments. A feature that amplifies parental surveillance could force them out of those spaces entirely, pushing them into less monitored corners of the internet where actual predators operate.

“Fear of parental surveillance might force children into even more unsafe corners of the web,” Whynacht told CNET. And she noted something darker: “It’s a sad fact that kids often need protection from their parents as much as they need protection from harms online.”

This is where the discussion gets uncomfortable for companies like Meta. Not all families are safe. Not all parents have their kids’ best interests in mind. By creating better surveillance tools, Meta isn’t necessarily creating safer environments. It might be creating better instruments of control.

Meta’s Track Record Speaks for Itself

It’s hard to separate this announcement from Meta’s broader pattern of legal setbacks and accusations. The company was ordered to pay $375 million after being found liable in a child exploitation case. A California jury also found it liable in a case brought by a woman who alleged that Instagram and YouTube were deliberately designed to be addictive to children. More than 40 US states filed lawsuits against Meta in 2023, claiming the company deliberately tried to addict children to its apps and contributed to a youth mental health crisis.

Against that backdrop, AI Insights reads less like genuine concern for child safety and more like PR management.

Donna Rice Hughes, CEO of the child safety organization Enough is Enough, told CNET that while the feature is “a step in the right direction,” it’s not nearly enough. She also pointed out that Meta lobbied to kill the Kids Online Safety Act in the US House in 2024, which tells you something about where the company’s actual priorities lie.

“Meta cannot be trusted when it comes to teen safety and continues to put profits over safety,” Hughes said.

What Actually Needs to Happen

Hughes is right that parents should use whatever tools are available, including Meta’s new Insights feature. Conversations between parents and kids about online safety matter. But relying on parents alone to keep tech companies honest is a losing game.

The real solution requires Meta and other tech giants to do the hard work of building genuinely safer products, not just better surveillance infrastructure. It means stronger content moderation at scale, design changes that don’t exploit psychological vulnerabilities, and actual regulatory oversight with teeth.

Parents simply can’t continue to shoulder this burden alone. And they shouldn’t have to.

The question isn’t whether Meta’s new feature will help some parents feel more in control. It probably will. The real question is whether it will make teens actually safer, or just give a trillion-dollar company another tool to pass the buck while claiming to care about the kids it profits from daily.

Written by

Adam Makins

I’m a published writer, brand copywriter, photographer, and social media content creator and manager. I help brands connect with their customers by developing engaging content that entertains, educates, and offers value to their audience.