WhatsApp just rolled out something that sounds great on the surface: a private chat mode with its AI where even Meta can’t see what you’re saying. No logs. No storage. Conversations vanish into the digital ether once you close them. According to BBC reporting, WhatsApp head Will Cathcart framed this as a response to what users actually want, saying people feel uncomfortable sharing personal information with tech companies yet still want answers on sensitive topics like health, relationships, and finances.
It’s easy to understand the appeal. We live in an age where every keystroke feels like it’s being catalogued somewhere, sold to someone, or fed into a machine learning model training your replacement. The promise of true privacy with an AI feels almost radical.
But here’s where it gets complicated.
The Accountability Problem Nobody’s Talking About
According to BBC reporting, Prof Alan Woodward of the University of Surrey flagged a serious concern: what happens when the AI gives you genuinely dangerous advice and there’s no record of it? If someone asks Meta AI about their depression and gets harmful guidance that contributes to a crisis, how would anyone ever know? The chat history doesn’t exist anymore. It’s gone. Neither the user nor Meta can retrieve it.
This isn’t theoretical. According to BBC reporting, companies like OpenAI and Google have already faced wrongful death lawsuits. Those cases typically hinge on evidence of what the AI said and how it might have influenced outcomes. With incognito mode, that evidence simply won’t exist.
Woodward put it plainly to the BBC: “You are placing a great deal of trust in the AI not to lead users astray.” That’s a lot of trust to place in a system operated by a company whose core business model is advertising and engagement, not necessarily user safety.
Why Meta Is Doing This Now
Meta’s financial situation provides useful context here. According to investment analyst Susannah Streeter, the company is planning to spend $145 billion on AI infrastructure in 2026, and investors are watching closely to see returns on that staggering investment. Meta needs AI adoption at scale. It needs billions of people using these tools regularly, across Instagram, Facebook, Messenger, and now WhatsApp.
Incognito mode removes a genuine friction point that’s been holding people back. When Meta added AI to WhatsApp last year, users complained loudly about not being able to turn it off. Now there’s a way to use it without feeling surveilled. It’s a smart move for adoption numbers.
But here’s the thing: smoother user adoption doesn’t solve the accountability problem. It potentially makes it worse.
The Technical Side Isn’t the Issue
Cathcart described this as equivalent to end-to-end encryption, though the underlying technology is different. According to BBC reporting, Woodward said there’s a low risk of this second system compromising WhatsApp’s existing security. That’s the good news: the technical execution doesn’t appear to be the weak link here.
What concerns Woodward and other security experts is the policy gap. Meta AI will use guardrails to try to refuse obviously harmful requests, but with disappearing messages there’s no paper trail if those guardrails fail. There’s no way to audit whether an AI misdirected someone on a critical health decision or encouraged self-harm.
The lack of accountability isn’t a technical problem. It’s a governance problem.
The Bigger Picture on AI and Business
What’s happening at Meta mirrors a larger tension in AI development. Companies want to move fast, deploy at scale, and show investors impressive user numbers. Privacy advocates want safeguards and transparency. Regulators are still trying to figure out what safeguards even look like.
Incognito mode for AI conversations feels like a win for privacy on the surface. For individual users asking personal questions, it probably is. But it also creates a zone where the AI operates without any outside visibility or oversight. That’s comfortable for the user in one sense and genuinely worrying in another.
Meta owns WhatsApp, Instagram, Facebook, and Messenger. It’s blocked other AI chatbots from accessing WhatsApp, meaning billions of people on that platform can only interact with Meta’s own AI. That’s enormous leverage over conversations about deeply personal subjects.
Incognito mode means those conversations become invisible not only to outside scrutiny but also to Meta itself. It’s privacy through obscurity, not privacy through architecture. And if something goes wrong, good luck proving it ever happened.