AI is Automating Opinion Polls. That Should Worry Us.

Imagine calling a number and hearing a young woman’s voice ask you about politicians. Her questions are thoughtful, her follow-ups are probing. She sounds human. But she isn’t. She’s a string of code, one of three AI agents already analysing your every word before you finish speaking.

This isn’t science fiction. According to BBC reporting, it’s happening right now at Naratis, a French startup that’s attempting to remake opinion polling from the ground up.

The polling industry is in crisis. Response rates have cratered from over 30% in the 1990s to below 5% today, according to AI consultant Stéphane Le Brun. Fewer people answering means higher costs, less representative data, and a public that trusts polls less with each passing election cycle. Something had to give.

Enter technology that moves fast. Naratis, founded in 2025 by 28-year-old engineer Pierre Fontaine, claims to have solved the problem by automating qualitative research, the slowest and most expensive corner of polling. What once took weeks and tens of thousands of euros can now happen in a day or two. The company says its method is “10 times faster, 10 times cheaper and 90% as accurate as human polling.”

That’s a bold pitch. And it raises an uncomfortable question: what are we actually measuring anymore?

When Speed Becomes the Problem

The appeal is obvious. Traditional qualitative studies involve recruiting paid respondents, scheduling interviews, then waiting weeks for analysis. It’s expensive and it’s slow. Naratis replaces human interviewers with conversational AI that can conduct thousands of interviews simultaneously. No scheduling conflicts. No fatigue. No bias from the interviewer’s tone or body language.

Fontaine makes a fair point about one kind of bias. People sometimes lie to other humans, especially about sensitive topics. A machine might get more candid answers. In France, polling has historically underestimated far-right support, presumably because voters felt uncomfortable admitting it to another person. An AI agent might solve that problem.

But here’s where it gets thorny. The speed and scale come from something Fontaine calls “parallelisation”: multiple AI agents working at once. That’s efficient. It’s also fundamentally different from what polling used to do.

When you have a human interviewer talking to a respondent, you get texture. You get moments where someone contradicts themselves and has to think harder. You get context. You get nuance. An AI agent, by contrast, follows a script that’s been designed to extract data. It’s incredibly good at that job. It’s just not clear it’s doing the same job at all.

The Hallucination Problem

Then there’s the small matter of AI systems simply making things up.

AI doesn’t understand the world the way humans do. It generates text based on statistical patterns in training data. Sometimes those patterns lead it to invent plausible-sounding answers that are completely false. In polling, that’s catastrophic. You’re supposed to be measuring what people think, not what a neural network guesses people probably think.

There’s another risk, equally pernicious. AI tends to produce “common sense” responses that reflect what people usually think about a topic. It’s trained on mountains of internet text, much of which represents conventional wisdom. Ask it about politicians and it might regurgitate the most predictable criticism rather than capturing what this particular person actually feels. That’s the opposite of what polling is supposed to do.

These aren’t theoretical concerns. The polling industry has already failed spectacularly to predict major events. Brexit. Trump in 2016. Those failures were largely quantitative polling problems, Fontaine argues, not qualitative ones. But that’s a technicality. Confidence in polling is already low. Adding a layer of AI systems that can hallucinate seems unlikely to help.

The Synthetic Data Trap

This is where things get really interesting. Some firms aren’t just using AI to conduct interviews. They’re generating synthetic respondents entirely, using “digital twins” and AI-created profiles based on real-world patterns.

The idea has some merit in business research. You want to study a hard-to-reach group? Generate some synthetic versions based on the real people you did reach. But in political polling, the major firms are pumping the brakes.

Ipsos refuses to publish political polls based on AI-generated data. OpinionWay’s CEO Bruno Jeanbart was blunt: “We would never publish an opinion poll based on AI-generated data.” He cited concerns about trust, and he expects France may eventually prohibit the practice outright.

And he’s probably right to be cautious. If you’re generating responses rather than collecting them, what exactly are you measuring? You’re measuring your model of reality, not reality itself. You’re measuring what your training data thinks people are like. You’re measuring assumptions baked into your algorithm. That’s not polling. That’s fiction dressed up in statistics.

The Human Remains Essential

The most thoughtful voices in the industry acknowledge the limits. Le Brun put it clearly: “The goal is end-to-end automation, but today it would be unsafe and socially unacceptable to remove humans entirely.”

There’s still a need for human judgment. Someone has to decide what questions matter. Someone has to interpret what the data means. Someone has to take responsibility when the system fails. That human layer isn’t a bug in the system. It’s a feature that prevents the system from becoming something other than polling altogether.

The likely future is hybrid. AI will handle the mechanical work of conducting interviews at scale. It’ll process social media data and integrate new sources of information. It’ll make some analyses faster and cheaper. But in political polling especially, the boundary between augmenting human respondents and simulating them will matter enormously. That boundary is also the boundary between research and fiction.

Naratis and its competitors are betting that speed and scale are worth the risks. That conversations with machines can replace conversations with humans. That data collected by AI agents is basically the same as data collected by people. Maybe they’re right. But the polling industry has already lost public trust once. Whether AI restores that trust or destroys it for good depends on whether we use it to hear what people actually think, or just to confirm what we already believe we know.

Written by

Adam Makins

I’m a published content creator, brand copywriter, photographer, and social media manager. I help brands connect with their customers by developing engaging content that entertains, educates, and offers value to their audience.