Politics · 1 hr ago

Naratis AI Polling Claims 10‑Fold Speed and Cost Gains with Near‑Human Accuracy

French startup Naratis says its AI can run qualitative polls ten times faster and cheaper, achieving 90% of human accuracy amid falling survey response rates.

Nadia Okafor · 3 min read · GB

Political Correspondent

A group discussion

Source: BBC

TL;DR: Naratis promises AI‑driven qualitative polls that are ten times faster, ten times cheaper, and 90% as accurate as traditional human interviews.

Context

Survey response rates have collapsed from over 30% in the 1990s to under 5% today, driving up costs and eroding confidence in poll results. At the same time, AI tools are reshaping market research, prompting firms to experiment with automated interviewers.

Key Facts

- Naratis founder Pierre Fontaine describes a system in which respondents converse with a young, female‑voiced AI instead of ticking boxes. The AI probes how opinions form, tracks shifts, and flags non‑human respondents.
- The company claims its method is *ten times faster* and *ten times cheaper* than conventional qualitative research, delivering results within 24 hours for many projects.
- Accuracy is positioned at *90% of human‑level* performance, meaning the AI's insights should match most human interview outcomes while cutting labour costs.
- Parallelisation, running many AI interviewers at once, replaces the one‑by‑one approach of human interviewers and enables near‑real‑time reaction to political events.
- Critics note past polling failures, but Fontaine argues those errors stem from quantitative forecasts, not the deeper, exploratory focus of qualitative work.

What It Means

If Naratis delivers on its promises, political consultants could obtain rapid, cost‑effective insight into voter sentiment, potentially offsetting the decline in traditional survey participation. Faster turnaround may allow campaigns to test messaging within hours of a news break, a capability previously limited to expensive focus groups.

However, reliance on AI introduces new risks. Machine‑generated responses can "hallucinate", fabricating plausible but false statements, while bias in training data may skew results. Established firms such as Ipsos and OpinionWay already use AI for data analysis but stop short of publishing AI‑only polls, citing trust concerns.

The industry now faces a trade‑off: embrace AI's speed and depth, or guard against its opacity. As AI interviewers become more common, regulators and pollsters will need standards to verify that the 90% accuracy claim holds across diverse populations.

Watch next: early adopters' field tests, for discrepancies between AI‑generated insights and traditional focus groups, and any regulatory guidance on AI‑driven political polling.

