Would you trust an AI to tell you who to vote for?
When over 1,000 Dutch citizens were asked how likely they were to ask ChatGPT or another AI tool for advice on political candidates, parties, or election issues, the results were worrying:
- 77% said it’s unlikely they’d do so
- 13% sat on the fence (a cautious “maybe”)
- But 1 in 10 said they’re likely to ask AI for voting advice
If we extrapolate this to the Dutch population of roughly 18 million, that 1 in 10 translates to about 1.8 million citizens nationwide who could be asking an AI chatbot for voting advice. And given how people often underreport behaviors they think are “socially undesirable”, the real number could be even higher.
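For transparency, here is the back-of-the-envelope arithmetic behind that estimate, as a minimal sketch. The population figure (~17.9 million) and the assumption that the survey's 10% share applies uniformly are mine, not the survey's:

```python
# Back-of-the-envelope extrapolation of the survey result.
# Assumptions (not from the survey itself): a Dutch population of
# roughly 17.9 million, and the 10% "likely" share applying uniformly.
population = 17_900_000
likely_share = 0.10  # "1 in 10 said they're likely to ask AI for voting advice"

likely_users = population * likely_share
print(f"Estimated likely users: {likely_users:,.0f}")  # ~1,790,000
```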
This is what a survey fielded by the AlgoSoc Consortium (a collaboration of the University of Amsterdam, Utrecht University, TU Delft, Tilburg University, and Erasmus University Rotterdam) tells us. In the lead-up to the elections, they asked 1,220 Dutch citizens how likely they were to ask ChatGPT or another AI chatbot for advice on political candidates, parties, or election issues.

The data, drawn from the LISS Panel of Centerdata, revealed pronounced differences across age groups in the likelihood of using AI for advice. Among respondents who were likely to use AI, 42.5% were under 34 years old, compared to only 19.2% of those unlikely to use AI. Conversely, older age groups were more represented among those not likely to use AI: 52.9% were 55 or older, while this age group accounted for just 31% of likely users.
Why you shouldn’t ask chatbots for voting advice
The Dutch Data Protection Authority (AP) recently tested several chatbots against trusted tools like Kieskompas and StemWijzer. The results were clear: these chatbots are not ready to guide voters.
According to the AP’s October 2025 report, chatbots “surprisingly often” name the same two parties, PVV or GroenLinks-PvdA, as the top match, regardless of what the user asks. In more than 56% of cases, one of these two came out on top; for one chatbot, that figure was over 80%.
Meanwhile, parties like D66, VVD, SP, or PvdD barely appeared, and others (BBB, CDA, SGP, DENK) were practically invisible, even when the voter’s positions matched theirs perfectly.
In other words, these chatbots are biased and, as voting aids, unreliable:
“Chatbots may seem like clever tools, but as a voting aid they consistently fail. Voters may unknowingly be steered toward parties that don’t align with their views. This threatens the integrity of free and fair elections.”
- AP vice-chair Monique Verdier
AI chatbots can mislead late-deciding voters
Unlike official voting aids, chatbots are not designed for politics. Their answers come from models trained on messy, unverifiable data scraped from across the internet, including social media posts and outdated news.
That means they can:
- Mix up facts or misrepresent party positions.
- Reflect biases baked into their training data.
- And present all this as if it’s objective truth.
So while a chatbot might sound confident, it’s not necessarily correct.
This is worrying because voters often look for voting advice late in the campaign, when attention is high but time to reflect is short. Research shows that many people turn to tools like StemWijzer or Kieskompas just days before an election (Van de Pol et al., 2014). It’s reasonable to expect the same pattern for AI chatbots, except these tools aren’t designed, tested, or transparent enough to handle that responsibility.
So, if you’re on your way to the polls tomorrow… maybe don’t stop to ask ChatGPT, “Who should I vote for?”
What you can do instead
If you’re curious about which party aligns with your views, skip the chatbot detour and use verified, transparent sources:
🗳️ StemWijzer or Kieskompas: independent, evidence-based, and audited tools.
📰 Reputable journalistic sources that check their facts.
🗣️ Or go straight to the original sources: party manifestos and official websites.