So what does it do to democracy if we can predict the results of an election with 100% accuracy? Nate Silver’s predictions on the NY Times’ Fivethirtyeight.com election poll blog correctly called the results in all 50 states in this year’s US elections. In 2008, Mashable writes, he got only 49 out of 50 states right (Obama won Indiana by 0.1%). Here’s the side-by-side comparison Mashable showed us, in this tweet from interaction designer Michael Cosentino:

The ability to accurately predict the results of an election, even a relatively simple two-party election like the US one, is quite new. As recently as this summer, a blogger for The Economist writing under the name “M.D.” noted that forecasts in general are not very accurate, although “the 2008 election happened to be a good year for the forecast industry, with all 15 forecast models with which I am familiar, save one, predicting Barack Obama’s victory.”

Given Nate Silver’s results this year, I’m guessing that 2008 didn’t just “happen to be” a good year. What’s happening is that we’re getting very, very good at analysing big data. Also, more and more applicable data is available in a format that we can analyse – we’re using Twitter as well as traditional polls.
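To give a feel for what this kind of analysis involves at its very simplest, here is a toy sketch of poll aggregation. This is entirely hypothetical and far cruder than Silver’s actual model, which also weights polls by pollster quality, recency, and historical “house effects” – the point is only that pooling many noisy measurements shrinks the uncertainty.

```python
import math

def aggregate_polls(polls):
    """Combine polls into one estimate, weighting by sample size.

    Each poll is a (share_for_candidate_A, sample_size) pair.
    Returns (pooled share, approximate 95% margin of error).
    A deliberately naive sketch, not any real forecasting model.
    """
    total_n = sum(n for _, n in polls)
    # Sample-size-weighted average of the reported shares.
    pooled = sum(share * n for share, n in polls) / total_n
    # Standard error of a pooled proportion; 1.96 ~ 95% confidence.
    moe = 1.96 * math.sqrt(pooled * (1 - pooled) / total_n)
    return pooled, moe

# Hypothetical polls: (share for candidate A, number of respondents)
polls = [(0.52, 800), (0.49, 1200), (0.53, 600)]
share, moe = aggregate_polls(polls)
print(f"pooled estimate: {share:.3f} ± {moe:.3f}")
```

Note how the pooled margin of error is far smaller than any single poll’s: with enough data, the interval narrows until the forecast looks near-certain.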

Interestingly, a quick search on Google Scholar turned up plenty of articles discussing how to make election forecasts more accurate, but I didn’t find anything asking whether perfectly accurate election forecasts are something we really want. Nate Silver’s prediction victory has been reported in many news outlets (including Norwegian Dagbladet), but the only criticism of the model that I’ve seen questions its accuracy – please tell me there are people considering what it means for democracy.

What is the point of voting if we have 99.999% accurate predictions? Is voting an anachronism when we can simply analyse the population as a whole using astounding amounts of data? If predictions match election results perfectly, are they now unbiased? If we know that predictions are extremely accurate, does that change the way we vote, or the kinds of people who actually turn out to vote? Perhaps prediction software that included the whole population could even be more democratic than the current system of going to a physical place to vote, which has all kinds of built-in exclusions of certain voices.

But it is also easy to imagine a world where presidents are chosen by algorithms analysing the people’s sentiments and opinions. We’ve stopped using old-fashioned voting, because the software is so much more fair. But what happens when the algorithm is tweaked in favour of one of the candidates? So easy to do. Such profound consequences.
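How small could such a tweak be? A toy sketch (entirely hypothetical, not a description of any real system) makes the worry concrete: one invented `bias` parameter, buried in an otherwise plausible-looking sentiment aggregator, is enough to flip the declared winner.

```python
def predicted_winner(sentiment_scores, bias=0.0):
    """Toy 'election by algorithm': average the sentiment scores
    per candidate and declare the highest average the winner.

    `bias` is a hidden additive nudge for candidate "A" – the kind
    of quiet, one-line tweak the paragraph above worries about.
    Purely illustrative; not any real system or dataset.
    """
    averages = {cand: sum(scores) / len(scores)
                for cand, scores in sentiment_scores.items()}
    averages["A"] += bias  # the tweak: invisible in the output
    return max(averages, key=averages.get)

# Invented sentiment scores, deliberately close:
scores = {"A": [0.51, 0.48, 0.50], "B": [0.52, 0.49, 0.50]}
print(predicted_winner(scores))             # honest run → B
print(predicted_winner(scores, bias=0.02))  # one tweak  → A
```

A two-percentage-point nudge – well within ordinary polling noise – changes who “wins”, and nothing in the output reveals that it happened.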

