Katie Warfield posted a link to this fascinating study of what people think they look like, or wish they looked like: more accurately, which of a series of photoshopped versions of a photograph of their own face they respond to most positively, as measured through their brainwaves. The image on the left is one of the original portraits, and on the right we see the manipulated version of the portrait that the subject felt most positive about: their “cerebrally sincere self-image,” according to the voice-over in the beautiful video created by photographer Scott Chasserot, who led the project.

Let’s stay with that term for a moment. “Cerebrally sincere.” It carries with it an idea that there is a deeper truth inside of us that we are not consciously aware of, but that technology can reveal. This is becoming an increasingly common idea, even ideology, in contemporary culture. Think of the British Airways Happiness Blanket, where customers flying in first class were given blankets that measured their brainwaves and lit up red or blue LEDs on the blanket to show flight attendants (and viewers of the commercial) whether they were happy or nervous. Or look at the work of the Dutch team of researchers who developed a tool to automatically log unconscious emotions by analysing physiological data. We need technology to measure our unconscious emotions, the researchers argue, because: “To offer capabilities that are superior to diaries, lifelogging applications should try to capture the complete experiences of people including data from both their external and internal worlds” (Ivonin et al., 2012).


At TEDxBergen last Saturday I gave a talk called “What Can’t We Measure in a Quantified World?” (a shortened version of chapter five of my new book, Seeing Ourselves Through Technology, which you can download for free from Amazon or the publisher), where I used José van Dijck’s very useful term dataism, from an article she published this spring in Surveillance & Society.

I think dataism describes something very important that we need to think carefully about as we develop technologies that are more and more able to gather vast quantities of data and to analyse that data. Today we are also seeing a shift in what kinds of data we collect. Data isn’t just objective information any more. It’s interpretations of our emotions, translated into numbers and graphs and rendered more reliable – more cerebrally sincere – than our own thoughts.


Others have written about dataism without using the word. Johanna Drucker proposed in 2011 that we call data capta rather than data, which would emphasise a constructivist approach: capta is taken from reality, while data is conceived as given, objective. In an article published last year, Annette Markham notes how the meaning of the term data “gradually shifted from a description of that which precedes argument to that which is pre-analytical and pre-semantic. Put differently, data is beyond argument. It always exists, no matter how it might be interpreted. Data has an incontrovertible ‘itness.’” In a study of lifeloggers published this year, Minna Ruckenstein noted that “Significantly, data visualizations were interpreted by research participants as more ‘factual’ or ‘credible’ insights into their daily lives than their subjective experiences. This intertwines with the deeply-rooted cultural notion that ‘seeing’ makes knowledge reliable and trustworthy.”

Back in 1973, Susan Sontag noted something similar of photography. She wrote, “Photographed images do not seem to be statements about the world so much as pieces of it, miniatures of reality that anyone can make or acquire” (On Photography, page 4). I think the ease of filtering and photoshopping photographs today makes us far less susceptible to this delusion than we were in the 1970s, before we had smartphones with cameras and image editing software in our pockets. But our general literacy about data is at about the same stage as our photography literacy was in the 1970s. Most of us have seen data visualisations in newspapers or elsewhere. Some of us have activity trackers or lifelogging apps, and can view representations of our personal data according to preset templates over which we have very little control. Very few of us know how to actually gather, download and analyse or visualise data ourselves. And not many of us have expertise in any of the methods required to really analyse data well: we don’t know much about statistics or about reliability and uncertainty. And the visualisations and the software and the gadgets generally hide the uncertainties of data from us. They present our data to us as “miniatures of reality,” as The Truth.


I think the word dataism is one that might actually make sense to the general public. Because while the debates among humanities scholars and digital humanists about the shortcomings of big data are useful, this is a topic that is increasingly important for the public, for everyone. If a computer’s analysis of my unconscious emotions or “cerebrally sincere” ideal self-image is going to be seen as more genuine than my own report of my feelings and ideas, what does that do to me? To us? To the way we see each other? Is this the true post-human?

This is a topic that the humanities, and the digital humanities in particular, need to address. We need to discuss what data is and what it represents academically and carefully as scholars, and we also need to talk with the general public about what data can and can’t do.

Dataism is a societal challenge that cannot be understood without the humanities. Computers are wonderful tools for measuring and counting anything and everything. But we need to think about what measurements can tell us.

3 thoughts on “Cerebrally sincere? Why the humanities need to address dataism.”

  1. Harald

    Dataism seems like a 21st Century reenactment of the discussions on positivism in social sciences and humanities 50 years ago.

    1. Jill

      Yes. I think we forgot those debates. A new kind of data and we forget what we learned long ago.

  2. Kent Lundgren

    Our history, knowledge and perspective determine what we see.

