When I was at the INDVIL workshop about data visualisation on Lesbos a couple of weeks ago, everybody kept citing Donna Haraway. “It’s the ‘god trick’ again,” they’d say, referring to Haraway’s 1988 paper on Situated Knowledges. In it, she uses vision as a metaphor for the way science has tended to imagine knowledge about the world.

The eyes have been used to signify a perverse capacity – honed to perfection in the history of science tied to militarism, capitalism, colonialism, and male supremacy – to distance the knowing subject from everybody and everything in the interests of unfettered power. (p. 581)

Haraway connects this to what I would call machine vision (“…satellite surveillance systems, home and office video display terminals, cameras for every purpose from filming the mucous membrane lining the gut cavity of a marine worm living in the vent gases on a fault between continental plates to mapping a planetary hemisphere elsewhere in the solar system…”) and argues that these technologies don’t just pretend to be all-seeing, objective and complete, they also make this seem ordinary, part of everyday life:

Vision in this technological feast becomes unregulated gluttony; all seems not just mythically about the god trick of seeing everything from nowhere, but to have put the myth into ordinary practice. (p. 581)

Of course, “that view of infinite vision is an illusion, a god trick” (p. 582). But it’s not an illusion we seem to have escaped since 1988. Google’s satellite maps, for instance, have that lovely feel of “seeing everything from nowhere.” I heard Rob Tovey present a fascinating paper about this at the Post Screen Festival in Lisbon a couple of years ago (“God’s Eye View: The Satellite Photography of Google”, 2016). He noted not only the “god trick” but also the mechanics of how these images are composited from multiple photographs using particular projection techniques rather than others. A map is far from objective.
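As a rough illustration (a sketch of the standard Web Mercator formulas used by most web slippy maps, not Google’s actual imaging pipeline), the projection itself builds distortion into the seemingly objective map: apparent area is inflated by the square of the secant of the latitude, so northern cities like Bergen look far larger, relative to the equator, than they are.

```python
import math

def web_mercator(lat_deg, lon_deg, zoom=0, tile_size=256):
    """Project latitude/longitude onto Web Mercator pixel coordinates,
    the projection used by most web map tiles."""
    scale = tile_size * 2 ** zoom
    x = (lon_deg + 180.0) / 360.0 * scale
    lat = math.radians(lat_deg)
    y = (1.0 - math.log(math.tan(lat) + 1.0 / math.cos(lat)) / math.pi) / 2.0 * scale
    return x, y

def area_inflation(lat_deg):
    """Mercator stretches both axes by sec(latitude), so apparent
    area is inflated by sec^2(latitude) relative to the equator."""
    return (1.0 / math.cos(math.radians(lat_deg))) ** 2

# Bergen sits at roughly 60.4 degrees north: its area on the map is
# inflated about fourfold compared with a location on the equator.
print(round(area_inflation(60.4), 1))  # → 4.1
```

The choice of projection, in other words, is a standpoint: every map privileges some parts of the world at the expense of others.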

Section of a satellite map image of part of the city of Bergen, showing streets, trees, buildings, parked cars.

Haraway’s conclusion is that the only way of achieving any kind of objectivity in science – and for her this is a feminist point, though it is valid for all science – is by admitting that knowledge is partial and situated. Perhaps something like a 360-degree photosphere, taken by an individual like myself using Google’s Street View app on my phone, could count as an example of a visual position that is partial?

If you click through that screenshot to see the way Google displays my photo, you’ll see you can drag it around to see everything I saw in every direction.

Well, almost everything I saw. If you look down, you won’t see my feet.

A photograph of a puddle in a muddy path. A narrow google map below it shows where the photo was taken.

Google edited them out.

The knowing self is partial in all its guises, never finished, whole, simply there and original; it is always constructed and stitched together imperfectly, and therefore able to join with another, to see together without claiming to be another. Here is the promise of objectivity: a scientific knower seeks the subject position, not of identity, but of objectivity, that is, partial connection.

Those 360 images are certainly constructed and stitched together, and have a more specific standpoint or position, maybe even an implicit subject position from which you see. The glitches in the stitching together of the images remind us that they are imperfect, partial representations.

And yet the human is edited out.

Knowledge from the point of view of the unmarked is truly fantastic, distorted, and irrational. The only position from which objectivity could not possibly be practiced and honoured is the standpoint of the master, the Man, the One God, whose Eye produces, appropriates, and orders all difference. […] Positioning is, therefore, the key practice in grounding knowledge organised around the imagery of vision. (p. 587)

I wonder whether today it is Google and technology, rather than the patriarchal male master, whose “Eye produces, appropriates, and orders all difference.”

And of course, as the scholars at our workshop about data visualisation pointed out, data visualisations are another way in which information is presented as objective, as seen from a disembodied, neutral viewpoint. The kind of viewpoint that doesn’t exist.

Above all, rational knowledge does not pretend to disengagement: to be from everywhere and so nowhere, to be free from interpretation, from being represented, to be fully self-contained or fully formalisable. (p. 590)

And of course, data visualisations tend to show the big picture. It’s nicely organised, you can see the patterns, and there are no “troubling details,” as Johanna Drucker puts it in Graphesis (2014).

Data visualisation showing flows of refugees from some countries to others.

That’s the god trick alright.


