When I was at the INDVIL workshop about data visualisation on Lesbos a couple of weeks ago, everybody kept citing Donna Haraway. “It’s the ‘god trick’ again,” they’d say, referring to Haraway’s 1988 paper on Situated Knowledges. In it, she uses vision as a metaphor for the way science has tended to imagine knowledge about the world.
The eyes have been used to signify a perverse capacity–honed to perfection in the history of science tied to militarism, capitalism, colonialism, and male supremacy–to distance the knowing subject from everybody and everything in the interests of unfettered power. (p. 581)
Haraway connects this to what I would call machine vision (“..satellite surveillance systems, home and office video display terminals, cameras for every purpose from filming the mucous membrane lining the gut cavity of a marine worm living in the vent gases on a fault between continental plates to mapping a planetary hemisphere elsewhere in the solar system..”) and states that these technologies don’t just pretend to be all-seeing, objective and complete, they also make this seem ordinary, part of everyday life:
Vision in this technological feast becomes unregulated gluttony; all seems not just mythically about the god trick of seeing everything from nowhere, but to have put the myth into ordinary practice. (p. 581)
Of course, “that view of infinite vision is an illusion, a god trick” (p. 582). But it’s not an illusion that we seem to have escaped since 1988. Google’s satellite maps, for instance, have that lovely feel of “seeing everything from nowhere.” I heard Rob Tovey present a fascinating paper about this at the Post Screen Festival in Lisbon a couple of years ago (“God’s Eye View: The Satellite Photography of Google“, 2016), noting not only the “god trick,” but also the mechanics of how these images are created from multiple photographs using specific projection techniques rather than others. A map is far from objective.
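Rob Tovey’s point about projection can be made concrete. Google’s map and satellite tiles use the Web Mercator projection, which stretches the world more and more the further you get from the equator. Here is a quick sketch (the function name is mine, but the one-over-cosine-squared area formula is the standard one for Mercator):

```python
# How the (Web) Mercator projection inflates apparent area with latitude.
# The linear stretch at latitude phi is 1/cos(phi), so area grows by 1/cos(phi)^2.
import math

def mercator_area_scale(lat_deg):
    """Factor by which Mercator inflates area at a given latitude."""
    return 1 / math.cos(math.radians(lat_deg)) ** 2

print(mercator_area_scale(0))              # equator: 1.0, no inflation
print(round(mercator_area_scale(60), 1))   # Bergen-ish latitude: 4.0
print(round(mercator_area_scale(71), 1))   # northern Norway: about 9.4
```

That ninefold inflation is why Greenland looks the size of Africa on the map – one very concrete sense in which the “view from nowhere” is anything but neutral.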
Haraway concludes that the only way to achieve any kind of objectivity in science (for her this is a feminist point, though one valid for all science) is to admit that knowledge is partial and situated. Perhaps something like a 360-degree photosphere, taken by an individual like myself using Google’s Street View app on my phone, could be classified as an example of a visual position that is partial?
If you click through that screenshot to see the way Google displays my photo, you’ll see you can drag it around to see everything I saw in every direction.
Well, almost everything I saw. If you look down, you won’t see my feet.
Google edited them out.
The knowing self is partial in all its guises, never finished, whole, simply there and original; it is always constructed and stitched together imperfectly, and therefore able to join with another, to see together without claiming to be another. Here is the promise of objectivity: a scientific knower seeks the subject position, not of identity, but of objectivity, that is, partial connection.
Those 360 images are certainly constructed and stitched together, and have a more specific standpoint or position, maybe even an implicit subject position from which you see. The glitches in the stitching together of the images remind us that they are imperfect, partial representations.
And yet the human is edited out.
Knowledge from the point of view of the unmarked is truly fantastic, distorted, and irrational. The only position from which objectivity could not possibly be practiced and honoured is the standpoint of the master, the Man, the One God, whose Eye produces, appropriates, and orders all difference. (..) Positioning is, therefore, the key practice in grounding knowledge organised around the imagery of vision. (p. 587)
I wonder whether today it is Google and technology, rather than the patriarchal male master, whose “Eye produces, appropriates, and orders all difference.”
And of course, as the scholars at our workshop about data visualisation pointed out, data visualisations are another way in which information is presented as objective, as seen from a disembodied, neutral viewpoint. The kind of viewpoint that doesn’t exist.
Above all, rational knowledge does not pretend to disengagement: to be from everywhere and so nowhere, to be free from interpretation, from being represented, to be fully self-contained or fully formalisable. (p. 590)
And of course, data visualisations tend to show the big picture. It’s nicely organised, you can see the patterns, and there are no “troubling details,” as Johanna Drucker puts it in Graphesis (2014).
That’s the god trick alright.
Algorithms, big data and machine learning are having a growing impact on our society, and will soon be used in every sector: schools, the justice system, the police, health care and more. We need more knowledge and public debate about this topic, and I was glad to give two talks about it in the past month, one long and one short – and here are the videos if you’d like to watch them!
Last Wednesday I gave a talk at Bergen Public Library (Bergen offentlige bibliotek) to a full house, followed by one of the best discussions I have taken part in. Not just IT people, students and Facebook users, but also health workers, kindergarten teachers and psychiatrists spoke about how algorithms are used in their professions, and about the doubts and worries they and their colleagues have. The talk was streamed, and you can watch the whole thing here:
In March I was invited to give a ten-minute talk to 600 municipal politicians at Kommunalpolitisk toppmøte, a summit whose theme was digital exclusion. I argued that digital exclusion is about more than access to the internet: we also need to think about how groups and individuals can be excluded or discriminated against through algorithmic governance.
A number of good books on this topic have come out in recent months – most of them from the US, where these developments have come further than here. If you know of more books, especially Norwegian or European ones, I hope you’ll leave a tip in the comments!
Bår Stenvik: Informasjonen (novel). Tiden, 2018
I’m going to read this novel as soon as I’ve finished Ada Palmer’s future society: “Informasjonen is a love drama between a man, a woman and a computer program.”
The Norwegian Data Protection Authority’s report Hva vet de om deg? (What Do They Know About You?), 2018.
A report showing what four ordinary Norwegian businesses store about you as a customer.
Virginia Eubanks: Automating Inequality – How High-Tech Tools Profile, Police, and Punish the Poor. Macmillan, 2018.
This book retells three stories showing how automating the allocation of welfare services can go badly wrong. Her argument is that algorithmic governance, as it has been used so far, reproduces inequality. Listen to a radio interview about the book, or watch her present it herself.
Safiya Umoja Noble: Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press, 2018.
The first time Noble googled “black girls” to find activities for her ten-year-old, all she got was porn. The book opens with this example, but goes much further, showing how Google and other search engines have deep-seated problems with racism. Watch a short talk where Noble presents her book.
Meredith Broussard: Artificial Unintelligence – How Computers Misunderstand the World. MIT Press, 2018.
Broussard is a software developer and journalist, and in this book she shows how technology definitely does not solve every problem.
Andrew Guthrie Ferguson : The Rise of Big Data Policing – Surveillance, Race, and the Future of Law Enforcement. NYU Press, 2017.
In Norway, the customs service has ordered software that uses big data to predict who is likely to break the law. In Denmark, the police use “predictive policing”. Big data and algorithms could change police work in Norway too – and it is important to understand what that would involve.
Cathy O’Neil: Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Penguin, 2016.
I went to a talk by O’Neil last year, and she is a blazingly good speaker. Among other things, you can watch a short TED talk she has given on the topic.
Best Guess for this Image: Brassiere (The sexist, commercialised gaze of image recognition algorithms)
Did you know the iPhone will search your photos for brassieres and breasts, but not for shoulders, knees and toes? Or boxers and underpants either for that matter. “Brassiere” seems to be a codeword for cleavage and tits.
I discovered this not because I suddenly wanted to see photos of bras, but because I did a reverse image search for a sketch of a pregnant woman’s belly selfie, to see whether the sketch had anonymised it sufficiently that Google wouldn’t find the original. Lo and behold, all the “related images” were of porn stars with huge tits and ads of busty women trying to sell bras – which surprised me, given that the original was of a woman with a singlet pulled up to show her pregnant belly. I would have expected person, woman, pregnant, selfie, belly, bare arms to show up as descriptions, but brassiere? Was that really the most salient feature of the image? Apparently so, to Google.
Usefully, one of the text hits for the image was to this article explaining with horror that Apple has “brassiere” as a search term for your photos. Well, clearly Google does too.
I promptly did a search in the photos on my iPhone, and was appalled to see a selfie I took one morning show up – I wasn’t wearing a bra, but a singlet, and the image is mostly of my face, neck and upper chest.
Seriously? I suppose you might think that the main point of that image was the triangle of my singlet that could have been a bra, but really?
The other images are a screenshot of some Kim Kardashian thing I saved for some reason I don’t remember, and fittingly enough, in the middle there, is a video of part of Erica Scourti’s excellent video poem, Body Scan, which is precisely about how commercial image recognition commodifies the human body. (Here is a paper I wrote about Body Scan)
The app Erica Scourti was using, CamFind, is in some ways more nuanced than the iPhone’s image recognition, which has no idea how to look for a human hand or a knee or a foot. That’s because those categories weren’t among the ones the system was trained to recognise.
Yeah. Somebody decided to program the systems to look for breasts and bras, but not for knees or toes or backs. I wonder why.
In her book on racist search engine results, Safiya Umoja Noble argues that one reason why a Google Search for “black girls” a few years ago only gave you porn results was that Google prioritised sales and profit above all, and so prioritised the results people would pay for rather than, say, showing web sites that black ten-year-old girls might like to visit.
Presumably that’s why “brassiere” is a search term for my photos, too. Some people will pay to see photos of tits and some people want to sell bras. The fact that other people want to sell socks and mittens just isn’t as lucrative as bras and tits.
Actually, my iPhone can find photos of mittens. Or at least, it can find photos I took that it thinks show mittens. I guess they must have fed the machine learning algorithm more photos of breasts and brassieres than of mittens, because the mitten recognition is far less accurate.
Two feet. My theatrical daughter’s efforts at terrifying me by sending a photo of her hand gorily made-up in stage makeup. A picture of a kid in a snapchat filter drinking juice. None of those are mittens.
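I can only guess at why the mitten recognition is so much worse, but class imbalance in training data is a well-known mechanism. Here is a toy sketch in pure Python – the single feature, the data and the classifier are entirely made up for illustration, nothing from Apple’s or Google’s actual systems – of how a hundred-to-one imbalance pulls ambiguous cases toward the over-represented label:

```python
# Toy illustration: a class-imbalanced training set biases a naive classifier.
import random

random.seed(0)

# Made-up training data: far more "brassiere" examples than "mitten".
# Each example is one fake feature value drawn from that class's range;
# the ranges overlap between 0.4 and 0.6, so inputs there are ambiguous.
def sample(label, n):
    lo, hi = (0.4, 1.0) if label == "brassiere" else (0.0, 0.6)
    return [(random.uniform(lo, hi), label) for _ in range(n)]

train = sample("brassiere", 1000) + sample("mitten", 10)

def classify(x, train):
    # Score each label by (class prior) * (fraction of that class near x):
    # a crude density estimate, so the bigger class dominates ambiguous inputs.
    scores = {}
    for label in ("brassiere", "mitten"):
        examples = [v for v, l in train if l == label]
        near = sum(1 for v in examples if abs(v - x) < 0.1)
        scores[label] = (len(examples) / len(train)) * (near / len(examples))
    return max(scores, key=scores.get)

# An ambiguous input in the overlap zone gets pulled to the big class.
print(classify(0.5, train))  # prints brassiere
```

Train it with a hundred times more mitten examples instead, and the same ambiguous input flips to “mitten” – the labels a system is good at are the labels someone chose to feed it.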
It’s entirely probable that the image recognition algorithms were trained on pornography and ads for bras. There’d be a precedent for it: the Lena image, the most commonly used test image for image compression algorithms, is a scan of a Playboy centrefold, so that naked shoulder actually leads to a naked body in a fake Wild West setting. (This image is one of the main cases discussed in Dylan Mulvin’s forthcoming book Proxies: The Cultural Work of Standing In.)
So why does this matter? It matters because these algorithms are organising our personal photos, memories we have captured of ourselves and of our loved ones. Those algorithms that create those cute video compilations of my photos, showing the kids growing up over the years, or all the smiley photos from our family holiday – they are also scanning my private photos for breasts and cleavage.
I really don’t like that my phone thinks the best way to describe my selfie is “brassiere”. I hate that. Image recognition needs to do something more than simply replicate a commercialised version of the male gaze.
Amazing news today: my ERC Consolidator project is going to be funded! This is huge news: it’s a €2 million grant that will allow me to build a research team to work for five years to understand how machine vision affects our everyday understanding of ourselves and our world.
Here is the short summary of what the project will do:
In the last decade, machine vision has become part of the everyday life of ordinary people. Smartphones have advanced image manipulation capabilities, social media use image recognition algorithms to sort and filter visual content, and games, narratives and art increasingly represent and use machine vision techniques such as facial recognition algorithms, eye-tracking and virtual reality.
The ubiquity of machine vision in ordinary peoples’ lives marks a qualitative shift where once theoretical questions are now immediately relevant to the lived experience of ordinary people.
MACHINE VISION will develop a theory of how everyday machine vision affects the way ordinary people understand themselves and their world through 1) analyses of digital art, games and narratives that use machine vision as theme or interface, and 2) ethnographic studies of users of consumer-grade machine vision apps in social media and personal communication. Three main research questions address 1) new kinds of agency and subjectivity; 2) visual data as malleable; 3) values and biases.
MACHINE VISION fills a research gap on the cultural, aesthetic and ethical effects of machine vision. Current research on machine vision is skewed, with extensive computer science research and rapid development and adaptation of new technologies. Cultural research primarily focuses on systemic issues (e.g. surveillance) and professional use (e.g. scientific imaging). Aesthetic theories (e.g. in cinema theory) are valuable but mostly address 20th century technologies. Analyses of current technologies are fragmented and lack a cohesive theory or model.
MACHINE VISION challenges existing research and develops new empirical analyses and a cohesive theory of everyday machine vision. This project is a needed leap in visual aesthetic research. MACHINE VISION will also impact technical R&D on machine vision, enabling the design of technologies that are ethical, just and democratic.
The project is planned to begin in the second half of 2018, and will run until the middle of 2023. I’ll obviously post more as I find out more! For now, here’s a very succinct overview of the project, or you can take a look at this five-page summary of the project, which was part of what I sent the ERC when I applied for the funding.
I’m on sabbatical from teaching at the University of Bergen this semester, and am spending the autumn here at MIT. Hooray!
It’s a dream opportunity to get to hang out with so many fascinating scholars. I’m at Comparative Media Studies/Writing, where William Uricchio has done work on algorithmic images that meshes beautifully with my machine vision project plans, and where a lot of the other research is also very relevant to my interests. I love being able to see old friends like Nick Montfort, and look forward to making new friends and catching up with old conference buddies. And just looking at the various event calendars makes me dizzy to think of all the ideas I’ll get to learn about.
Nancy Baym and Tarleton Gillespie at Microsoft Research’s Social Media Collective have also invited me to attend their weekly meetings, and the couple of meetings I’ve been at so far have been really inspiring. On Tuesday I got to hear Ysabel Gerrard speaking about her summer project, where she used Tumblr, Pinterest and Instagram’s recommendation engines to find content about eating disorders that the platforms have ostensibly banned. You can’t search for eating disorder-related hashtags, but there are other ways to find it, and if you look at that kind of content, the platforms offer you more, in quite jarring ways. Nancy tweeted this screenshot from one of Ysabel’s slides – “Ideas you might love” is maybe not the best introduction to the themes listed…
Thinking about ways people work around censorship could clearly be applied to many other groups, both countercultures that we (and I know we is a slippery term) may want to protect and criminals we may want to stop. There are some ethical issues to work out here – but certainly the methodology of using the platforms’ recommendation systems to find content is powerful.
Yesterday I dropped by the 4S conference: Society for Social Studies of Science. It’s my first time at one of these conferences, but it’s big, with lots of parallel sessions and lots of people. I could only attend one day, but it’s great to get a taste of it. I snapchatted bits of the sessions I attended if you’re interested.
Going abroad on a sabbatical means dealing with a lot of practical details, and we’ve spent a lot of time just getting things organised. We’re actually living in Providence, which is an hour’s train ride away. Scott is affiliated with Brown, and we thought Providence might be a more livable place to be. It was pretty complicated just getting the kids registered for school – they needed extra vaccinations, since Norway has a different schedule, and they had to have a language test and then they weren’t assigned to the school three blocks from our house but will be bussed to a school across town. School doesn’t even start until September 5, so Scott and I are still taking turns spending time with the kids and doing work. We’re also trying to figure out how to organize child care for the late afternoon and early evening seminars and talks that seem to be standard in the US. Why does so little happen during normal work hours? Or, to be more precise, during the hours of the day when kids are in school? I’m very happy that Microsoft Research at least seems to schedule their meetings for the day time, and a few events at MIT are during the day. I suppose it allows people who are working elsewhere to attend, which is good, but it makes it hard for parents.
I’ll share more of my sabbatical experiences as I get more into the groove here. Do let me know if there are events or people around here that I should know about!
I’m going to be spending next semester as a visiting scholar at MIT’s Department of Comparative Media Studies, and there are a lot of practical things to organize. We have rented a flat there, but still need to rent out our place at home (anyone need a place in Bergen from August to December?). I’ve done the paperwork for bringing Norwegian universal health insurance with us to the US, and still have a few other forms to fill out for taxes. I think we can’t do anything about the kids’ schools before we get there.
But today’s big task was going to the US embassy in Oslo to apply for a visa.
Notes of interest about visiting the US embassy:
- They’ll store your phone and other small items in a box at the gate, but no large items or laptops.
- There are no clocks on the walls of the waiting room. Rows of chairs face the counters where the embassy employees take your paperwork and then call you up for your interview.
- They only let you bring your paperwork with you, nothing else. It was a two hour wait. There is no reading material provided except some children’s books. So the room was full of silent people with no phones, staring into space. The lack of phones or newspapers did NOT make them speak to each other.
- I had luckily brought a printout of a paper that needs revising and they seemed to think that was part of my paperwork so didn’t confiscate it. They wouldn’t let me bring my book or even my pencil. Luckily there was a pen chained to a dish at a counter not being used so I borrowed that and now have a wonderfully marked up essay that, once my computer is out, I can hopefully fix in a jiffy after my two hours of paper-based work on it. I was the only person in the waiting room not staring into space.
I am so excited: I won the John Lovas Memorial award last night at the Computers and Writing Conference for my Snapchat Research Stories! The award is given by Kairos: A Journal of Rhetoric, Technology, and Pedagogy, the leading digitally-native journal for “scholarship that examines digital and multimodal composing practices, promoting work that enacts its scholarly argument through rhetorical and innovative uses of new media.” Here are the editors, Cheryl Ball and Doug Eyman, flanking my friend and former colleague Jan Rune Holmevik, who was at the conference and very kindly accepted the award for me:
The award has been given to a long and impressive list of academic bloggers. This is the first year it has been opened up to other forms of social media knowledge sharing, and I am honored to be the first award-winner to win for something other than blogging. Yay!
The John Lovas Award is sponsored by Kairos in recognition and remembrance of John Lovas’s contributions to the legitimation of academic knowledgesharing using the emerging tools of Web publishing, from blogging, to newsletters, to social media. Each year the award underscores the valuable contributions that such knowledge-creation and community-building have made to the discipline by recognizing a person or project whose active, sustained engagement with topics in rhetoric, composition, or computers and writing using emerging communication tools best exemplifies John’s model of a public intellectual.
John Lovas was an influential early scholarly blogger, especially important within the fields of composition and rhetoric. I’ve been rereading some of his blog posts, and note that he experimented with visual argumentation in his blog, something that was quite unusual at the time: it was more complicated to get images off cameras and onto the web than it is now, and bandwidth was limited too, so images had to be carefully compressed in a photo editor so they would load before viewers got bored. So I like to think that John Lovas would have appreciated the combination of visual and textual communication about research that I and other academics on Snapchat are exploring.
Here is an archive of some of my Snapchat Research Stories – they are better on snapchat, add me on Snapchat to see them live – I’m jilltxt. Thank you so much for this recognition – I really wish I could have been at the conference.
I found an old notebook when I was tidying my desk today.
It’s from 1997 and 1998, when I was working on my MA in comparative literature and writing about creative non-fiction hypertext.
I read all the 1990s hypertext theory and took careful notes.
Thinking about what David Kolb wrote about scholarly hypertext and whether you can actually do philosophy in a non-linear format.
I worried about reading too much and not writing enough.
And noted that while Walter Ong was interesting, he didn’t mention the internet.
Then I got to go to my first conference! ACM Hypertext 1998 – it was amazing. My MA advisor, Espen Aarseth, paid for my flight and hotel out of a grant he had and gave me two tasks: hand out flyers for a conference he was organising, and go and introduce myself to Stuart Moulthrop and tell him hi from Espen.
I have very thorough notes from the conference. Very thorough.
I even took thorough notes from discussions in the panel on hypertext and time. I love that Markku Eskelinen asked “Where is Genette?” Of course he did.
I was so touched to see these traces of my younger self. So earnest. So diligent.
Snapchat’s live stories usually present the world in a way that emphasises diversity, tolerance and respect for different races, religions and sexualities. But sometimes they fail miserably – like in the Live Story about yesterday’s Australia Day, which is now available globally.
Australia Day is celebrated on January 26, the day the first fleet arrived in Australia from Britain, and there is a strong movement to #changethedate so that it celebrates Australia, and not the European invasion of indigenous Australian land. That movement is actually so strong that yesterday 50,000 people marched in Melbourne, and thousands more in other cities all over Australia. Here is a photo of the rally in Melbourne yesterday:
Or take a look at The Guardian’s live blog to see a more diverse view of the day, including the formal celebrations and more.
Now look at how Snapchat presents it in its Live Story. I’ve taken one screenshot of each snap, but they’re all videos so imagine panning and sound. The story is still on Snapchat as of Jan 27, 09:53 am Central European Time.
The first seven snaps are all of young, white people partying or at the pool. The last three are of fireworks.
It’s a short Live Story – the Live Story from the Women’s March last weekend had 71 snaps, so was obviously of a different scope altogether. But what an unbelievably skewed version of Australia Day this shows. What a skewed and stereotypical version of the Australian population it shows. Especially seen in contrast to the coverage of the inauguration and the Women’s March last weekend, this is pretty astounding.
I’ve previously noted that the Norwegian national day as seen on Snapchat appears to be nothing but young people partying in national costumes, which is not how the day looks to me. No doubt most of Snapchat’s portrayals of national, “exotic” festivals (at least exotic to young Americans) leave out a lot, or present things in a skewed manner. But at least Norway doesn’t have 50,000 people protesting the day that Snapchat somehow forgets to include in their story.
It looks as though Triple J may have sponsored this Live Story, based on the emphasis on their Hottest 100 in the first snaps. Triple J has been the radio channel for youth and music for decades, but their emphasis on a music countdown on what more and more people are calling Invasion Day rather than Australia Day may be ripe for change.
Snapchat also spreads very partial information about the politics of Australia Day through its selfie filters and geofilters. I couldn’t access them in Norway, but the Live Story seems to have a couple of non-branded Australia Day geofilters, and some sponsored by Triple J. If Triple J did sponsor the Live Story, or at least had significant influence on it, perhaps Snapchat’s US team, which seems to be pretty savvy about diversity in their own country, simply didn’t pay much attention. That would also explain why the narrative arc of this Live Story is pretty flat compared to many of the US Live Stories, which are more skilfully put together.
On Twitter, Elle Hunt shows us how politically biased the selfie filters are, too. This is what happens when advertisers control our means of production:
Here are the full images of her snaps:
But hey. Most people on Twitter like it. They love stories about young, white people getting lit.
But if Snapchat aims to be a news channel, and to spread public information about the public sphere, we need to know where they stand and especially, who is paying for it. In their Terms of Service, they write that
Live, Local, and any other crowd-sourced Services are inherently public and chronicle matters of public interest (..)
If so, their financing and bias should be transparent to the viewers.