Lesson plan: DIKULT103 29.01.2019 – Video Game Aesthetics and Orientalism in Games

Readings: Understanding Video Games Chapter 5; Woke Gaming Chapter 6 (Kristin Bezio, "The Perpetual Crusade: Rise of the Tomb Raider, Religious Extremism, and the Problem of Empire," pp. 119–138)

Learning goals: After doing the reading, taking the quiz and attending the class, students can

  • Explain how video game aesthetics incorporate game mechanics as well as visuals, sounds, etc. 
  • Use some of the terms in Understanding Video Games chapter 5 to describe games
  • Explain Said’s concept of orientalism and discuss it in relation to video games
Continue Reading →

18. February 2019 by Jill
Categories: Uncategorized | Leave a comment

Hostile machine vision

One of our goals in MACHINE VISION is to analyse how machine vision is represented in art, stories, games and popular culture. A really common trope is showing machine vision as hostile and as dangerous to humans. Machine vision is used as an effective visual metaphor for something alien that threatens us.

My eight-year-old and I watched Ralph Breaks the Internet last weekend. I found it surprisingly satisfying – I had been expecting something inane like that emoji movie, but the story was quite engaging, with an excellent exploration of the damaging effects of neediness in friendships. But my research brain switched on in the computer virus scene, towards the end of the movie, because we see "through the eyes of the virus". Here is a shot of the virus, depicted as a dark swooshing creature with a single red eye:


And here you see the camera switch to what the virus sees. It is an "insecurity virus" that scans for "insecurities" (such as Vanellope’s anxious glitching and Ralph’s fear of losing Vanellope) and replicates them.

And of course it uses easily recognisable visual cues that signify "machine vision" – a computer is seeing here.

I noticed an almost-identical use of this visual metaphor on another visit to the cinema with the kids, though this time in an ad from the Australian Cancer Council. Here, the sun is presented as an alien presence seeing human skin.

The way humans see skin is not the same way the sun sees skin. And each time the sun sees your skin, when the UV is 3 or above, it’s doing damage beneath the surface. It builds up, until one day, it causes a mutation in your DNA, which can turn to skin cancer. Don’t let the sun see your DNA. Defend yourself.

The visuals are different. While Ralph Breaks the Internet uses an overlay of data, the ad shifts from a "human" camera angle to a zoomed-in, black-and-white, shaky view that fades out around the edges of the image, and then appears to penetrate the skin to show what we assume is the DNA mutating. The sound effects also suggest something dangerous, perhaps mechanical.

Certainly machine vision isn’t always represented as hostile. It’s often presented as useful, or protective, or simply as a tool. This year we are going to be tracking different representations and simulations of machine vision in order to sort through the different ways our culture sees machine vision. Hostile is definitely one of those ways.

If you have suggestions for other examples we should look at, please leave a comment and tell us about them!

11. February 2019 by Jill
Categories: Machine Vision | 2 comments

Seeing brainwaves

Last week I was in London, where I visited Pierre Huyghe’s exhibition Uumwelt at the Serpentine Gallery. You walk in, and there are flies in the air, flies and a large screen showing images flickering past, fast. The images are generated by a neural network and are reconstructions of images humans have looked at, but that the neural network hasn’t had direct access to – they are generated based on brainwave activity in the human subjects.

The images flicker past in bursts, fast fast fast fast fast slow fast fast fast, again and again, never resting. Jason Farago describes the rhythm as the machine’s "endless frantic attempts to render human thoughts into visual form", and frantic describes it well, but it’s a nonhuman frantic, a mechanical frantic that doesn’t seem harried. It’s systematic, mechanical, but never resting, never quite sure of itself but trying again and again. I think (though I’m not sure) that this is an artefact of the fMRI scanning or the processing of the neural network that Huyghe has chosen to retain, rather than something Huyghe has introduced.

Huyghe uses technology from Yukiyasu Kamitani’s lab at Kyoto University. A gif Kamitani posted to Twitter gives a glimpse into how the system uses existing photographs as starting points for figuring out what the fMRI data might mean – the images that flicker by on the right-hand side sometimes have background features like grass or a horizon line that are not present in the left image (the image shown to the human). Here is a YouTube version of the gif he tweeted:

The images and even the flickering rhythms of the Kamitani Lab video are really quite close to Huyghe’s Uumwelt. At the exhibition I thought perhaps the artist had added a lot to the images, used filters or altered colours or something, but I think he actually just left the images pretty much as the neural network generated them. Here’s a short video from one of the other large screens in Uumwelt – there were several rooms in the exhibition, each with a large screen and flies. Sections of paint on the walls of the gallery were sanded down to show layers of old paint, leaving large patterns that at first glance looked like mould.

The neural network Kamitani’s lab uses has a training set of images (photographs of owls and tigers and beaches and so on) which have been viewed by humans who were hooked up to fMRI, so the system knows the patterns of brain activity that are associated with each of the training images. Then a human is shown a new image that the system doesn’t already know, and the system tries to figure out what that image looks like by combining features of the images it knows produce similar brain activity. Or to be more precise, "The reconstruction algorithm starts from a random image and iteratively optimize the pixel values so that the DNN [DNN = deep neural network] features of the input image become similar to those decoded from brain activity across multiple DNN layers" (Shen et al. 2017). Looking at the lab’s video and at Uumwelt, I suspect the neural network has seen a lot of photos of puppy dogs.
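To make the quoted description a little more concrete, here is a minimal sketch of that kind of iterative, feature-matching reconstruction loop. This is not the Kamitani Lab's code: I'm assuming a pretrained VGG network as a stand-in for the DNN, and the decoded_features here are faked from a random image purely so the example runs – in the real pipeline they would come from the fMRI decoder.

```python
# Minimal sketch of the reconstruction idea described in the quote above,
# not the Kamitani Lab's actual code. Assumptions: a pretrained VGG19 stands
# in for the DNN, and `decoded_features` (normally decoded from fMRI data)
# are faked here so the script runs end to end.

import torch
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()

# Layers whose activations we try to match (indices into vgg.features).
target_layers = [4, 9, 18]

def dnn_features(image):
    """Collect activations at the chosen layers for a 1x3xHxW image tensor."""
    feats, x = {}, image
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in target_layers:
            feats[i] = x
    return feats

# Stand-in for features "decoded from brain activity" (hypothetical values).
with torch.no_grad():
    decoded_features = {k: v.detach() for k, v in
                        dnn_features(torch.rand(1, 3, 224, 224, device=device)).items()}

# Start from a random image and iteratively optimise its pixels so that its
# DNN features approach the decoded features across multiple layers.
recon = torch.rand(1, 3, 224, 224, device=device, requires_grad=True)
optimizer = torch.optim.Adam([recon], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    feats = dnn_features(recon)
    loss = sum(torch.nn.functional.mse_loss(feats[i], decoded_features[i])
               for i in target_layers)
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        recon.clamp_(0, 1)  # keep pixel values in a displayable range
```

The core move is simply gradient descent on the pixels of a random image until its network activations line up with whatever was decoded from the brain scan – which helps explain why the reconstructions inherit so much from the network's training photos.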

I’ve read a few of the Kamitani Lab’s papers, and as far as I’ve seen, they don’t really discuss how they conceive of vision in their research. I mean, what exactly does the brain activity correspond to? Yes, when we look at an image, our brain reacts in ways that deep neural networks can use as data to reconstruct an image that has some similarities with the image we looked at. But when we look at an image, is our brain really reacting to the pixels? Or are we instead imagining a puppy dog or an owl or whatever? I would imagine that if I look at an image of somebody I love my brain activity will be rather different than if I look at an image of somebody I hate. How would Kamitani’s team deal with that? Is that data even visual?

Kamitani’s lab also tried just asking people to imagine an image they had previously been shown. To help them remember the image, they were "asked to relate words and visual images so that they can remember visual images from word cues" (Shen et al. 2017). As you can see below, it’s pretty hard to tell the difference between a subject’s remembered swan and their remembered aeroplane. I wonder if they were really remembering the image at all, or just thinking of the concept or thing itself.


Figure 4 in Shen, Horikawa, Majima and Kamitani’s pre-print Deep image reconstruction from human brain activity (2017), showing the reconstruction of images that humans imagined.

Umwelt means "environment" or "the world around us" in German, and Huyghe has given it an extra u at the start, in what Farago calls a "stutter" that matches the rhythms of the videos, though I had thought of it as more of a negator, an "un-environment". Huyghe is known for his environmental art, where elements of the installation work together in an ecosystem, and the introduction of flies to Uumwelt is of course a way of combining the organic with the machine. Sensors detect the movements of the flies, as well as temperature and other data relating to the movement of humans and flies through the gallery, and this influences the display of images. The docent I spoke with said she hadn’t noticed any difference in the speed or kinds of images displayed, but that the videos seemed to move from screen to screen, or that a set of videos that hadn’t been shown for a while would pop up from time to time. The exact nature of the interaction wasn’t clear. Perhaps the concept is more important than the actuality.
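As an aside for the technically curious, here is a purely speculative sketch – emphatically not Huyghe's system, with every name and threshold invented for illustration – of what "sensor data influencing the display" could look like in code: readings such as fly activity, temperature and visitor counts nudging which clip a screen shows next.

```python
# Hypothetical illustration only: one way sensor readings *might* modulate a
# video playlist. This is a guess at the kind of logic described above, not
# a description of the actual installation. All names and values are invented.

import random

def choose_video(fly_activity: float, temperature: float, visitors: int,
                 playlist: list[str], recently_shown: set[str]) -> str:
    """Pick the next clip, favouring rarely shown clips when the room is lively."""
    # A simple "liveliness" score combining the sensor readings.
    liveliness = fly_activity + 0.1 * visitors + 0.05 * (temperature - 20)

    # When the room is lively, prefer clips that haven't been shown recently.
    rested = [clip for clip in playlist if clip not in recently_shown]
    if liveliness > 1.0 and rested:
        return random.choice(rested)
    return random.choice(playlist)

playlist = ["burst_a.mp4", "burst_b.mp4", "burst_c.mp4"]
print(choose_video(0.2, 21.0, 1, playlist, {"burst_a.mp4"}))   # quiet room
print(choose_video(1.5, 23.5, 12, playlist, {"burst_a.mp4"}))  # busy room
```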

The flies are apparently born and die within the gallery, living their short lives entirely within the artwork. They are fed by the people working at the gallery, and appear as happy as flies usually appear, clearly attracted to the light of the videos.

Dead flies are scattered on the floors. They have no agency in this Uumwelt. At least none that affects the machines.

28. November 2018 by Jill
Categories: Digital Art, Machine Vision | Leave a comment

Updates on algorithms and society talks

I’ve given a few more versions of the “algorithms and society” talks from this spring. You can still see the videos of those talks, but here are a few links to new material I’ve woven into them:

Social credit in China – this story by the Australian Broadcasting Corporation paints a vivid picture of what it might be like to live with this system. It’s hard to know exactly what is currently fact and what is conjecture.

Ray Serrato’s Twitter thread about YouTube recommending fake news about Chemnitz, and the New York Times article detailing the issue.

19. September 2018 by Jill
Categories: Algorithmic bias | 2 comments

Generating portraits from DNA: Heather Dewey-Hagborg’s Becoming Chelsea

Did you know you can generate a portrait of a person’s face based on a sample of their DNA? The thing is, despite companies selling this service to the police to help them identify suspects, it’s not really that accurate. That lack of precision is at the heart of Heather Dewey-Hagborg’s work Probably Chelsea, a display of 30 masks showing 30 possible portraits of Chelsea Manning based on a sample of her DNA that she mailed to the artist from prison. The work is showing at Kunsthall 3.14 here in Bergen until the end of September.

Many masks resembling human faces hang from the ceiling in an art gallery.

Continue Reading →

11. September 2018 by Jill
Categories: Digital Art, Machine Vision, Visualise me | 1 comment

My ERC interview: the full story

It seems more and more research funding is awarded in a two-step process, where applicants who make it to the second round are interviewed by the panel before the final decisions are made. I had never done this kind of interview before I went to Brussels last October, and was quite nervous. I must have done OK, because I was awarded the grant, and my ERC Consolidator project, Machine Vision in Everyday Life: Playful Interactions with Visual Technologies in Digital Art, Games, Narratives and Social Media, officially started on August 1! Hooray!  Continue Reading →

22. August 2018 by Jill
Categories: Academia | Leave a comment

The god trick and the idea of infinite, technological vision

When I was at the INDVIL workshop about data visualisation on Lesbos a couple of weeks ago, everybody kept citing Donna Haraway. “It’s the ‘god trick’ again,” they’d say, referring to Haraway’s 1988 paper on Situated Knowledges. In it, she uses vision as a metaphor for the way science has tended to imagine knowledge about the world. Continue Reading →

21. June 2018 by Jill
Categories: Machine Vision | Leave a comment

Skal samfunnet styres av algoritmer? To foredrag og syv bøker

[English summary: info about two recent talks I gave about algorithmic bias in society]

Algorithms, big data and machine learning have more and more influence on our society, and will soon be used in every part of it: in schools, the justice system, the police, the health service and more. We need more knowledge and public debate about this topic, and I have been glad to give two talks about it over the past month, one long and one short – and here you can watch the videos if you like! Continue Reading →

23. April 2018 by Jill
Categories: Algorithmic bias | Leave a comment

Best Guess for this Image: Brassiere (the sexist, commercialised gaze of image recognition algorithms)

Did you know the iPhone will search your photos for brassieres and breasts, but not for shoulders, knees and toes? Or boxers and underpants either for that matter. “Brassiere” seems to be a codeword for cleavage and tits. Continue Reading →

28. March 2018 by Jill
Categories: Machine Vision, Visualise me | 1 comment

My project on machine vision will be funded by the ERC!

Amazing news today: my ERC Consolidator project is going to be funded! This is huge news: it’s a €2 million grant that will allow me to build a research team to work for five years to understand how machine vision affects our everyday understanding of ourselves and our world.

Three images showing examples of machine vision: Vertov's kinoeye, a game that simulates surveillance, Spectacles for Snapchat.

Here is the short summary of what the project will do:

In the last decade, machine vision has become part of the everyday life of ordinary people. Smartphones have advanced image manipulation capabilities, social media platforms use image recognition algorithms to sort and filter visual content, and games, narratives and art increasingly represent and use machine vision techniques such as facial recognition algorithms, eye-tracking and virtual reality.

The ubiquity of machine vision in ordinary people’s lives marks a qualitative shift: questions that were once theoretical are now immediately relevant to people’s lived experience.

MACHINE VISION will develop a theory of how everyday machine vision affects the way ordinary people understand themselves and their world through 1) analyses of digital art, games and narratives that use machine vision as theme or interface, and 2) ethnographic studies of users of consumer-grade machine vision apps in social media and personal communication. Three main research questions address 1) new kinds of agency and subjectivity; 2) visual data as malleable; 3) values and biases.

MACHINE VISION fills a research gap on the cultural, aesthetic and ethical effects of machine vision. Current research on machine vision is skewed, with extensive computer science research and rapid development and adaptation of new technologies. Cultural research primarily focuses on systemic issues (e.g. surveillance) and professional use (e.g. scientific imaging). Aesthetic theories (e.g. in cinema theory) are valuable but mostly address 20th century technologies. Analyses of current technologies are fragmented and lack a cohesive theory or model.

MACHINE VISION challenges existing research and develops new empirical analyses and a cohesive theory of everyday machine vision. This project is a needed leap in visual aesthetic research. MACHINE VISION will also impact technical R&D on machine vision, enabling the design of technologies that are ethical, just and democratic.

The project is planned to begin in the second half of 2018, and will run until the middle of 2023. I’ll obviously post more as I find out more! For now, here’s a very succinct overview of the project, or you can take a look at this five-page summary of the project, which was part of what I sent the ERC when I applied for the funding.

28. November 2017 by Jill
Categories: Machine Vision | 2 comments
