Thinking about larps for research dissemination about technology and society

I spent some of last week at a wonderful larp (live action roleplaying) camp for kids run by Tidsreiser, and had a great time. I have secretly wanted to try larping since I was a teenager, but there weren’t any local ones, then I didn’t dare try, and then I sort of forgot and just settled into being a boring grownup. Luckily, one of the advantages of having kids is you get to try out new stuff. So after a year of sitting around watching the kids battling and sneaking around the forest with their latex swords, and dropping them off at the Nordic Wizarding Academy (Trolldomsakademiet), I’ve started joining in a bit, and I absolutely love it.

Kids with swords at Eventyrspill last winter.

After chatting with the fascinating game masters and larpwriters at last week’s camp, and trying out some different kinds of larp there, I started thinking about what a great tool larping could be for teaching and research dissemination – perhaps especially in subjects like digital culture, or for our research on the cultural implications of machine vision. One of our main goals is to think through ethical dilemmas: what kinds of technologies do we want? What consequences could these technologies have? What might they lead to? A well-designed larp could give participants a rich opportunity to act out situations that require them to make choices about technology use, or to experience its consequences. This post gathers some of my initial ideas about how to do that, along with links to other larps about technology that people have told me about.

To my delight, when I started talking about this idea, I discovered that two of the larpwriters at the camp, Anita Myhre Andersen and Harald Misje, are also working with the University Museum here at the University of Bergen, which is just relaunching this autumn with a big plan to host more participatory forms of research dissemination. We’re going to meet up after the summer holidays to talk about possibilities.

What might a machine vision larp include?

So what would a larp about machine vision be like? There’d need to be some technology. At a minimum, lots of cameras – surveillance cameras, body cams, smart baby monitors or smart doorbell cameras. Somewhere, somebody watches those images, or someone can gain access to them somehow. Someone can perhaps manipulate, share or alter the images. Perhaps there’s a website that participants could access from their phones with news, in-game blogs and private photo messaging – and perhaps some people might have access to more of this than others, while some might find ways to access “private” images by nefarious means. There might be tools that could (fictionally) analyse people’s emotions, health, attractiveness, mental state and so on, based on the images. Maybe we could adapt some of the scenarios from this speculative design research paper by James Pierce: “Smart Home Security Cameras and Shifting Lines of Creepiness: A Design-Led Inquiry” (Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems).

One of the scenarios in James Pierce’s CHI’19 paper asks how an employer could use information about their nanny’s emotional state. Something like this could probably be fictionalized and used in a larp.

I’m thinking participants would be given roles such as

  • Director of or salesperson for a technology company – perhaps one trying to sell emotion recognition software to the city for use in schools, public surveillance or policing, or trying to sell smart home surveillance systems, or something slightly more outlandish: smart body cams with built-in facial recognition and networked tracking of suspect individuals, marketed as personal protection for young women worried about being raped. Or, even more Black Mirror-ish, optical implants with total recall. Their goal would be to convince people to use their technology.
  • People with roles that allow them to make choices about buying or implementing technology that will affect other people (e.g. politicians, bureaucrats, the chief of police, the principal of a school or university, a shop owner).
  • Groups or individuals with ill intent who (using some game mechanism) can gain access to or alter personal data from the technology, or generate deepfake videos, to do something scary – this could be scammers, an oppressive regime, or something else.
  • Activists who are for or against the technology for various reasons. They could have backstories explaining why they hold strong opinions. (This could lead to interesting protests – have materials available for making banners etc. 🙂)
  • Regular users who experience some specific situation that makes them think about the technology. This could include a teacher, parent or prisoner told to wear a body cam to monitor all their social interactions, or a shop worker who is constantly surveilled.
  • People whose job it is to watch the surveillance feeds, monitor the “smart” facial analysis algorithms etc.

Participants could be told that their character follows a specific ethical framework, such as utilitarianism, care ethics, deontology, ubuntu or Confucianism. (If I were using this in teaching, I’d base it on Charles Ess’s chapter on ethical frameworks in his book Digital Media Ethics.)

Obviously these are all very early ideas, from a professor with very little larping experience (i.e. me), and we may end up doing something completely different.

Other larps and related projects dealing with contemporary technology and ethics

To learn more about what’s out there, I posted a question to the Association of Internet Researchers’ mailing list to see if any fellow internet researchers had experience with using LARPs in connection with research. As usual for questions to the list, I got some great answers, both on and off list. Here are some of the projects, people and books people told me about.

The most developed LARP-based teaching program for universities that I’ve seen so far is Reacting to the Past at Barnard College in New York City. Reacting to the Past is a centre that has developed lots of LARPs for teaching history. They have a system that seems really well thought out for taking games through various levels of development and playtesting, and once a game is very thoroughly tested, they publish it so others can use it in their own teaching. Here are their published LARPs. Their focus is on historical situations, so none of their games seem directly applicable to the emphasis I want on ethical negotiations about possible near futures – except possibly Rage Against the Machine: Technology, Rebellion, and the Industrial Revolution. I’ve filled out the form to request the materials for that game, and am looking forward to seeing how they have set it up.

I’ve also received tips about two different artist-researcher collaborations that have resulted in LARPs. Omsk social club developed a LARP at Somerset House earlier this year, based on research on digital intimacy by Alessandro Gandini and artist/curator Marija Bozinovska Jones. They’re still working on putting documentation online, but you can get some idea of how it worked from this short video:

Secondly, Martin Zeilinger responded to my question to the list to tell me about a series of LARPs developed by Ruth Catlow with Ben Vickers. Martin himself is currently in the early stages of developing a LARP with Ruth about cashless societies, aimed at 15-25 year olds. I found a description of one of Ruth and Ben’s earlier LARPs that explored the excitement about blockchain and tech startups in a workshop called ‘Role Play Your Way to Budgetary Blockchain Bliss’. The LARP was hosted by the Institute of Network Cultures in Amsterdam in 2016, and conveniently for me, they wrote up a blog post about it. This LARP was designed like a hackathon set in a near future, where all the projects that were pitched were about cats, and participants were “assigned a cat-invested persona and the general goal of networking their way into a profitable enterprise for themselves, the cat community, and the hosting institution.” The blog post explains that after the pitches:

The rest of the first day gave chance to the multiplicity of attendees to ask, negotiate, and offer their skills to their favourite projects. It became rapidly clear that the diversity of the audience had different motivations, skills, and ideologies. Each participant performed a part of the complex ecosystem of fintech and start-ups: investors, developers, experts, scholars, and naive enthusiasts had the difficult task to sort out differences in order to build up lasting and successful alliances. Everyone had something to invest (time, energy, money, venues, a van full of cats) and something to get in return (profits, cat life improvement, patents, philanthropy aspirations).

It’d be pretty straightforward to copy this structure and make a kind of speculative startup hackathon for new machine vision-related technologies – and that could certainly lead to many ethical debates. I can imagine something like that working well for teaching, and being reasonably easy to carry out. I’d really like to make something more narrative, though.

Netprovs are another genre that has a lot in common with larps, and which we’ve been involved with in our research group. Netprov is sort of an online, written version of a larp that lasts for a day, a week or several months. Rob Wittig wrote his MA thesis here on netprov, and he and his collaborator Mark Marino have explicitly compared netprov to larps. Scott Rettberg is planning a machine vision-themed netprov in our course DIKULT203: Electronic Literature this autumn, which should be fun, and which may provide good ideas for a larp on the topic as well.

Another thread to consider is design fiction, design ethnography and user enactments. A really interesting paper by Michael Warren Skirpan, Jacqueline Cameron and Tom Yeh describes an “immersive theater experience” called “Quantified Self”, designed to support audience reflection about ethical uses of personal data. They used a script and professional actors, asked audience members to share their social media data, and set up a number of apps and games that used that data in various ways. So this isn’t quite a larp, because the audience aren’t really actors driving the narrative: they remain audience members, though participatory ones.

The show had an overarching narrative following an ethical conflict within a famous tech company, DesignCraft. Immediately upon signing up for the show, participants were invited to a party for their supposed friend, Amelia, who was a star employee at DesignCraft. As the story unravels, they learn that Amelia is an experimental AI created using their personal data, who, herself, has begun grappling with the ethics of how the company uses her and its vast trove of data.

Within this broader plot arc, main characters were written to offer contrasting perspectives on our issues. Don, the CEO of DesignCraft, represented a business and innovation perspective. Lily, the chief data scientist of DesignCraft, held scientific and humanitarian views on the possibilities of Big Data while struggling with some privacy concerns. Felicia, an ex-DesignCraft employee, offered a critical lens of technology infiltrating and destroying the best parts of human relations. Evan, a hacker, saw technology as an opportunity for exploitation and intended to similarly use it to exploit DesignCraft. Amelia, a humanoid AI, struggled with the idea of being merely an instrument for technology and the artificiality of knowing people only through data. Felicity, an FBI agent, believed data could support a more secure society. Bo, the chief marketing officer at DesignCraft, felt strongly that technology was entertaining, useful, and enjoyable and was willing to make this trade-off for any privacy concern. Finally, Veronica, a reporter, was concerned about the politics and intentions of the companies working with everyone’s personal data.


Skirpan, Michael Warren, Jacqueline Cameron, and Tom Yeh. “More Than a Show: Using Personalized Immersive Theater to Educate and Engage the Public in Technology Ethics.” In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 464:1–464:13. CHI ’18. New York, NY, USA: ACM, 2018. https://doi.org/10.1145/3173574.3174038.

But what I would really like is to develop something both more participatory than the immersive theater example, and more narrative than the artist-led larps, with events and conflicts and problems to solve. That’s probably quite ambitious and difficult. I am very much looking forward to sitting down with Anita and Harald, who have lots of experience (good thing, since I have practically none).

Harald works in a cubicle by day and plays the lord of the dark elves and various other parts on weekends. I’m hoping he and Anita will have good ideas for villains for the machine vision larp.

(And here’s a rather fun NRK documentary from 2011 about Anita – the kids’ larp in the forest is still going strong, and we played a version of the murder master at the camp last week.)

Here are some names of people doing relevant work that people have suggested:

I was also recommended the following books:

  • Stenros, Jaakko and Markus Montola. Nordic Larp. Stockholm: Fëa Livia, 2010.
  • Simkins, David. The Arts of Larp: Design, Literacy, Learning and Community in Live-Action Role Play. Jefferson, North Carolina: McFarland & Company, 2015.

If you know about larps about technology and society, or that are used for research dissemination or teaching, please leave a comment! I would love to know more!

02. July 2019 by Jill
Categories: Uncategorized

Lesson plan: DIKULT103 29.01.2019 – Video Game Aesthetics and Orientalism in Games

Readings: Understanding Video Games, chapter 5; Woke Gaming, chapter 6 (Kristin Bezio, “The Perpetual Crusade: Rise of the Tomb Raider, Religious Extremism, and the Problem of Empire”, pp. 119–138).

Learning goals: After doing the reading, taking the quiz and attending the class, students can

  • Explain how video game aesthetics incorporate game mechanics as well as visuals, sounds, etc. 
  • Use some of the terms in Understanding Video Games chapter 5 to describe games
  • Explain Said’s concept of orientalism and discuss it in relation to video games

18. February 2019 by Jill
Categories: Uncategorized

Hostile machine vision

One of our goals in MACHINE VISION is to analyse how machine vision is represented in art, stories, games and popular culture. A really common trope is showing machine vision as hostile and as dangerous to humans. Machine vision is used as an effective visual metaphor for something alien that threatens us.

My eight-year-old and I watched Ralph Breaks the Internet last weekend. I found it surprisingly satisfying – I had been expecting something inane like that emoji movie, but the story was quite engaging, with an excellent exploration of the bad effects of neediness in friendships. But my research brain switched on in the computer virus scene towards the end of the movie, because we see “through the eyes of the virus”. Here is a shot of the virus, depicted as a dark swooshing creature with a single red eye:


And here you see the camera switch to what the virus sees. It is an “insecurity virus” that scans for “insecurities” (such as Vanellope’s anxious glitching and Ralph’s fear of losing Vanellope) and replicates them.

And of course it uses easily-recognisable visual cues that signify “machine vision” – a computer is seeing here.

I noticed an almost identical use of this visual metaphor on another visit to the cinema with the kids, though this time in an ad from the Australian Cancer Council. Here, the sun is presented as seeing human skin like an alien.

The way humans see skin is not the same way the sun sees skin. And each time the sun sees your skin, when the UV is 3 or above, it’s doing damage beneath the surface. It builds up, until one day, it causes a mutation in your DNA, which can turn to skin cancer. Don’t let the sun see your DNA. Defend yourself.

The visuals are different. While Ralph Breaks the Internet uses an overlay of data, the ad shifts from a “human” camera angle to zooming in, black and white, fading around the sides of the image, a shaky camera, and then appears to penetrate the skin to show what we assume is the DNA mutating. The sound effects also suggest something dangerous, perhaps mechanical.

Certainly machine vision isn’t always represented as hostile. It’s often presented as useful, or protective, or simply as a tool. This year we are going to be tracking different representations and simulations of machine vision in order to sort through the different ways our culture sees machine vision. Hostile is definitely one of those ways.

If you have suggestions for other examples we should look at, please leave a comment and tell us about them!

11. February 2019 by Jill
Categories: Machine Vision

Seeing brainwaves

Last week I was in London, where I visited Pierre Huyghe’s exhibition Uumwelt at the Serpentine Gallery. You walk in, and there are flies in the air, flies and a large screen showing images flickering past, fast. The images are generated by a neural network and are reconstructions of images humans have looked at, but that the neural network hasn’t had direct access to – they are generated based on brainwave activity in the human subjects.

The images flicker past in bursts, fast fast fast fast fast slow fast fast fast, again and again, never resting. Jason Farago describes the rhythm as the machine’s “endless frantic attempts to render human thoughts into visual form”, and frantic describes it well, but it’s a nonhuman frantic, a mechanical frantic that doesn’t seem harried. It’s systematic, mechanical, but never resting, never quite sure of itself but trying again and again. I think (though I’m not sure) that this is an artefact of the fMRI scanning or the processing of the neural network that Huyghe has chosen to retain, rather than something Huyghe has introduced.

Huyghe uses technology from Yukiyasu Kamitani’s lab at Kyoto University. A gif Kamitani posted to Twitter gives a glimpse into how the system uses existing photographs as starting points for figuring out what the fMRI data might mean – the images that flicker by on the right hand side sometimes have background features like grass or a horizon line that is not present in the left image (the image shown to the human). Here is a YouTube version of the gif he tweeted:

The images and even the flickering rhythms of the Kamitani Lab video are really quite close to Huyghe’s Uumwelt. At the exhibition I thought perhaps the artist had added a lot to the images, used filters or altered colours or something, but I think he actually just left the images pretty much as the neural network generated them. Here’s a short video from one of the other large screens in Uumwelt – there were several rooms in the exhibition, each with a large screen and flies. Sections of paint on the walls of the gallery were sanded down to show layers of old paint, leaving large patterns that at first glance looked like mould.

The neural network Kamitani’s lab uses has a training set of images (photographs of owls and tigers and beaches and so on) which have been viewed by humans who were hooked up to fMRI, so the system knows the patterns of brain activity that are associated with each of the training images. Then a human is shown a new image that the system doesn’t already know, and the system tries to figure out what that image looks like by combining features of the images it knows produce similar brain activity. Or to be more precise, “The reconstruction algorithm starts from a random image and iteratively optimize the pixel values so that the DNN [DNN=deep neural network] features of the input image become similar to those decoded from brain activity across multiple DNN layers” (Shen et al. 2017). Looking at the lab’s video and at Uumwelt, I suspect the neural network has seen a lot of photos of puppy dogs.
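As a rough illustration of that quoted optimisation loop (and emphatically not the lab’s actual code), here is a minimal sketch in Python. A fixed random linear map stands in for the deep network’s feature extractor, and a random vector stands in for the features decoded from brain activity – the sizes, learning rate and feature map are all made up for illustration:

```python
import numpy as np

# Hypothetical stand-in for a DNN feature extractor: a fixed random
# linear map from "pixel" space to "feature" space. The real method
# uses features from multiple layers of a trained deep network.
rng = np.random.default_rng(0)
W = rng.normal(size=(64, 256))  # 256 "pixels" -> 64 "features"

def features(image):
    return W @ image

# Stand-in for features decoded from fMRI data: here we just compute
# them from a known target image, which the real system never sees.
target_image = rng.normal(size=256)
target_features = features(target_image)

# Start from a random image and iteratively optimise its pixel values
# so that its features approach the decoded features.
image = rng.normal(size=256)
lr = 0.001
losses = []
for _ in range(200):
    diff = features(image) - target_features
    losses.append(float(diff @ diff))        # squared feature error
    grad = 2 * W.T @ diff                    # gradient w.r.t. pixels
    image -= lr * grad                       # gradient descent step

# The feature error shrinks as the reconstruction's features align
# with the decoded target features.
```

In the real pipeline the features come from many layers of a trained deep network, the target features are decoded from fMRI measurements, and a deep generator network constrains the result to look like a natural image – this sketch only shows the iterative feature-matching idea.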

I’ve read a few of the Kamitani Lab’s papers, and as far as I’ve seen, they don’t really discuss how they conceive of vision in their research. I mean, what exactly does the brain activity correspond to? Yes, when we look at an image, our brain reacts in ways that deep neural networks can use as data to reconstruct an image that has some similarities with the image we looked at. But when we look at an image, is our brain really reacting to the pixels? Or are we instead imagining a puppy dog or an owl or whatever? I would imagine that if I look at an image of somebody I love my brain activity will be rather different than if I look at an image of somebody I hate. How would Kamitani’s team deal with that? Is that data even visual?

Kamitani’s lab also tried just asking people to imagine an image they had previously been shown. To help them remember the image, they were “asked to relate words and visual images so that they can remember visual images from word cues” (Shen et al. 2017). As you can see below, it’s pretty hard to tell the difference between a subject’s remembered swan and their remembered aeroplane. I wonder if they were really remembering the image at all, or just thinking of the concept or thing itself.


Figure 4 in Shen, Horikawa, Majima and Kamitani’s pre-print Deep image reconstruction from human brain activity (2017), showing the reconstruction of images that humans imagined.

Uumwelt means “environment” or “world around us” in German, though Huyghe has given it an extra u at the start, in what Farago calls a “stutter” that matches the rhythms of the videos, though I had thought of it as more of a negator, an “un-environment”. Huyghe is known for his environmental art, where elements of the installation work together in an ecosystem, and of course the introduction of flies to Uumwelt is a way of combining the organic with the machine. Sensors detect the movements of the flies, as well as temperature and other data that relates to the movement of humans and flies through the gallery, and this influences the display of images. The docent I spoke with said she hadn’t noticed any difference in the speed or kinds of images displayed, but that the videos seemed to move from screen to screen, or a new set of videos that hadn’t been shown for a while would pop up from time to time. The exact nature of the interaction wasn’t clear. Perhaps the concept is more important than the actuality of it.

The flies apparently are born and die within the gallery, living their short lives entirely within the artwork. They are fed by the people working at the gallery, and appear as happy as flies usually appear, clearly attracted to the light of the videos.

Dead flies are scattered on the floors. They have no agency in this Uumwelt. At least none that affects the machines.

28. November 2018 by Jill
Categories: Digital Art, Machine Vision

Updates on algorithms and society talks

I’ve given a few more versions of the “algorithms and society” talks from this spring. You can still see the videos of those talks, but here are a few links to new material I’ve woven into them:

Social credit in China – this story by the Australian Broadcasting Corporation paints a vivid picture of what it might be like to live with this system. It’s hard to know exactly what is currently fact and what is conjecture.

Ray Serrato’s Twitter thread about YouTube recommending fake news about Chemnitz, and the New York Times article detailing the issue.

19. September 2018 by Jill
Categories: Algorithmic bias

Generating portraits from DNA: Heather Dewey-Hagborg’s Becoming Chelsea

Did you know you can generate a portrait of a person’s face based on a sample of their DNA? The thing is, despite companies selling this service to the police to help them identify suspects, it’s not really that accurate. That lack of precision is at the heart of Heather Dewey-Hagborg’s work Probably Chelsea, a display of 30 masks showing 30 possible portraits of Chelsea Manning based on a sample of her DNA that she mailed to the artist from prison. The work is showing at Kunsthall 3.14 here in Bergen until the end of September.

Many masks resembling human faces hang from the ceiling in an art gallery.


11. September 2018 by Jill
Categories: Digital Art, Machine Vision, Visualise me

My ERC interview: the full story

It seems more and more research funding is awarded in a two-step process, where applicants who make it to the second round are interviewed by the panel before the final decisions are made. I had never done this kind of interview before I went to Brussels last October, and was quite nervous. I must have done OK, because I was awarded the grant, and my ERC Consolidator project, Machine Vision in Everyday Life: Playful Interactions with Visual Technologies in Digital Art, Games, Narratives and Social Media, officially started on August 1! Hooray!

22. August 2018 by Jill
Categories: Academia

The god trick and the idea of infinite, technological vision

When I was at the INDVIL workshop about data visualisation on Lesbos a couple of weeks ago, everybody kept citing Donna Haraway. “It’s the ‘god trick’ again,” they’d say, referring to Haraway’s 1988 paper on Situated Knowledges. In it, she uses vision as a metaphor for the way science has tended to imagine knowledge about the world.

21. June 2018 by Jill
Categories: Machine Vision

Should society be governed by algorithms? Two talks and seven books

[English summary: info about two recent talks I gave about algorithmic bias in society]

Algorithms, big data and machine learning are having a growing impact on our society, and will soon be used in every sector: schools, the justice system, the police, the health service and more. We need more knowledge and public debate about this topic, and I have been glad to give two talks about it over the past month, one long and one short – and here you can watch the videos if you like!

23. April 2018 by Jill
Categories: Algorithmic bias

Best Guess for this Image: Brassiere (The sexist, commercialised gaze of image recognition algorithms)

Did you know the iPhone will search your photos for brassieres and breasts, but not for shoulders, knees and toes? Or boxers and underpants either for that matter. “Brassiere” seems to be a codeword for cleavage and tits.

28. March 2018 by Jill
Categories: Machine Vision, Visualise me
