Call for submissions
Workshop dates: 15-17 August 2022

Location: Bergen, Norway
Proposals due: 15 June

The ERC project Machine Vision in Everyday Life invites submissions to a workshop to be held at Solstrand Hotel near Bergen, Norway, on 15-17 August 2022. In this interdisciplinary workshop we will combine qualitative approaches and digital methods to analyse how machine vision is represented in art, science fiction, games, social media and other forms of cultural and aesthetic expression.

The workshop combines qualitative close readings and case studies of machine vision in art, fiction, games and social media, and collaboration on analysing the project’s dataset on machine vision in 500 creative works. The workshop is for scholars at all levels from graduate students to full professors and will result in a special journal issue on the topic. The project will cover the cost of hotel and board for the workshop, and a limited number of travel stipends may be available. 


As visual technologies are increasingly combined with machine learning to interpret and generate images, societies around the globe face practical and ethical questions about how these new possibilities should be evaluated, implemented and governed as they impact people’s everyday life. With their anticipatory and speculative potential, cultural representations of machine vision play a key role in answering these questions.

Our premise is that cultural production – including literature, art, cinema, video games, science fiction, memes, fandom and more – is a rich source for understanding the impact of machine vision technologies on society, as well as their potential future trajectory. What can we learn from how machine vision is represented, applied and discussed in digital art, video games, novels, movies, TV series, fan fiction, electronic literature, popular culture, social media content and other aesthetically or culturally expressive genres?

Qualitative interpretation and/or data analysis

Seeking to experiment with mixed methods approaches, this workshop will ask participants to explore both qualitative interpretations and quantitative analyses of machine vision in art, games, narratives and more, based on the dataset Representations of Machine Vision Technologies in Artworks, Games and Narratives, which documents how machine vision is portrayed in 500 cultural works. The dataset is available as a set of .csv files, is described in a recently published data article, and can also be browsed using a more human-friendly interface at
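As a rough illustration of the kind of quantitative first step the workshop invites, here is a minimal sketch of exploring one of the dataset's .csv files with pandas. The column names and the toy rows below are assumptions for illustration only; the actual file layout is documented in the project's data article.

```python
import io
import pandas as pd

# Toy stand-in for one of the dataset's CSV files. The columns
# ("Work", "Genre", "Technology") are hypothetical; consult the
# published data article for the real schema.
csv_text = """Work,Genre,Technology
Blade Runner 2049,Movie,Holograms
Detroit: Become Human,Video game,Facial recognition
The Circle,Novel,Surveillance cameras
Her Story,Video game,Webcams
"""

works = pd.read_csv(io.StringIO(csv_text))

# A typical exploratory step: count how often each machine vision
# technology appears across the documented works.
counts = works["Technology"].value_counts()
print(counts.to_dict())
```

In practice one would call `pd.read_csv()` on the downloaded dataset files instead of the inline string, and such counts could then feed back into qualitative close readings of the individual works.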

Participants are invited either to present work-in-progress towards individually or co-authored papers on the topic of machine vision and its cultural representation, or to pitch and discuss strategies for analysing the dataset in collaboration with other workshop participants. In either case, workshop hours will be divided between short presentations, feedback sessions, and individual or collaborative data analysis and/or writing. The workshop has two goals: to generate new research from the project’s dataset, and to experiment with mixed methods approaches to digital humanities data, combining qualitative methods with data science and strengthening interactions between researchers who use different methods.

The workshop will be held at Solstrand Hotel, with delicious meals, beautiful fjord views, plenty of nooks and crannies for writing and group work, and indoor and outdoor bathing opportunities.

Participants will be invited to submit papers developed in the workshop to a special journal issue that is currently under development.

How to apply

Please send an email to by June 15, 2022 with a few sentences about your background and either a 250-word abstract for the paper you would like to develop at the workshop, or an idea for how you’d like to work with the dataset and a short explanation of why you’re interested in participating in the workshop. We are also open to participants who would like to analyse other relevant datasets or materials, or who have general skills in data science and would like experience applying humanities approaches to digital methods. Travel scholarships may be available for a limited number of applicants.

Funding: The workshop is funded by the University of Bergen and the project Machine Vision in Everyday Life: Playful Interactions with Visual Technologies in Digital Art, Games, Narratives and Social Media, which has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771800).
