I gave a talk at the Moral Machines symposium in Helsinki last year, and just heard that a revised version of the talk will be published in an anthology tentatively titled The Ethos of Digital Environments: Technology, Literary Theory and Philosophy. The anthology is edited by Hanna-Riikka Roine and Susanna Lindberg and will be published by Routledge, presumably in 2021 or 2022. Here is an excerpt from my draft of the chapter, where I explore the idea that there might not be that much difference between a neural network that can predict when a human would cry and that involuntary tightness we humans sometimes feel in our chests when we watch a sad movie.

Emotions are often conceived as the determining difference between humans and machines, and indeed, between groups of humans and whatever or whoever they wish to define as non-human. “They don’t have the same feelings we do,” the narrator imagines the wives thinking of the handmaids in Margaret Atwood’s The Handmaid’s Tale (1986, 215); “they don’t seem to feel anything, no pleasure, no pain”, the Terrans remark of the indigenous people they rape and beat in Ursula Le Guin’s The Word for World Is Forest (1972, 18).

“No text-analysis program weeps when it reads the passages in Felix Salten’s Bambi in which Bambi’s mother dies”, Drucker wrote in “Why Distant Reading Isn’t” (2017), as though this ability to weep, to respond emotionally and not just analytically, is our defining difference. A neural network would not weep, it is true, but it could certainly learn to identify texts that would typically make a human weep. How different is that from actually feeling?
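To make that concrete, here is a minimal sketch of such a classifier, using scikit-learn and a tiny, invented training set. The passages, labels and probability it prints are purely illustrative; a real model would need thousands of human-annotated examples. The point is only that “tear-jerking” can be treated as a learnable statistical pattern:

```python
# A toy "would this passage make a reader cry?" classifier.
# The training data below is invented for illustration; a real model
# would need a large corpus of human-annotated passages.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

passages = [
    "His mother was gone. Bambi stood alone in the snow, calling for her.",
    "The friends laughed and ate breakfast together in the spring sunshine.",
    "She held her father's hand as the machines were switched off.",
    "The committee approved the budget after a short discussion.",
]
makes_humans_cry = [1, 0, 1, 0]  # human-assigned labels: 1 = typically brings tears

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(passages, makes_humans_cry)

new_passage = "The old dog lay down at the foot of the empty bed and waited."
probability = model.predict_proba([new_passage])[0][1]
print(f"Estimated probability that a human would weep: {probability:.2f}")
```

The model never weeps; it only learns the statistical fingerprints of passages that have made human readers weep.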

If machines can feel, does that make them human? Should they have rights as people do? In Blade Runner, replicants are subjected to a test designed to reveal their lack of human emotional responses, and are allowed a lifespan of no more than four years so that they do not have time to develop emotions of their own. Once machines feel, humans can also develop feelings for them, as in the many stories where humans fall in love with robots, and as Deckard falls in love with the replicant Rachael in Blade Runner.

One of the ethical arguments against killer robots, or lethal autonomous weapon systems (LAWS), is that they kill without emotion, as Daniel Lim describes in a paper surveying the ethical arguments against LAWS:

LAWS do not have a capacity to have a notion of sacrifice or a capacity to fathom the significance of using force against a human because they don’t feel anything—they have no emotions. It is only because humans can feel the rage and agony that accompanies the killing of humans that they can understand sacrifice and the use of force against a human. Only then can they realize the ‘gravity of the decision’ to kill. (Lim 2019)

Lim disagrees with this argument. Not all people feel the appropriate emotions when killing in battle either, he points out. While many humans may cry when they read of Bambiā€™s motherā€™s death, not all humans will. Can we really separate human from machine based on emotion? We have arrived at a moral impasse. 

Another approach is to use N. Katherine Hayles’s understanding of cognition as something that is common to humans, animals and many technical systems. Hayles defines cognition as “a process that interprets information within contexts that connect it with meaning” (Hayles 2017, 22). This definition covers what a car with a collision avoidance system does when it gathers data from its surroundings and interprets it in order to determine whether a dangerous situation is occurring and, if so, overrides the human driver and takes evasive measures. It also covers a human’s non-conscious cognition as a human body gathers data about its surroundings and adjusts bodily functions accordingly. For instance, when I chop an onion, my eyes sting as the onion releases a chemical irritant into the air, and in response, my eyes’ lachrymal glands release tears.

But this has nothing to do with emotions, you might object; it is simply biomechanics. And humans can cry without being sad. For their artwork Tear Dealer, Polish artists Alicja Rogalska and Lukasz Surowiec set up a temporary shop where people could “produce and sell their tears for approximately €25 per 3ml” (2014). The video installation documenting the process shows red-faced people listening to presumably sad music or a sad story through headphones, weeping, some wailing, apparently genuinely wracked with emotion. Each person then clinically lifts a test tube to the corner of each of their eyes to capture tears. One woman laughs to a friend when comparing how much liquid each of them has captured.
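Both the car and the lachrymal gland arguably fit Hayles’s definition: information from the environment is interpreted in a context that connects it with meaning, and a response follows without any conscious decision. A minimal sketch of the car’s version of this non-conscious cognition might look like the following. The sensor readings, thresholds and function names are invented for illustration, not taken from any real system:

```python
# A hypothetical, highly simplified collision-avoidance loop.
# Real systems fuse radar, lidar and camera data; the values and
# thresholds here are invented for illustration only.

def interpret(distance_m: float, closing_speed_ms: float) -> float:
    """Interpret raw sensor data in context: how many seconds until impact?"""
    if closing_speed_ms <= 0:          # the gap is not shrinking
        return float("inf")
    return distance_m / closing_speed_ms

def collision_avoidance_step(distance_m: float, closing_speed_ms: float) -> str:
    time_to_collision = interpret(distance_m, closing_speed_ms)
    if time_to_collision < 1.5:        # danger: override the human driver
        return "emergency brake"
    elif time_to_collision < 3.0:      # warn, but leave the human in control
        return "warn driver"
    return "do nothing"

# The car in front brakes suddenly: 12 metres ahead, closing at 10 m/s.
print(collision_avoidance_step(distance_m=12.0, closing_speed_ms=10.0))  # -> emergency brake
```

Nothing in this loop feels fear of the crash ahead; it only interprets distance and speed within a context that makes them mean danger.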

Think of a time when you felt a strong urge to cry and you tried to stop yourself. Were the emotions and their symptoms (the constrained chest and throat, the reddening eyes, the hormones flowing through your veins) consciously, rationally chosen? Or were they the result of a form of non-conscious cognition? Or try an experiment: do a YouTube search for a supercut of sad moments from movies, and see how long you can watch it (fullscreen, music on, don’t cheat) before you begin to feel sad, or to feel your chest clench and your eyes ache a little.

Is this also non-conscious cognition? And if it might be considered such, do you think there is a great difference between our sadness in response to a supercut of sad movie moments, and a neural network’s analysis of the same supercut that results in an inference that the video is “sad”? If we accept Hayles’s definition of non-conscious cognition, I am not sure that we can see human tears as particularly different from machine tears.

When people say that machines cannot weep, what they really mean is that machines do not understand the data they process with the emotional depth of a human. The autonomous weapon system registers the presence of an enemy, compares the available data to a list of known threats, perhaps checks whether the target’s behaviour matches patterns typically produced by people who have previously carried out attacks, and on that basis either attacks or does not attack. The autonomous weapon system does not feel fear or rage or remorse, although we could certainly program it to perform some semblance of those emotions.

But what if our emotions, based in our bodies, are no more than technical cognitive processes that might just as well be programmed in a machine? Sara Ahmed writes that “[t]hinking about what emotions do cannot be done without thinking about sweating and the sense of being in a body” (Schmitz and Ahmed 2014, 97). Machines lack bodies of flesh and blood, but they do have a material form. Is there a sweatiness of algorithmic decisions, an embodied cognition of machine learning? We tend to think of AI and machine learning as biased rather than embodied, though scholars like Wendy Chun have pointed out that the materiality of technology shapes what is possible (Chun 2008; 2017). Biases are often blamed on the datasets the machine learning is trained on. If the faces used to train a facial recognition algorithm are mostly white and male, the algorithm will be far better able to identify white men than black women (Buolamwini and Gebru 2018). Reading this through Hayles’s concepts in Unthought, one might conclude that the sweatiness, the embodiment of algorithms is in the entwinement between technical and human cognizers: “networks of non-conscious cognitions between and among the planet’s cognizers are transforming the conditions of life, as human complex adaptive systems become increasingly interdependent upon and entwined with intelligent technologies in cognitive assemblages” (Hayles 2017, 216). The biased datasets are the remnants of sweat, of human embodiment, in the machine learning.
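The kind of disparity Buolamwini and Gebru document can be illustrated with a small, hypothetical audit: given a model’s predictions, compute its accuracy separately for each demographic group. The records and numbers below are invented for illustration and are not their actual data or results:

```python
# A toy per-group accuracy audit in the spirit of Gender Shades.
# The records below are invented for illustration; they are not the
# data or results reported by Buolamwini and Gebru (2018).
from collections import defaultdict

# Each record: (demographic group, true label, model's prediction)
records = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned female", "female", "female"),
    ("darker-skinned female", "female", "male"),    # misclassified
    ("darker-skinned female", "female", "female"),
    ("darker-skinned female", "female", "male"),    # misclassified
]

correct = defaultdict(int)
total = defaultdict(int)
for group, truth, prediction in records:
    total[group] += 1
    correct[group] += int(truth == prediction)

for group in total:
    print(f"{group}: {correct[group] / total[group]:.0%} accuracy")
```

A model trained mostly on lighter-skinned male faces would typically show exactly this kind of gap: the “sweat” of the humans who assembled the dataset is visible in the error rates.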

References

Atwood, Margaret. 1986. The Handmaidā€™s Tale. New York: Houghton Mifflin Harcourt.

Buolamwini, Joy, and Timnit Gebru. 2018. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” In Conference on Fairness, Accountability and Transparency, 77–91. http://proceedings.mlr.press/v81/buolamwini18a.html.

Chun, Wendy Hui Kyong. 2008. “The Enduring Ephemeral, or the Future Is a Memory.” Critical Inquiry 35 (1): 148–71. https://doi.org/10.1086/595632.

Chun, Wendy Hui Kyong. 2017. Updating to Remain the Same: Habitual New Media. Cambridge, MA: MIT Press.

Drucker, Johanna. 2017. “Why Distant Reading Isn’t.” PMLA 132 (3): 628–35. https://doi.org/10.1632/pmla.2017.132.3.628.

Hayles, N. Katherine. 2017. Unthought: The Power of the Cognitive Nonconscious. Chicago: University of Chicago Press.

Le Guin, Ursula. 1972. The Word for World Is Forest. New York: TOR.

Lim, Daniel. 2019. “Killer Robots and Human Dignity.” In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, 6. Honolulu: ACM. http://www.aies-conference.com/wp-content/papers/main/AIES-19_paper_6.pdf.

Rogalska, Alicja, and Lukasz Surowiec. 2014. Tear Dealer. Video documentation, glass vial containing human tears. https://www.works.io/35832/tear-dealer.

Schmitz, Sigrid, and Sara Ahmed. 2014. “Affect/Emotion: Orientation Matters. A Conversation between Sigrid Schmitz and Sara Ahmed.” Freiburger Zeitschrift für GeschlechterStudien 20 (2): 97–108. https://doi.org/10.3224/fzg.v20i2.17137.
