There are a lot of interesting theory papers being published on AI these days, and I don’t just want to read them all; I want to discuss them with people. So I whipped up a list of papers I’d like to read and invited people, thinking it’d probably just be 2-3 people in the CDN meeting room. But then people said they’d like to join online, so I made a Zoom link, and for our first session, which was dedicated to discussing Mark Coeckelbergh’s paper “Technofascism: AI, Big Tech, and the new authoritarianism,” we had about a dozen people in person and another dozen online. It was great!

I had worried it would be difficult to have an actual discussion in the hybrid format, but it worked just fine. Several people had reading suggestions, and when Anja Salzman showed the group some German book recommendations, the online people self-organized a German-language online reading group in the chat – I hope they report back to those of us who are less proficient in German. Here are some of the reading recommendations from Anja and other participants:

Several people thought that fascism and AI could both have been defined more clearly. “I agreed with everything in the article,” one participant said, “but I kept wishing he would make the argument more convincing,” and several people nodded in response. I actually didn’t notice a lack of clarity when reading the article, and found it well written. I’ve been reading up on fascism in the last year, so I recognized most of the material discussed, but found it a useful summary that I think would be a good introduction. And as someone pointed out, it’s not like there’s one singular definition of fascism; it’s a fuzzy concept.

Dom, who happens to have written a book about myth, pointed out that myth is also not very clearly defined. But the treatment of myth in the paper is one of the ideas I found really interesting. Coeckelbergh argues that while classical fascism centers on a myth of a glorious past that must be reborn, technofascism puts emphasis on myths of the future, and in particular the myth that AI is on the verge of being capable of almost anything. I hadn’t thought of the connection between AI hype and fascism before, because the classic definitions of fascism usually emphasise the past. Another angle, which Dianna pointed out was missing from the article, is the way AI is being used to create a (fake) mythic past. Ida called AI imagery “speculative fiction,” which is another good insight.

Another critique of the article was its emphasis on fascism alone. Various people pointed out that an alternate origin story could have been told, for example emphasising the cyberlibertarianism of the United States, or authoritarianism in general. A couple of people commented that the Marxist perspective seemed almost to be avoided – Fuchs was mentioned, but barely, and what about cannibal capitalism, someone asked. Others were pleased that Erich Fromm was included. And several, like me, were particularly interested in Coeckelbergh’s emphasis on how AI is used to manipulate emotion, often through intimacy, and its connection to aesthetics.

Towards the end of the paper, Coeckelbergh writes of the need to tell stories about the potential good uses of AI. But most of his paper is about the bad effects of technology. “What about #metoo? Or the Epstein files?” someone asked. The same technology can be used to tear down oppressive systems and build community. We need not only to tell and retell those good stories alongside the bad ones; we should also work to enable more of these good uses of technology.

A lot more was said, of course. And I’m looking forward to our next session! It’ll be on March 10th from 12 noon until 13:00 Bergen time, and we are reading Ryan Heuser’s paper “Generative Aesthetics: On Formal Stuckness in AI Verse,” published in the Journal of Cultural Analytics. Either join us in person in the glass house at CDN at the University of Bergen, or you can sign up for the Zoom.
