This semester I’m teaching a graduate seminar in digital media aesthetics on machine vision, and in today’s class we discussed drone art, using Dziga Vertov’s manifesto from 1923 (“I am kino-eye, I am a mechanical eye. I, a machine, show you the world as only I can see it.”) and Daniel Greene’s article “Drone Vision”, which was published in Surveillance & Society last year. The art works we discussed were James Bridle’s Dronestagram, the anonymous Texts from Drone, and Muse’s VR music video “Revolt”. I thought the class went really well, so I wanted to describe what we did.

A major point in Greene’s article is that Texts from Drone anthropomorphizes drones, whereas Bridle’s piece does not. Vertov certainly anthropomorphizes the camera in his manifesto, and I find this anthropomorphisation very interesting, especially in terms of posthumanism and shared cognition between humans and machines.

Before class, the students had written discussion forum posts discussing one of the art works using Vertov, so we had a place to start our discussion. However, it seemed as though they had mostly used the six page description of Vertov’s work they had read rather than relying on Vertov’s own words (or to be exact, the words of “the council of three”, consisting of Vertov, Elizaveta Svilova and Mikhail Kaufman), so I wanted to focus our attention on the actual manifesto. So I made copies, foraged some highlighters and pens from the department storeroom, and strategically placed them with an assignment around the seminar room table before class. (Here is the same text online, the section titled “The Council of Three”, from page 14 on.)

I wanted the students to mark passages where the movie camera was anthropomorphized or where it was the speaker in the sentence.

Doing this in detail led to really interesting discussions. We talked about passages where it was not clear who the “I” who was speaking was. In one paragraph it’s the human authors of the text (“We affirm the kino-eye”) and in the next it seems human at first (“I make the viewer see…”) but then it’s definitely the camera speaking (“I am kino-eye. I am a builder.”) Then we leave the first person perspective for a while before sliding back into the human subject position: “I promise to drum up a parade of kinoks on Red Square.” And the conclusion positions the camera as secondary to the human: the kino-eye challenges the human eye, and the kinok-editor organises the images produced.

The students noticed that the first stage of anthropomorphising the camera was to refer to it as though it were a slave in need of liberation. Objects do not need to be emancipated. Sentence grammar is put to work too: the camera is allowed to be the subject of the sentence in the middle portion of the manifesto, but is only the object in other sections (“The camera ‘carries’ the film viewer…”, “I have placed you…”).

I will definitely plan this kind of directed reading in future classes – it was very productive.

Next we looked at a few minutes of Vertov’s Man With a Movie Camera, and then moved on to Greene’s article about drone vision. The students had already written their discussion forum posts about the different art works, so they talked about each of them, this time paying more attention to the question of the anthropomorphisation of the drones, drawing on the Vertov we had just read.

An important point in Greene’s argument is that Bridle’s Dronestagram in some ways buys into the military-industrial complex’s portrayal of drones as objective and precise, making war “smart”. The bloodless images are bland enough to be displayed on a coffee shop wall. The piece aims to subvert our understanding of drone warfare, but instead makes us empathise with the drone itself, or the drone pilot, not with the victims.

“Bridle mistakes the discourse of drone vision, the story of seamless, imperial visual supremacy, for its operation,” Greene writes (page 241). In fact, though, Greene argues, by trying to let the viewer occupy the drone’s eye view, we “embrace the discourse of drone vision, rather than the work of it” (page 242).

Greene contrasts this apparent bloodless objectivity to the very different Texts from Drone, a collection of memes submitted to a Tumblr. The Tumblr is now gone, but it can still be viewed through the Internet Archive’s Wayback Machine. Here, we don’t see as the drone sees; instead we face it as though it were a person. The drone even has a name: D-Ron. Many of the memes show it responding to texts from Obama:

But as the meme develops, other people pose questions to the drone.

D-Ron has a clear personality. It not only enjoys bombing people, it finds it funny. It doesn’t try to rationalise its slaughter as “just”, on the contrary, it enjoys “collatoral damage”. D-Ron speaks in the language of the internet, and it’s not just the grammar and spelling: its attitude is also like something you’d find on 9gag or Reddit.

Daniel Greene writes,

The real power of Texts from Drone is the degree to which D-Ron himself is made an actor in the work of Empire, rather than a mute instrument of its policy. He celebrates, without any pretense of military gravitas or regret over mistaken targets, his role as global police. (page 244).

We also see the drone as “an ally, not an instrument,” Greene writes (244), with goals and language clearly distinct from Obama’s.

The third work of drone art we looked at was Muse’s VR music video “Revolt” (from their album Drones), which has been released on the app VRSE and can be viewed using a cheap Google Cardboard headset. A couple of the students had already seen the video (we built Google Cardboards a few weeks ago), so for the rest I hooked my phone up to the projector and showed them the non-stereoscopic version. You can get an idea of the experience by viewing it as a 360° video on YouTube, where, if you view it in Chrome, you can click and drag to see “behind you”. It’s a far better experience using Google Cardboard though.

This video is entirely focalised from the point of view of a drone. The video begins with startup code, and ends when the drone is shot down. You see everything through the wide-angle lens of a surveillance drone, and at some points in the video, information about enemies or targets (the women charging the police officers) and assets (the police officers) is overlaid on the image. At one point (about four minutes into the video), one of the musicians even kicks the drone, apparently breaking it.

We had a great discussion about how the viewer is really positioned here. The lyrics tell us “You’re not a drone!” (encouraging us to revolt) and yet we are clearly locked into the drone’s perspective. The heroes of the video and the album are clearly the women who revolt, but we see the events from the perspective of a drone who is pitted against them. The students pointed out that we seemed to switch drones at certain points (for instance after being kicked) and so rather than showing an anthropomorphic drone, perhaps the video is simply focalised from the perspective of a drone pilot working at some remote console. Much to discuss here.

Here is the reading list for the class.

