Updates on algorithms and society talks

I’ve given a few more versions of the “algorithms and society” talks from this spring. You can still see the videos of those talks, but here are a few links to new material I’ve woven into them:

Social credit in China – this story by the Australian Broadcasting Corporation paints a vivid picture of what it might be like to live with this system. It’s hard to know exactly what is currently fact and what is conjecture.

Ray Serrato’s Twitter thread about YouTube recommending fake news about Chemnitz, and the New York Times article detailing the issue.

19. September 2018 by Jill
Categories: Algorithmic bias | 1 comment

Generating portraits from DNA: Heather Dewey-Hagborg’s Probably Chelsea

Did you know you can generate a portrait of a person’s face based on a sample of their DNA? The thing is, despite companies selling this service to the police to help them identify suspects, it’s not really that accurate. That lack of precision is at the heart of Heather Dewey-Hagborg’s work Probably Chelsea, a display of 30 masks showing 30 possible portraits of Chelsea Manning based on a sample of her DNA that she mailed to the artist from prison. The work is showing at Kunsthall 3.14 here in Bergen until the end of September.

Many masks resembling human faces hang from the ceiling in an art gallery.

Continue Reading →

11. September 2018 by Jill
Categories: Digital Art, Machine Vision, Visualise me | 1 comment

My ERC interview: the full story

It seems more and more research funding is awarded in a two-step process, where applicants who make it to the second round are interviewed by the panel before the final decisions are made. I had never done this kind of interview before I went to Brussels last October, and was quite nervous. I must have done OK, because I was awarded the grant, and my ERC Consolidator project, Machine Vision in Everyday Life: Playful Interactions with Visual Technologies in Digital Art, Games, Narratives and Social Media, officially started on August 1! Hooray!  Continue Reading →

22. August 2018 by Jill
Categories: Academia | Leave a comment

The god trick and the idea of infinite, technological vision

When I was at the INDVIL workshop about data visualisation on Lesbos a couple of weeks ago, everybody kept citing Donna Haraway. “It’s the ‘god trick’ again,” they’d say, referring to Haraway’s 1988 paper on Situated Knowledges. In it, she uses vision as a metaphor for the way science has tended to imagine knowledge about the world. Continue Reading →

21. June 2018 by Jill
Categories: Machine Vision | Leave a comment

Should society be governed by algorithms? Two talks and seven books

Algorithms, big data and machine learning matter more and more to our society, and are soon being used in every sector: in schools, the justice system, the police, the health service and more. We need more knowledge and public debate about this topic, and I was glad to be able to give two talks about it this past month, one long and one short – and you can watch the videos here if you like! Continue Reading →

23. April 2018 by Jill
Categories: Algorithmic bias | Leave a comment

Best Guess for this Image: Brassiere (The sexist, commercialised gaze of image recognition algorithms)

Did you know the iPhone will search your photos for brassieres and breasts, but not for shoulders, knees and toes? Nor for boxers and underpants, for that matter. “Brassiere” seems to be a codeword for cleavage and tits. Continue Reading →

28. March 2018 by Jill
Categories: Machine Vision, Visualise me | 1 comment

My project on machine vision will be funded by the ERC!

Amazing news today: my ERC Consolidator project is going to be funded! This is huge news: it’s a €2 million grant that will allow me to build a research team to work for five years to understand how machine vision affects our everyday understanding of ourselves and our world.

Three images showing examples of machine vision: Vertov's kinoeye, a game that simulates surveillance, Spectacles for Snapchat.

Here is the short summary of what the project will do:

In the last decade, machine vision has become part of the everyday life of ordinary people. Smartphones have advanced image manipulation capabilities, social media use image recognition algorithms to sort and filter visual content, and games, narratives and art increasingly represent and use machine vision techniques such as facial recognition algorithms, eye-tracking and virtual reality.

The ubiquity of machine vision in ordinary people’s lives marks a qualitative shift: once-theoretical questions are now immediately relevant to the lived experience of ordinary people.

MACHINE VISION will develop a theory of how everyday machine vision affects the way ordinary people understand themselves and their world through 1) analyses of digital art, games and narratives that use machine vision as theme or interface, and 2) ethnographic studies of users of consumer-grade machine vision apps in social media and personal communication. Three main research questions address 1) new kinds of agency and subjectivity; 2) visual data as malleable; 3) values and biases.

MACHINE VISION fills a research gap on the cultural, aesthetic and ethical effects of machine vision. Current research on machine vision is skewed, with extensive computer science research and rapid development and adaptation of new technologies. Cultural research primarily focuses on systemic issues (e.g. surveillance) and professional use (e.g. scientific imaging). Aesthetic theories (e.g. in cinema theory) are valuable but mostly address 20th century technologies. Analyses of current technologies are fragmented and lack a cohesive theory or model.

MACHINE VISION challenges existing research and develops new empirical analyses and a cohesive theory of everyday machine vision. This project is a needed leap in visual aesthetic research. MACHINE VISION will also impact technical R&D on machine vision, enabling the design of technologies that are ethical, just and democratic.

The project is planned to begin in the second half of 2018, and will run until the middle of 2023. I’ll obviously post more as I find out more! For now, here’s a very succinct overview of the project, or you can take a look at this five-page summary of the project, which was part of what I sent the ERC when I applied for the funding.

28. November 2017 by Jill
Categories: Machine Vision | 2 comments

Hand signs on musical.ly = emoji for video

drawings of a musical.ly user using hand signs

You know how we add emoji to texts? In a face-to-face conversation, we don’t communicate simply with words; we also use facial expressions, tone of voice, gestures and body language, and sometimes touch. Emoji are pictograms that let us express some of these things in a textual medium. I think that as social media become more video-based, we’re going to see new kinds of pictograms that do the same work as emoji do in text, but that will work for video.

I wrote a paper about this that was just published in Social Media + Society, an open access journal that has published some really fabulous papers in social media and internet studies. It’s called Hand Signs for Lip-syncing: The Emergence of a Gestural Language on Musical.ly as a Video-Based Equivalent to Emoji. As you might have guessed, it argues that the hand signs lip-syncers on musical.ly use are doing what emoji do for text – but in video.

Musical.ly is super popular with tweens and teens, but for those of you not in the know, here is an example of how the hand signs work on musical.ly.

Musical.ly has become a pretty diverse video-sharing app, but it started as a lip-syncing app, and lip-syncing is still a major part of it. You record 15-second videos of yourself singing along to a tune you’ve picked from the app’s library. You can add filters and special effects, but you can’t add text or your own voice.

I think the fact that the modalities are limited – you can have video but no voice or text – leads to the development of a pictogram to make up for that limitation. That’s exactly what happened with text-based communication. Emoticons came early, and were standardised as emoji 🙂 after a while.

Hand signs on musical.ly are pretty well defined. Looking at videos or tutorials on YouTube, you’ll see that many signs are quite standard. They’re usually made with just one hand, since the camera is held in the other, and camera movements are often important too, though more as a dance beat than as a unit of meaning. Here are the hand signs used by one lip-syncer to perform a 15-second sample from the song “Too Good” by Drake and Rihanna. First, she performs the words “I’m way too good to you,” using individual signs for “too”, “good”, “to” and “you”.

drawings of a musical.ly user using hand signs

The next words are “You take my love for granted/I just don’t understand it.” This is harder to translate into signs word for word, so the lip-syncer interprets it in just three signs, pointing to indicate “you”, shaping her fingers into half of a heart for “love”, and pointing to her head for “understand”.

drawings of a musical.ly user using hand signs

Looking at a lot of tutorials on YouTube (I love Nigeria Blessings’ tutorial) and at a lot of individual lip-syncing videos, I came up with a very incomplete list of some common signs used on musical.ly:

In my paper I talk about how these hand signs are similar to the codified gestures used in early oratory and in theatre. This art of gesture is called chironomia, and there are 17th and 18th century books explaining it in detail. The drawings are fascinating:

I think it’s important to think of the hand signs as performance in the theatrical or musical sense, not in the more generalised sense that Goffman used as a metaphor, where all social interaction is “performative”. No, these are literal performances, interpretations of a script for an audience. That’s important, because without realising it, we might think the hand signs are just redundant: after all, they’re just repeating the same things that are said in the lyrics of the song, but using signs. When we think of the signs as part of a performance, though, we realise that they’re an interpretation, not simply a repetition. Each muser uses hand signs slightly differently.

And those hand signs aren’t easy. Just look at Baby Ariel, who is very popular on musical.ly, trying to teach her mother to lip-sync. Or look at me in my Snapchat Research story, trying to explain hand gestures on musical.ly just as I was starting to write the paper that was published this week:

The full paper, which was finally published after two rounds of Revise & Resubmit (it’s way better now), is open access, so it’s free for anyone to read.

Oh, and sweethearts, if you feel like tweeting a link to the paper, it ups my Altmetrics. That makes the paper more visible. How about we all tweet each other’s papers and we’ll all be famous?

27. October 2017 by Jill
Categories: social media | Tags: , , , | Leave a comment

I’m a visiting scholar at MIT this semester

I’m on sabbatical from teaching at the University of Bergen this semester, and am spending the autumn here at MIT. Hooray!

It’s a dream opportunity to get to hang out with so many fascinating scholars. I’m at Comparative Media Studies/Writing, where William Uricchio has done work on algorithmic images that meshes beautifully with my machine vision project plans, and where a lot of the other research is also very relevant to my interests. I love being able to see old friends like Nick Montfort, and I look forward to making new friends and catching up with old conference buddies. And just looking at the various event calendars makes me dizzy to think of all the ideas I’ll get to learn about.

Nancy Baym and Tarleton Gillespie at Microsoft Research’s Social Media Collective have also invited me to attend their weekly meetings, and the couple of meetings I’ve been at so far have been really inspiring. On Tuesday I got to hear Ysabel Gerrard speaking about her summer project, where she used Tumblr, Pinterest and Instagram’s recommendation engines to find content about eating disorders that the platforms have ostensibly banned. You can’t search for eating disorder-related hashtags, but there are other ways to find the content, and if you look at that kind of content, the platforms offer you more, in quite jarring ways. Nancy tweeted this screenshot from one of Ysabel’s slides – “Ideas you might love” is maybe not the best introduction to the themes listed…

Thinking about the ways people work around censorship could clearly be applied to many other groups, both countercultures that we (and I know “we” is a slippery term) may want to protect and criminals we may want to stop. There are some ethical issues to work out here – but the methodology of using the platforms’ recommendation systems to find content is certainly powerful.

Yesterday I dropped by the 4S conference: the Society for Social Studies of Science. It’s my first time at one of these conferences, and it’s big, with lots of parallel sessions and lots of people. I could only attend one day, but it was great to get a taste of it. I snapchatted bits of the sessions I attended, if you’re interested.

Going abroad on a sabbatical means dealing with a lot of practical details, and we’ve spent a lot of time just getting things organised. We’re actually living in Providence, which is an hour’s train ride away. Scott is affiliated with Brown, and we thought Providence might be a more livable place to be. It was pretty complicated just getting the kids registered for school – they needed extra vaccinations, since Norway has a different schedule, and they had to have a language test and then they weren’t assigned to the school three blocks from our house but will be bussed to a school across town. School doesn’t even start until September 5, so Scott and I are still taking turns spending time with the kids and doing work. We’re also trying to figure out how to organize child care for the late afternoon and early evening seminars and talks that seem to be standard in the US. Why does so little happen during normal work hours? Or, to be more precise, during the hours of the day when kids are in school? I’m very happy that Microsoft Research at least seems to schedule their meetings for the day time, and a few events at MIT are during the day. I suppose it allows people who are working elsewhere to attend, which is good, but it makes it hard for parents.

I’ll share more of my sabbatical experiences as I get more into the groove here. Do let me know if there are events or people around here that I should know about!

02. September 2017 by Jill
Categories: Uncategorized | 3 comments

Visa approved

I’m going to be spending next semester as a visiting scholar at MIT’s Department of Comparative Media Studies, and there are a lot of practical things to organize. We have rented a flat there, but still need to rent out our place at home (anyone need a place in Bergen from August to December?). I’ve done the paperwork for bringing Norwegian universal health insurance with us to the US, and still have a few other forms to fill out for taxes. I think we can’t do anything about the kids’ schools before we get there.

But today’s big task was going to the US embassy in Oslo to apply for a visa.

Stamp on my DS-2019

Notes of interest about visiting the US embassy:

  1. They’ll store your phone and other small items in a box at the gate, but no large items or laptops.
  2. There are no clocks on the walls of the waiting room. Rows of chairs face the counters where the embassy employees take your paperwork and then call you up for your interview.
  3. They only let you bring your paperwork with you, nothing else. It was a two-hour wait, and there was no reading material provided except some children’s books. So the room was full of silent people with no phones, staring into space. The lack of phones or newspapers did NOT make them speak to each other.
  4. Luckily I had brought a printout of a paper that needs revising, and they seemed to think it was part of my paperwork, so they didn’t confiscate it. They wouldn’t let me bring my book or even my pencil, but there was a pen chained to a dish at an unused counter, so I borrowed that. I now have a wonderfully marked-up essay that, once I get my computer back, I can hopefully fix in a jiffy after my two hours of paper-based work on it. I was the only person in the waiting room not staring into space.

08. June 2017 by Jill
Categories: Uncategorized | Tags: | Leave a comment
