I was excited to receive my Narrative Clip this spring. It’s the first consumer lifelogging camera: you clip it to your clothes and it silently takes a photo every 30 seconds. Then you connect it to your computer; it uploads the photos to the Narrative server, processes them, and a while later you can view them in the Narrative iPhone app.

“Remember every moment,” the website urges. This device, the website promises, will become your memory. It will capture the moments that are really important to you:

Capture the moment as it happens, without interference. Complement your staged photos of majestic scenery with the intensity of the small moments that matter the most.

[Image: screenshot from the Narrative Clip website]

The assumption that technology can do a better job at capturing human experience than humans can (“without interference”) is a classic example of the dataism I wrote about in my book, and that I talk about in my TEDxBergen talk, “What Can’t We Measure in a Quantified World?” Still, the idea of capturing everyday moments we might not have thought to photograph is intriguing. I was interested to see what my days looked like seen from the perspective of this little camera.

Unfortunately, the Narrative Clip fails utterly at capturing the small moments that matter the most. It doesn’t actually document my life at all. Like all technology, it sees what it sees, not what I see.

Here’s a time-lapse video of 77 photos taken on a Tuesday in Chicago this spring between 15:47 and 17:29. The camera took 166 photos in total; these 77 are the ones the Narrative software judged most interesting. The time-lapse is pretty close to the way you scan through the still images in the app on your phone.

As the camera was snapping these photos, I was walking with my six-year-old daughter from her school to her ballet class. She was hungry and there were no parks or benches on the way, so we sat down on a low fence that happened to face a wall covered with advertisements. You’ll notice that the faces in those ads are the only faces the camera captured, and the algorithms decided they must therefore be the most important photographs, using the ads as the cover image for the sequence. Jessie decided she would like to wear the camera during her ballet class, and we thought perhaps the images would be more exciting – all those mirrors and beautiful dancers, you know? But as you can see, the camera didn’t capture anything very memorable about the ballet class either.

It turns out to be really hard to get the Narrative Clip to capture any images of my children, because the clip is worn at chest height and my children only reach up to my waist. So I tried fastening the clip to my jeans pocket instead. Not much better.

According to the website, the software selects the most significant photos using a “momentification” algorithm.

[Image: Narrative Clip website blurb describing “momentification”]

“Momentification” is the process where all your photos are uploaded to the company’s cloud servers, analyzed, sorted and sent back to you with the system’s best guess as to which photos are the most important. Based on my experience, human faces – whether on billboards or on strangers at the next table in a café – are given high priority, which makes sense. Photos that are similar to each other tend to be left out of the time-lapse views of your “moments”, so that you get variety instead of a hundred photos of the same wall.
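Narrative hasn’t published how momentification actually works, but to make the described behaviour concrete, here is a minimal sketch in Python built on the two patterns I observed: detected faces raise a photo’s score, and near-duplicate photos are suppressed for variety. Everything here – the Photo class, the momentify and similarity functions, the colour-histogram comparison – is a hypothetical illustration, not Narrative’s code.

```python
from dataclasses import dataclass

# Hypothetical sketch only: Narrative has not published its algorithm.
# A real system would use proper face detection and image embeddings;
# a colour histogram stands in here as a crude similarity measure.

@dataclass
class Photo:
    filename: str
    num_faces: int          # e.g. output of a face detector
    histogram: list[float]  # normalised colour histogram (sums to 1.0)

def similarity(a: Photo, b: Photo) -> float:
    """Histogram intersection: 1.0 for identical images, near 0.0 for unrelated ones."""
    return sum(min(x, y) for x, y in zip(a.histogram, b.histogram))

def momentify(photos: list[Photo], keep: int, dup_threshold: float = 0.9) -> list[Photo]:
    """Pick the `keep` highest-scoring photos, skipping near-duplicates."""
    # Faces dominate the score, which is why a wall of advertisement
    # faces outranks a faceless ballet studio.
    ranked = sorted(photos, key=lambda p: p.num_faces, reverse=True)
    selected: list[Photo] = []
    for photo in ranked:
        if any(similarity(photo, s) > dup_threshold for s in selected):
            continue  # too similar to a photo already chosen
        selected.append(photo)
        if len(selected) == keep:
            break
    return selected

# Example: keep the 77 "best" of the 166 photos from the walk above:
#   highlights = momentify(all_photos, keep=77)
```

Under these assumptions, the advertisement wall wins automatically: it contains the only faces in the entire sequence.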

In my case, the momentification highlighted things that were not important to me, like the advertisement on the wall. Just as importantly, the camera itself did not capture the things that are important to me, like my children. The camera’s fixed position on my chest or jeans pocket gives it a very limited view. Perhaps Google Glass would do a better job, as its camera would move with your head. But even if a camera could perfectly capture what my eyes see, would that really capture my experience satisfactorily?

I wrote more about the Narrative Clip in chapter four of my book Seeing Ourselves Through Technology: How We Use Blogs, Selfies and Wearable Devices to See and Shape Ourselves, comparing it to other forms of lifelogging. You can buy the book in print or download it for free.

And if you’ve used the Narrative Clip, I’d love to hear how you experienced it.
