This is a fascinating demo of how massive amounts of photo data can be linked together in a system called Photosynth. (The best bits are from about 3-4 minutes in to the end.) They’ve downloaded all the photos on Flickr of Notre Dame (I bet mine and Scott’s are in there too!) and are able to hook them together – the software actually analyses the images and matches bits that are the same – the round window over the entrance, obviously, but also more diffuse shapes and colours. It then pieces together a navigable space where, if you click on that piece of carving there and another photo in the collection provides a close-up of it, you zoom in to see that close-up.
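Under the hood, systems like Photosynth (which grew out of the Photo Tourism research project) detect distinctive feature points in each photo and match their descriptors across images to decide which photos show the same bit of the building. Here's a toy sketch of that matching idea in Python – the descriptors are random stand-in vectors rather than real SIFT features, and the function name is my own invention, not Photosynth's actual code:

```python
# Toy sketch of cross-photo feature matching (not Photosynth's real code).
# Each image yields descriptor vectors for its keypoints; descriptors from
# one photo are matched to their nearest neighbour in another, keeping only
# unambiguous matches via Lowe's ratio test.
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Return (i, j) pairs where desc_a[i] confidently matches desc_b[j]."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)  # distance to every candidate
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # Ratio test: accept only if the best match is much closer
        # than the runner-up, i.e. the match is unambiguous.
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

rng = np.random.default_rng(0)
desc_b = rng.normal(size=(50, 128))  # "photo B": 50 SIFT-like 128-d descriptors
# "photo A" shares B's first 10 features, seen from a slightly different angle:
desc_a = desc_b[:10] + rng.normal(scale=0.01, size=(10, 128))

matches = match_descriptors(desc_a, desc_b)
print(matches)  # the 10 shared features pair up: [(0, 0), (1, 1), ..., (9, 9)]
```

Once enough photo pairs share matched features like this, the system can estimate where each camera stood and stitch the collection into the navigable space shown in the demo.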

You can try some of this out online at the Photosynth website, too. But it only works on Windows. For now. I’m OK dreaming about it for a while longer, anyway – oh, how do you think it would handle going through the hundreds of photos from our wedding on Flickr and trying to automatically stitch them together? Could it do time as well as place, or is this a technology best suited to studying monuments that are stable over time, like Notre Dame? How could this system handle photos of a child growing into a woman, I wonder? Would it?
(via Martin)
