So what does it do to democracy if we can predict the results of an election with 100% accuracy? Nate Silver’s predictions at the NY Times’ Fivethirtyeight.com election poll blog correctly called the results of 50 out of 50 states in this year’s US elections. In 2008, Mashable writes, he only got 49 out of 50 states (Obama won Indiana by 0.1%). Here’s the side by side comparison Mashable showed us, in this tweet from interaction designer Michael Cosentino:
The ability to accurately predict the results of an election, even a relatively simple two-party election as in the US, is quite new. As recently as this summer a blogger for The Economist by the name of “M.D.” wrote that forecasts in general are not very accurate, although “the 2008 election happened to be a good year for the forecast industry, with all 15 forecast models with which I am familiar, save one, predicting Barack Obama’s victory.”
Given Nate Silver’s results this year, I’m guessing that 2008 didn’t just “happen to be” a good year. What’s happening is that we’re getting very, very good at analysing big data. Also, more and more applicable data is available in a format that we can analyse – we’re using Twitter as well as traditional polls.
Interestingly a quick search on Google Scholar found plenty of articles discussing how to make more accurate election forecasts, but I didn’t find anything about whether perfectly accurate election forecasts are something we really want. Nate Silver’s prediction victory is reported in many news outlets (including Norwegian Dagbladet) but the only criticism of the model that I’ve seen is to question its accuracy – please tell me there are people considering what it means for democracy?
What is the point of voting, if we have 99.999% accurate predictions? Is voting an anachronism when we can simply analyse the population as a whole using astounding amounts of data? If predictions match election results perfectly, are they now unbiased? If we know that predictions are extremely accurate, does it change the way we vote, or the kinds of people who turn out to actually vote? Perhaps using prediction software that included the whole population could be more democratic than the current system of actually going to a physical place to vote, which has all kinds of built-in exclusion of some kinds of voices.
But it is also easy to imagine a world where presidents are chosen by algorithms analysing the people’s sentiments and opinions. We’ve stopped using old-fashioned voting, because the software is so much fairer. But what happens when the algorithm is tweaked in favour of one of the candidates? So easy to do. Such profound consequences.
Jesper Juul has become interested in visualisations of genre histories. In a blog post yesterday he both showed the above visualisation of the history of film genres, based on 2000 US films, and linked to his own article on the history of matching tile games. One of his methods in mapping the history of that genre was creating a visual family tree of influences, partly based on Alfred H. Barr’s diagram of “Cubism and Abstract Art” from 1936, which, as Jesper writes, is also criticised by Tufte. Here is Jesper’s family tree of matching tile games:
Jesper and commenters on his post discuss briefly whether visual genre histories could be automatically generated, and I wonder, too, whether we could create something like this for genres of electronic literature from the data in the ELMCIP Knowledge Base of Electronic Literature. We would need an even more complete data set, and to make sure that everything was carefully tagged by genre, but once that was done, it would certainly be possible to generate a visualisation like the film genre visualisation above.
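If we did have such a carefully tagged data set, the data-prep step for a visualisation like the film-genre graphic could be as simple as counting works per genre per year. A minimal sketch of that step, using made-up records rather than real ELMCIP Knowledge Base data:

```python
from collections import Counter

# Hypothetical records standing in for Knowledge Base entries;
# the real data set would need every work tagged by genre.
works = [
    {"title": "Work A", "year": 1995, "genre": "hypertext fiction"},
    {"title": "Work B", "year": 1995, "genre": "hypertext fiction"},
    {"title": "Work C", "year": 2005, "genre": "generative poetry"},
]

# Count works per (year, genre) pair -- the table a stacked
# genre-timeline chart would be drawn from.
counts = Counter((w["year"], w["genre"]) for w in works)
```

From there, a plotting library could draw each genre’s counts over time as one band in a stacked timeline.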
[Video of the conference is also available at http://bambuser.com/v/3110251]
Remediating the Social is the international conference that is the highlight of the ELMCIP project, and we’re excited to be here! We not only brought the whole Electronic Literature Research Group from UiB, we also brought eleven of our e-lit students. Look at everyone who arrived at the airport:
Actually I didn’t arrive with the team, I showed up this morning, still in time for the start of the conference though. The auditorium at the Edinburgh College of Art has stuccoed ceilings and many familiar faces from the e-lit and new media conferences I’ve attended in the last fifteen years, and many new faces I don’t yet know.
Nick Montfort gives the keynote (documentation, and soon, the video, are available on the ELMCIP Knowledge Base), and his theme (as I read it) is fun, and how programming and creating programmable art and literature can and should be fun and accessible. He shows us an early television commercial for the VIC-20 computer, emphasising that it was “not just for games” – it had “a real computer keyboard”. Next, an Australian commercial for the Commodore 64 that I’ve shown students as well. This ad is pretty funny, but notice particularly the wonderful juxtapositions: look at these fun things we could do! We could go to a waterpark – or we could program! Back then, programming was seen (and marketed) as fun, easy and an obvious thing you’d want to do with your computer. The demoscene is another example of this: a wonderful subculture of young people who challenge each other to make their computers generate cool graphics and “demos”. “The Popular Demo” is an example of how the demoscene is about fun, and very different from the apocalyptic aesthetics of most computer games (I’d say there are plenty of exceptions to this – and I’m sure Nick would agree.)
Next, Nick fires up a C64 emulator and proceeds to teach us enough BASIC programming to write the 10 PRINT program that he and a pile of other scholars have just written a book dissecting. I am definitely showing students this bit of the lecture in the next run of my BASIC class. Happily, all this is being videotaped. All this will lead, in a moment, to the program that is also the title of the book Nick and nine co-authors are releasing in a couple of weeks from MIT Press: 10 PRINT CHR$(205.5+RND(1)); : GOTO 10.
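For readers without a C64 emulator handy, here is a rough Python analogue of the one-liner (my own illustration, not anything from the book): 205.5 + RND(1) lands in [205.5, 206.5), CHR$ truncates that to 205 or 206, and those are the two PETSCII diagonal-line characters, so the endless loop weaves a random maze across the screen.

```python
import random

def ten_print(n=256):
    """Emit n random diagonals, mimicking CHR$(205.5 + RND(1))."""
    # Each character is chosen with equal probability, just as
    # truncating 205.5 + RND(1) yields PETSCII 205 or 206.
    return "".join(random.choice("╱╲") for _ in range(n))

print(ten_print())
```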
Scott Rettberg steps in for Rita Raley (who is stuck in lower Manhattan without power or internet or access to the flight she had tickets on) as respondent, and uses the example of collaborative writing as a counter example to Nick’s programming examples. E-lit authors also collaborate and have fun, for instance as Nick and Scott did in Implementation, or more programmatically, as Scott and then many, many other authors rewrote and recoded Nick’s generative poem Taroko Gorge.
Today we’ve gathered electronic literature experts with gallerists, artists and curators from Bergen at Hordaland kunstsenter for a workshop on Curating and Exhibiting Electronic Literature, a first step in preparing to host the ELO conference here in Bergen in 2015. Our goal is to learn more about how to think when we curate exhibitions for the ELO2015 conference, and specifically, to help formulate a call for works of electronic literature for the coming ELO Conference and Exhibition – works suited to the different Bergen venues, highlighting the Bergen electronic art and literature scene.
I’m not going to blog every talk and discussion, but will “liveblog” a few interesting links and discoveries.
Nick Montfort talked, among other things, about Games by the Book, a recent exhibition at the Humanities Library at MIT, where books were presented along with games. Lovely idea for a library exhibition.
Dene Grigar talked us through some of the nine (so far!) exhibitions of electronic literature she’s curated, and Simon Biggs and Mark Daniels skyped in from the ELMCIP conference in Edinburgh, Remediating the Social, to show us what the exhibition there looks like a couple of days before opening. Simon mentioned the challenges of a juried or peer reviewed selection process when you want to create a coherent, curated exhibition. The hurricane on the East coast of the US is also causing trouble. Some art works have not arrived; others, like John Cayley and Daniel C. Howe’s “Common Tongues”, are at the gallery but without their artist, and with phone lines down and no way of contacting John, it’s difficult to make sure the work is presented the way it was intended. Dene talked about how she got the electronic literature exhibition going at the MLA conference in 2012: figuring she could ride the digital humanities wave, she simply grabbed hold of the MLA leadership at the 2011 conference and asked if she could do it. She already owned all the computers and drove all the gear down to the conference (three hours from her home) along with students who worked as docents explaining the works to the audience. She borrowed pedestals from local galleries. MLA provided no funding, so she had to write a lot of grant applications.
Kristian Pedersen is an animator who works with poets to create beautiful moving poetry. He showed us the process behind one of his recent pieces, “Bokstavene” (or “Letters”) which plays upon the very analogue human errors in consulting a microfilm archive.
Søren Pold talks about exhibitions he has done in collaboration with the Roskilde library, including one where readers use glued-together leather-bound books like Wii controllers to generate a poem, Tilfældigvis er skærmen blevet blæk (“Coincidentally, the screen has turned to ink”). After your interaction, it prints the poem on a narrow slip of paper and posts it to a blog. The installation was even more successful when presented at the Roskilde Festival, where the printouts were particularly useful: people took them back to their tents and showed them to friends, and their friends came back and tried the installation out for themselves.
Rui Torres, who works on the Po-ex archive of Portuguese experimental poetry, talks about creating a database, and how the rigidity of the database and its metadata is necessary precisely so we can be creative with it. The interface is a kind of remix: you remix the content of your database through the interfaces, and sometimes the interface might be an exhibition.
Talan Memmott presents the ELMCIP Anthology of Electronic Literature, which is being launched this week at the Remediating the Social conference in Edinburgh. Eighteen works from across Europe – it looks beautifully clean and inviting. The physical edition is on a cute little flash drive and it will also be released online soon.
[Lunch at Pascal. Yum.]
Sissel Lillebostad teaches curators at KHiB. When you work with commissions in public space, you deal with a very present audience. The space is already occupied: by people, their needs, visions, routines, habits, expectations, information. When introducing art into this kind of space, you have to do it by violence. You have to actually conquer the space for art. Time is also important: KORO expects publicly funded public art to last for at least twenty years. The curator’s space is a wish, a vision. It is redefined and created by three unstable structures: the art, its reception and the space itself. All are unforeseeable. A case study: Adsonore by Natasha Barrett, a sound installation in the stairwell of a building at the hospital – I blogged about it when it was first installed in 2003. Adsonore has turned out to be a complete failure, Sissel says (and I remember reading that it frightens the people who use the building), but, she asks, why? It was well-conceptualised, there were so many good things. But the people who work in the building hated it so much that it has been turned off. People responded in two ways. Some said well, it was exciting, kind of lively, but a bit frightening at night when I heard voices at the bottom but couldn’t see anything. But 80% became very hostile to the work in the first few months after it was installed. The space was too much for the work. So Natasha Barrett changed the system to only run during office hours. That didn’t help. The sounds it creates are too intense – every little movement reverberates through the space. Slamming doors, echoes, fragments of conversations from last year, yesterday, ten minutes ago. It’s a text, and it forces itself on anyone who walks through the space. You want to be able to focus completely on art, but in public space you also need to be able to ignore art. You cannot constantly be confronted by art. In a white cube, you can install art that is very demanding. But you can’t do that in a public space.
A panel presents some local organizations and art spaces: Anne Marthe Dyvi presents BEK, Bergen elektroniske kunstsenter, Malin Barth presents Foundation 3,14, and Elisabeth Nesheim presents the Piksel festival, subbing for Gisle Frøysland.
How does one plan exhibitions of electronic literature? Maya Økland from KNIPSU commented that while e-lit in the library seems like a great idea, for an art exhibition in a gallery you would be more interested in the quality of the content and context – the artistic quality – than in the platform or programming language. Anne Marthe Dyvi from BEK suggested commissioning site-specific art, where an artist/author spends an extended time in a specific place to create something particular to that site. She also suggested putting out a call for collaborations between artists and authors. Workshops and hackathons were suggested, much like those the Piksel festival organises. What about residencies, Rod Coover asks? Conveniently, Vilde Andrea Brun from the Bergen Municipality (Bergen kommune) snuck in during this session, and as she works with funding for visual and literary arts she was able to answer: the city funds residencies for international artists, some through USF, and Hordaland county funds some too. So there are definitely opportunities for this.
At this point I had to rush to the preschool to pick up kids, but I’m looking forward to the evening program:
20:30-23:00 Readings, Screenings and Performances at Gallery 3.14
“An Evening of Digital Narratives and Poetry”
Michelle Teran, Roderick Coover, Nick Montfort, Scott Rettberg, Talan Memmott, Kristian Pedersen, Rui Torres
I helped organise a seminar today on what kinds of digital competencies universities should aim to teach students (and lecturers) and I’m meeting so many interesting people across the university. I already knew Knut Melvær from Twitter and his blog, and he’s already blogged about the seminar. I enjoyed hearing Knut Martin Tande, who is vice dean for education at the Faculty of Law, talk enthusiastically about how he’s encouraged many of his colleagues to video record their lectures, and how he uses blogs in his teaching. Torgeir Waterhouse’s talk was engaging as always, Koenraad de Smedt expertly led the final panel debate, and many students and professors had excellent comments and questions.
I wish I had time to blog this properly. I blame it on this twirling girl and her siblings, and on the student papers I had to read this evening, and oh, realising I would much rather swirl with my girl while she’s little than spend all my time thinking about work, interesting as work may be.
The seminar was interesting though. Too many ideas, a little too fragmented, perhaps, but an important topic that we’ll definitely return to at UiB and also one that we’re working on in the DIGIT-committee.
Here is the program. The seminar was recorded by UiB, but the live streaming didn’t work so I’m not sure when it will be available. Livar Bergheim live-streamed it on Bambuser, which works but the quality’s not that great.
One thought that has stuck with me is that the different subjects themselves need to adapt to a digitalised society, both in terms of methods and subject matter. But how does a university ensure that that takes place across the board, in such different fields and with such a wide variety of professors and lecturers? I’ve talked to numerous students from other disciplines (comparative politics, administration science, pedagogy among them) who were working on MA theses on digital topics and felt very unsupported by their home department, with advisors with little knowledge about or interest in anything digital.
Are there systemic barriers to research using digital methods or research on digital issues? One barrier might be lack of tech support or perhaps a lack of knowledge or support from the professors. If most professors are older or simply less excited about a digitalised society, perhaps MA or would-be-PhD students with digital projects are more likely to be turned away or simply receive poor support? You wouldn’t need much of a bias to end up with an anti-digital younger generation of researchers as well.
Obviously many environments at UiB are well-versed in digital methods and know a great deal about research and teaching in today’s thoroughly digital society. One of the benefits of starting this discussion is simply being able to connect more with these people and thus raise awareness throughout the university.
And it seems I ended up blogging it after all, though far less thoroughly than I would have liked.
Update: Here’s a story about the seminar from På Høyden.
This Coursera stuff is dangerous. I’ve been loving my social network analysis class so much that when the teachers mentioned a Python class that had just started (and Python is a useful tool for getting data off the web and into a form you can analyse/visualise) I, um, sort of signed up, even though I already have way too much to do. You see, I know digital culture inside and out but have never really learnt the programming that I would like to. Other than BASIC. I teach BASIC.
I have to say, the Python professors are pretty cute, and totally playing on the stereotypes of comp. sci. geeks with their matching t-shirts and rock, paper, scissors, lizard, Spock games. We’re going to program that for our first week’s assignment. They’ve built a browser-based programming environment for us to practice our Python in, too, which sounds as though it’ll make things easy. One of my favourite things about Lada Adamic’s Social Network Analysis class is the NetLogo models you tinker with to simulate and understand each concept that’s presented in the videos.
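The game logic itself is a tidy first exercise: order the five choices so that each one beats the two that follow it cyclically, and a single modular subtraction decides every round. A sketch of that idea (my own illustration, not the course’s starter code):

```python
# Ordered so each choice beats the two that follow it cyclically:
# rock crushes scissors and lizard, Spock smashes rock and
# scissors, paper covers rock and disproves Spock, and so on.
CHOICES = ["rock", "Spock", "paper", "lizard", "scissors"]

def winner(a, b):
    """Return the winning choice of a and b, or None on a tie."""
    ia, ib = CHOICES.index(a), CHOICES.index(b)
    diff = (ia - ib) % 5
    if diff == 0:
        return None
    # A difference of 1 or 2 (mod 5) means the first choice wins.
    return a if diff in (1, 2) else b
```

For example, winner("Spock", "scissors") returns "Spock", since Spock smashes scissors.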
When I first heard about MOOCs I thought they would be bad replicas of an out-dated teaching paradigm, where the teacher pours knowledge into a video, the student watches the video and then knows (or not), and knowledge is tested in brainless multiple choice quizzes. Instead I’m seeing wonderful hands-on models to play with throughout the video lectures, lots of fellow students in discussions in the forums and homework that involves actually doing something, whether it’s reading a complex sociology paper to figure out the answer to a tricky quiz question or modelling your Facebook network and uploading it for an automatic grader to check whether you could tell the size of its giant component.
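That giant-component homework needs surprisingly little code, which is part of what makes it satisfying. A standard-library sketch (my own illustration; the class presumably provides its own tools) that finds the size of the largest connected component with a depth-first search:

```python
from collections import defaultdict

def giant_component_size(edges, nodes):
    """Return the size of the largest connected component."""
    # Build an undirected adjacency structure from the edge list.
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, best = set(), 0
    for start in nodes:
        if start in seen:
            continue
        # Depth-first search from each as-yet-unvisited node.
        stack, size = [start], 0
        seen.add(start)
        while stack:
            node = stack.pop()
            size += 1
            for neighbour in adj[node]:
                if neighbour not in seen:
                    seen.add(neighbour)
                    stack.append(neighbour)
        best = max(best, size)
    return best
```

On a Facebook friendship network the edge list would come from exported friend-to-friend links; here any list of node pairs will do.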
I followed a little bit of MOOC MOOC, the MOOC on MOOCs that Hybrid Pedagogy ran in August. It was full of skeptics, and most of the people I talked with (including me) had never actually tried a MOOC. Some of the assignments were really interesting, like writing a collaborative 1000-word essay on “What is a MOOC? What does it do, and what does it not do?” with 50 other people in a single Google Doc. It was so annoying how people kept rewriting my wonderful insights.
I don’t know where MOOCs will take academia and our current system of universities. Obviously all these universities offering free courses on Coursera are going to want to earn money from them at some point. A likely source of income is allowing students to pay to take an exam after completing a course and get some form of accreditation. A greater worry, I suppose, is the possibility that once a course is canned, we might not really need the professors much longer. Or at least, we might not need as many professors or as many universities. I think at least as likely a scenario is that more and more people are going to want to learn more. I wouldn’t have signed up for a university course on Python, so for me the alternative to taking a Coursera class for free is either not learning Python or teaching myself from books and the internet. I imagine most of the other students are not taking these classes instead of going to university.
I’m damn sure that there is no MOOC that can replace what is going on in my classrooms this summer. Now, society can decide that what I’m offering isn’t worthwhile, or is too expensive, or can be offered to too few students, or may not even work as well as I hope. Maybe that’s the real danger of MOOCs: they offer something for free (to the students) that seems as good as what a good university education could be, or as good an education as members of our society need.
Obviously a MOOC provides far less of the facilitated classroom discussions that we have with our undergraduates in my program, and it’s nowhere near offering the kind of close mentoring of MA students that I just came from this morning, where I spent two hours discussing two students’ thesis projects with them. MOOCs can’t provide individual feedback on writing, and the massive peer-review feedback some courses do is not generally very enthusiastic or informed from what I’ve read (I haven’t tried these assignments yet). But there are lots of kinds of learning a MOOC can support. So let’s see what they are good for.
I do find myself imagining setting up a Coursera course of my own. That would be a lot of fun, given the time and support to do so. It would be really interesting to do a test case: set up one of our big undergraduate classes as a MOOC and hopefully recruit lots of students from everywhere, then run the same course with our local students, maybe without lectures but adding in tutorials and better feedback, and see what the end results were for both groups of students.
And next spring I’m learning guitar. Finally.
I’m reading fascinating – and scary – stories about how political campaigns today use microtargeting to send prospective voters exactly the right information and advertisements to sway them. For instance, a Slate article from back in February about “Narwhal”, the Democrats’ data integration project, describes how the Obama campaign could send a voter in a conservative state a strong message about the importance of subsidised contraceptives, because data collected about her suggested she would be very likely to support that.
This is the big shift in the use of technology in campaigns today – both in marketing and in politics. The Slate article summarises it thus:
From a technological perspective, the 2012 campaign will look to many voters much the same as 2008 did. There will not be a major innovation that seems to herald a new era in electioneering, like 1996’s debut of candidate Web pages or their use in fundraising four years later; like online organizing for campaign events in 2004 or the subsequent emergence of social media as a mass-communication tool in 2008. This year’s looming innovations in campaign mechanics will be imperceptible to the electorate, and the engineers at Obama’s Chicago headquarters racing to complete Narwhal in time for the fall election season may be at work at one of the most important. If successful, Narwhal would fuse the multiple identities of the engaged citizen—the online activist, the offline voter, the donor, the volunteer—into a single, unified political profile.
An article in The Atlantic spells out the potential creepiness even more: The Creepiness Factor: How Obama and Romney are Getting to Know You.