Today the students and I have played around with visualizations in Google Fusion and Manyeyes. Scott has exported data from the ELMCIP Electronic Literature Knowledge Base, so we’ve been making pie charts and timelines and maps and so forth.

Here are the files if you’d like to experiment:

In Google Fusion, you select a kind of visualization from the “Visualize” pulldown menu. This dataset doesn’t have location data so you can’t make maps, I’m afraid, because that information is tied to authors, not to individual works.

Here’s a timeline of works in the knowledge base tagged “hypertext”, which Scott put together in Google Fusion:
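If you’d rather build a timeline like that in code instead of clicking through Google Fusion, here’s a minimal sketch in Python using pandas and matplotlib. It assumes the knowledge base export is a CSV with “Year” and “Tags” columns (those column names are my guess, not necessarily what the actual export uses):

```python
# Minimal sketch: a timeline of "hypertext"-tagged works from a CSV export.
# The filename and the "Year"/"Tags" column names are assumptions -- adjust
# them to match whatever the actual export looks like.
import pandas as pd
import matplotlib.pyplot as plt

works = pd.read_csv("elmcip_works.csv")

# Keep only rows whose tag field mentions "hypertext" (case-insensitive).
hypertext = works[works["Tags"].str.contains("hypertext", case=False, na=False)]

# Count works per publication year and plot a simple bar-chart timeline.
per_year = hypertext.groupby("Year").size()
per_year.plot(kind="bar", figsize=(10, 4), title="Hypertext works per year")
plt.xlabel("Year")
plt.ylabel("Number of works")
plt.tight_layout()
plt.savefig("hypertext_timeline.png")
```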

And here’s a word cloud made from titles of works in the knowledge base, created in Manyeyes:

I hadn’t realised how many of the titles include statements about what genre the work belongs to: book, poem, novel, generator, statement, hypertext, project, letter. I suppose many titles in print literature do the same, come to think of it.

Obviously the dataset comes with biases. It doesn’t (and may never be able to) contain complete information about all electronic literature ever published, and there’s a lot of Young-Hae Chang in there because a student did a project on that group last semester. I actually removed the words “Samsung” and “Korean” from the word cloud because they dwarfed everything else. But it’s interesting to see how relatively easy it is to do simple visualisations from the data we do have, and to think about what kinds of visualisations we would like to do and how that might help our research.
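If you wanted to recreate something like that word cloud outside Manyeyes, here’s a minimal sketch using the Python wordcloud library. It assumes a CSV export with a “Title” column (again, my guess at a column name), and it filters out “Samsung” and “Korean” as extra stopwords, as mentioned above:

```python
# Minimal sketch: a word cloud of work titles, with a few words filtered out.
# Assumes a CSV export with a "Title" column -- a guess, not the actual schema.
import pandas as pd
from wordcloud import WordCloud, STOPWORDS

works = pd.read_csv("elmcip_works.csv")
titles = " ".join(works["Title"].dropna())

# Common English stopwords plus terms that would otherwise dwarf everything else.
stopwords = set(STOPWORDS) | {"Samsung", "samsung", "Korean", "korean"}

cloud = WordCloud(width=800, height=400, background_color="white",
                  stopwords=stopwords).generate(titles)
cloud.to_file("title_cloud.png")
```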

In class, we also made a simple map of where the students in this class come from – all over Norway, as you can see, and some other interesting places too. This is very easy to do: you just open a new table, add some place names or addresses to the “location” column and hit “visualize”. We mostly followed this tutorial. You could also upload your own address book, I assume, or other tables with address or location information.
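You could do something similar in code rather than in Google Fusion. Here’s a minimal sketch that uses geopy to geocode place names and folium to put markers on a map; the place names in the list are just illustrative examples, not the actual class list:

```python
# Minimal sketch: geocode a list of place names and drop them on a map,
# roughly what Google Fusion does when you hit "visualize" on a location column.
# The place names below are examples only.
import time
import folium
from geopy.geocoders import Nominatim

places = ["Bergen, Norway", "Oslo, Norway", "Tromsø, Norway"]

geolocator = Nominatim(user_agent="class-map-example")
m = folium.Map(location=[65.0, 13.0], zoom_start=4)  # centred roughly on Norway

for place in places:
    location = geolocator.geocode(place)
    if location:
        folium.Marker([location.latitude, location.longitude], popup=place).add_to(m)
    time.sleep(1)  # be polite to the free Nominatim geocoding service

m.save("students_map.html")
```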

In a way this is just dabbling, and obviously we should all learn a lot more about data visualization to use it properly. But I’m a big believer in having a go at things even before you’re quite sure how to do them – and I think having students who at least have some idea of how they could go about visualizing data is an excellent start. I think it’s also easier to think critically about the biases and sources of the data when we’ve played with the tools ourselves. Maybe some of them will come up with ways of using visualizations usefully in their bachelor theses, or next semester in another paper, or eventually in their MA theses.

 

