I’ve been contributing to the conference wiki and bookmarking stuff in del.icio.us rather than blogging. These activities work differently from blogging, though. Right now I need a blog post: Fox Harrell is currently demonstrating his poetry generation system, GRIOT. I tend to like the idea of poetry generation more than its products, but GRIOT generates its poetry line by line in dialogue with a human: the human types a word (“europe”), the generator replies, then waits for the human to type another word. When the human is satisfied, she types “end”. The results are rather appealing, and I’d love the opportunity to play with poetry in that way. Unfortunately I don’t think GRIOT is online yet.
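The turn-taking described above can be sketched as a simple loop. This is purely illustrative: GRIOT is not publicly available, so `generate_line` here is a hypothetical stand-in, not the real generator.

```python
# Hypothetical sketch of a GRIOT-style session: the user supplies one
# prompt word per turn, the system answers with a line, and typing
# "end" closes the poem. generate_line is a stand-in, not GRIOT itself.

def generate_line(prompt_word: str) -> str:
    """Stand-in generator: returns a templated line for the prompt word."""
    return f"a line conjured from '{prompt_word}'"

def poetry_session(words: list[str]) -> list[str]:
    """Alternate user prompts with generated lines until 'end' is typed."""
    poem = []
    for word in words:
        if word == "end":
            break
        poem.append(generate_line(word))
    return poem
```

For example, `poetry_session(["europe", "sea", "end"])` would yield two generated lines, one per prompt, stopping at “end”.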



1 Comment

  1. Fox Harrell

    Dear Jill,

    I really enjoyed your talk and the probing, contemplative style of it. I was intrigued by your contention at the end that these various new forms of self representation are often motivated in reaction against commercial culture and commodified ideas of self and beauty. I hope that is the case!

    I am glad that you found the output of the “Girl with Skin of Haints and Seraphs” pleasing. I hope you also enjoyed briefly interacting with the same system. I found this post of yours tracing back some of the links from the updated DAC page and looking for photos from the conference (I didn’t take any myself). I had a few responses to it:

    (1) The poems output from the system have narrative structures. They don’t just end when the user types “end.” Some are shorter and some are longer, but they all have openings, various types of clauses, and endings.

    (2) What you describe is not exactly output of GRIOT itself. GRIOT is the general platform I made to implement and run a particular poetic system like “The Girl with Skin of Haints and Seraphs” or “The Griot Sings Haibun.” These systems have totally different narrative (discourse) structures and different content and themes. So when you interact, it is really with a “poetic system” or “polypoem” (a polymorphic poem that is different each time it is run). The idea is that the general framework of GRIOT should be useful for other forms of narrative multimedia artwork, not just poetry. Poetry was my first experiment (for a variety of reasons), so it is fair to judge this work in the context of poetry generation, though.

    (3) The claims being made, and my goals, are not really that the computer generates the poetry. I want to help enable a human artist to write text, stories, or multimedia that is meaningfully different on each reading, based on interaction with that reader. So I am interested in improvisation in some forms that are not usually thought of as improvisational (in this case, written text). I often find that the ideas I want to express deal with shifting metaphors, narrative imagining, and blending and reblending concepts, so this type of medium suits my own artwork.

    Thanks so much for your interest, and I hope to meet again!
    Fox

    p.s. If you are interested in some of my motivations (including thoughts about identity), please look here:
    http://www.ctheory.net/articles.aspx?id=489
