Ran out after Kate Hayles and dashed into another panel on artificial intelligence and chatbots, which is turning out to be really good.

Keep it Simple (and female?) A reaction to the masculine robot, or HAL from 2001, who came to destroy our world – it's easier to relate to a non-threatening, young, administrative-assistant-style female chatbot.

Tim Menzies:
Computers are already smarter than us, but using methods so alien that we'll never catch up with them, and they may already be beyond our understanding. (Planes don't fly the same way birds do, but they work, in some ways, better – certainly differently.)
Limits to AI (these are the three objections to AI everyone has to point out):
– Gödel's incompleteness theorem
– Suchman/Clancey situated action/cognition (what people SAY they do isn't necessarily what they really do, so you can't design robots from those descriptions)
– Cook’s NP-completeness

E.g. John Koza, Martin Keane and Matthew Streeter – a program that can take any human-patented circuit design and evolve a NEW circuit that will do the same thing, differently (a toy sketch of this kind of evolutionary search follows below). They have thousands of examples.
AI can reason about larger theories than humans can. Menzies et al. 1985 – a simple AI system outperformed the human who'd encoded it. Why? Short-term memory just isn't big enough in humans.
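
To make that kind of evolutionary search concrete, here is a minimal toy sketch in Python. It is a hypothetical illustration only – a tiny genetic-algorithm loop evolving a three-gate logic "circuit" until it matches a target truth table – not Koza, Keane and Streeter's actual system, and every identifier and parameter in it (GATES, TARGET, the population size, and so on) is made up for the example.

```python
import random

# Toy illustration (not Koza's actual system): evolutionary search for a
# two-input logic "circuit", encoded as a choice of gate at each of three
# slots, that reproduces a target truth table (here, XOR).

GATES = {
    "AND":  lambda a, b: a and b,
    "OR":   lambda a, b: a or b,
    "NAND": lambda a, b: not (a and b),
    "NOR":  lambda a, b: not (a or b),
}
INPUTS = [(a, b) for a in (0, 1) for b in (0, 1)]
TARGET = [a ^ b for a, b in INPUTS]          # XOR truth table

def evaluate(genome, a, b):
    """Genome = (g1, g2, g3): output = g3(g1(a, b), g2(a, b))."""
    g1, g2, g3 = genome
    return int(GATES[g3](GATES[g1](a, b), GATES[g2](a, b)))

def fitness(genome):
    # Number of truth-table rows the candidate circuit gets right.
    return sum(evaluate(genome, a, b) == t for (a, b), t in zip(INPUTS, TARGET))

def mutate(genome):
    # Swap one randomly chosen gate for another random gate.
    g = list(genome)
    g[random.randrange(3)] = random.choice(list(GATES))
    return tuple(g)

population = [tuple(random.choice(list(GATES)) for _ in range(3)) for _ in range(20)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    best = population[0]
    if fitness(best) == len(TARGET):
        print(f"Generation {generation}: {best} reproduces XOR")
        break
    # Keep the fittest half, refill with mutated copies of survivors.
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]
```

The point is only the shape of the search: generate variants, keep whichever score better against the target behaviour, repeat. Nothing in the loop requires the program to explain why the winning design works – which is exactly the opacity point that follows.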

Being able to do something doesn’t mean you can explain how you do it or what you’ve done.
AI systems often work in ways where it's really NOT obvious how they're arriving at an answer. They're opaque, alien to us.

Is this worrying? No. We can share – we bring the power, they'll help us out now and again, and maybe sometimes we'll understand what they're trying to tell us. Worrying about AI is like being the spoilt only child worried about the birth of a sibling.

Helpful, benign AI agents eager to share all they know with you. Yet they can't. Sometimes AI tries to talk to us, and we just say "–wha?"

Machines reading and listening
Surveillance cameras aren't really working. E.g. the delivery man in NY who was trapped in a lift for three days despite security cameras and people searching for him – either he was in the camera's blind spot (!?) or the security people were just not looking at the cameras. Other examples too of missing cameras.

Machines reading.



2 thoughts on “panel about AI and narrative”

  1. Claus

    > Computers are already smarter than us

    What does “smarter” actually mean? What does “intelligence” mean? Machines (or computer programs) are not (yet) able to *think* at all. “Thinking” would mean that they are aware of themselves – which is not the case right now (2005). Programs like the classic LIZA are merely attempts to emulate human reaction. Until advanced hardware has been developed with the ability of self-organisation, I would not dare to use the term “intelligence” (or “A.I.” for that matter) at all in relation to machines.

  2. Claus

    P.S.:

    > LIZA

    I think that the name was actually ELIZA (presumably after “My Fair Lady”). This program was written by Joseph Weizenbaum, and its task was to “interact” with a human by “answering” questions with one out of a set of predefined sentences, thus giving the impression that a human being was “on the other side” instead of a machine.
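
For readers who have never seen the program Claus describes, here is a minimal toy sketch of that keyword-and-canned-response approach – a hypothetical Python illustration with made-up patterns and replies, not Weizenbaum's actual code.

```python
import random
import re

# Toy ELIZA-style responder (not Weizenbaum's actual program): match the
# user's input against keyword patterns and reply with a canned sentence,
# echoing back part of what was said where a pattern captures it.
RULES = [
    (r"\bI need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"\bI am (.*)",   ["How long have you been {0}?", "Why do you say you are {0}?"]),
    (r"\bbecause\b",   ["Is that the real reason?", "What other reasons come to mind?"]),
]
DEFAULTS = ["Please tell me more.", "I see. Go on.", "How does that make you feel?"]

def respond(text):
    for pattern, replies in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            return random.choice(replies).format(*match.groups())
    return random.choice(DEFAULTS)

print(respond("I am worried about intelligent machines"))
# e.g. "Why do you say you are worried about intelligent machines?"
```

The whole trick is pattern matching plus echoing fragments of the user's own words back, which is why it can feel surprisingly human without any understanding behind it.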
