Whenever I give talks about ChatGPT and LLMs, whether to ninth graders, businesses or journalists, I meet people who are hungry for information, who really want to understand this new technology. I’ve interpreted this as interest and a need to understand – but yesterday, Eirik Solheim said that every time he gives a talk on ChatGPT he meets audiences who are afraid.

I was surprised to hear that. Have I misinterpreted my audiences? They certainly pay more attention to talks about ChatGPT than they do to talks about many other subjects. The ninth graders who visited UiB a couple of weeks ago were EXTREMELY attentive to the short talk I gave them about how LLMs work, AI bias, and how My AI in Snapchat works. There wasn’t a single whisper or yawn. It actually never occurred to me that they might be scared rather than fascinated, absorbed, eager to learn.

What is your impression? Are people paying attention to AI because they are scared? Or is it amazement?

I’ll have to do an anonymous survey at my next talk, a Kahoot or Mentimeter or something – I tried asking the audience yesterday after Eirik’s point, but of course nobody put their hand up in answer to “Are you scared?”

The photos below are from not just one, but two talks I did yesterday on ChatGPT: at a breakfast meeting for the Bergen Chamber of Commerce and at a lunch event for journalists and students at Media City Bergen, where I was on a panel with Eirik Solheim and Chris Ronald Hermansen, led by Lasse Lambrechts.



1 Comment

  1. Tin

    Am I frightened? That’s maybe overstating the case.
    Am I concerned, sometimes very deeply? Yes, definitely, and this explains just one of the reasons: https://reclaimthefacts.com/en/2023/04/07/exploring-the-risks-of-ai-in-spreading-misinformation-about-climate-change/

    But a huge concern is actually also that these AI companies are benefitting monetarily from unpaid work that hundreds of thousands of people have put into their websites, community projects, stories and all of the other things AI has scraped. Many of these websites, projects and stories are labours of love, often written and maintained by women, who are anyway already at the losing end of the wage gap.

    As far as I’m concerned, this is just another case of Silicon Valley tech bros getting rich on the unpaid work of others, including women and minorities.

