Via Espen Andersen, I found this report about a recent study that seems to confirm some of the arguments I made in my talk at Fleksibel læring a month or so ago, where I argued that young university students are far less digitally literate than we assume:

A new study overturns the common assumption that the 'Google Generation' – youngsters born or brought up in the Internet age – is the most web-literate. The first ever virtual longitudinal study carried out by the CIBER research team at University College London claims that, although young people demonstrate an apparent ease and familiarity with computers, they rely heavily on search engines, view rather than read and do not possess the critical and analytical skills to assess the information that they find on the web.

There’s also a Slashdot discussion about the article.

Espen’s daughter’s homework, at least on some days, apparently consists largely of searching for answers on Google. We teachers certainly face a real challenge in figuring out how to help students not only learn to find information but also develop the critical and analytical skills that do not come automatically.



3 thoughts on “google generation lacks critical and analytical skills needed to assess information”

  1. Christy

    Oh yeah, part of the problem (in Australia at least) is that secondary schools and universities often have policies where they don’t let students refer to the web in their essays. So rather than teaching students what is good to refer to and why, a lot of educators are giving them no guidance at all. Part of the problem here is that many of the educators don’t know how to discern content on the web either!

    The same issue has been raised with ‘alternate reality games’. ARG players are skilled at being able to discern a fake site from a real one…that is one of the reasons why many educators are looking to create mini ARGs to teach these skills. But in large scale ARGs, the designers have found that if it is directed to the general public first, they think the game is a hoax. But if directed to the ARGers first, they know it isn’t a hoax. They then create content that describes it as a game and it ripples out to the general public who receive it already digested for them.

    Just some thoughts to add as this topic is an issue in my work too! Have fun in Chicago and say hi to Scott. Love the baby widget by the way. 🙂

  2. Espen Andersen

    Jill,
    thanks for picking this up – we really need to get in touch the next time I am in Bergen.

    As for Google and homework, I am more concerned with the lack of context than with where the text comes from…

