I signed up for Grammarly a few weeks ago to find out how its citation finder tool works (spoiler: it’s not great – very pushy, and it often produces irrelevant citations) and was surprised to see it generating expert reviews with real people’s names. Yesterday Grammarly discontinued this feature after a class action lawsuit was filed against the company by Julia Angwin, a journalist who has written extensively on privacy and technology for many years, and one of the people being impersonated. Other people I’ve seen pop up include Helen Sword, Kate Crawford, TreaAndrea M. Russworm, Samantha Blackmon, Adrienne Shaw, Scott Rettberg, Noah Wardrip-Fruin, N. Katherine Hayles and more. I couldn’t provoke it to generate a fake me 😉 After the Wired article, discussions about this are all over my Bluesky feed, but there aren’t many examples of how this looks – I guess most academics I follow aren’t using Grammarly (phew). So here are some examples.

Here’s an example from when I showed Kishonna Gray how it worked when she was visiting last week. First we asked Grammarly to generate a 1000-word essay on Black game studies, then we asked for the Expert Review. Unsurprisingly, Kishonna shows up, as shown above. Here’s another example from a text generated about electronic literature.

Screenshot of Grammarly showing text in the middle and expert names on the left side.
After generating a text about electronic literature and The Unknown, Grammarly generated feedback presented as though from real scholars. The screenshot was taken on 4 March 2026.

As you can see, each expert gives pretty bland, run-of-the-mill advice. We clicked on “Show example” for fake-Kishonna’s advice and got an explanation of why they suggested it. The advice isn’t bad – be more precise about the time – but it’s not well tied to the supposed reason for giving it: that Kishonna’s work is situated, for instance in discussing how Black folks’ experiences as gamers relate to what’s happening at the time, like Black Lives Matter or Gamergate. Situated writing is of course something that LLMs are extremely bad at, and just adding “In the early 2000s….” doesn’t really help with that.

Screenshot taken 4 March 2026.

If you click “Show example” it suggests exactly how to rewrite your text and you can click to insert the revision. This is frictionless not-writing for students and academics. It would honestly be difficult to use this tool and not just accept what it suggests.

This is just one of many AI tools Grammarly now provides. Here is the full menu:

A list of menu options with icons from Grammarly: AI chat, proofreader, paraphraser, expert review, reader reactions, humanizer, citation finder, fact checker, AI detector, AI rewriter, plagiarism checker, AI grader
Grammarly’s menu options as of 28 February 2026.

The irrelevant citations it suggests are particularly annoying to me as a peer reviewer and reader of academic articles that are often clearly at least partly AI-generated. Scholarslop, as David Berry has called it.

Here is an example of how Grammarly suggests sources to cite. You select the “Citation finder” and it identifies statements in your text that it predicts need a citation. It provides a few possible choices and offers a big green button labelled “Insert in-text citation”. Here are two screenshots taken on 18 February 2026 showing how it does this – the second is for the same statement but I clicked on one of the other suggested sources to cite.

Grammarly identifies statements in your text that it predicts need a citation, and suggests four citations you can choose between. These are often not actually relevant citations, as in this example.

If you actually read the statement Grammarly says needs a citation and compare it to the suggested sources, you’ll see they don’t match. Both sources are irrelevant. The claim is that promotional writing that is not intended to be primarily factual makes up a large part of the training data of LLMs. The sources offered are about how much LLMs are used in scientific writing, and about how many people have used LLMs. But it takes a lot more cognitive effort to read this and decide that no, they’re not good sources, than it does to just trust Grammarly, click that big green button and move on.

I’ve got a draft article in review that addresses some of this more extensively. For now, if you have students who use Grammarly, I highly recommend signing up and checking it out so you know how to talk with them about it. You may also be interested in checking how Grammarly grades a paper as if it were a specific professor (like you?) if you type in the instructor’s name, the class code and the university, and ideally upload the grading matrix.

Grammarly is a lot more than a tool to help non-native speakers check their spelling and grammar. It has become a full-scale plagiarism machine. I read some comments on Bluesky from NLP scholars asking what happened – apparently Grammarly was founded in Ukraine and was an NLP darling whose developers attended all the NLP conferences. But then, as summarised on Wikipedia, they bought other companies (a document editor, an AI-enabled email tool), got a lot more funding and, starting in October 2025, rolled out a lot of AI tools. Grammarly now seems more focused on enabling click-button AI plagiarism than on helping people become better writers.

You get a free week’s subscription, but you have to give them your credit card. Maybe not a great idea. When I cancelled after six days, they gave me an extra free week. So if you’re disciplined enough to remember to cancel in time, you could try it for yourself.


