[Screenshot of the demo page in Wikipedia, with words colour-coded according to the reputation of their authors.]

Look at that: the words in white were written by Wikipedia authors whose contributions to the encyclopedia tend to be “preserved, or built-upon”, while the words in orange were written by authors whose contributions tend to be deleted, reverted or edited. Does that help you read the encyclopedia?

The screenshot comes from the UCSC Wiki Lab’s Wikipedia Trust Coloring Demo, which I saw linked from Steve Rubel’s blog. Basically, they downloaded a dump of the Wikipedia in February and have been analysing the data, aiming to come up with a precise “reputation” figure for each contributor based on the extent to which his or her contributions are accepted or revised. The demo only has a selection of pages, and they’re not editable, but it certainly provides an interesting demonstration of how such a system could work.
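The researchers’ exact formula isn’t given here, but the core idea — reputation as the survival rate of an author’s contributions across later revisions — can be sketched roughly like this (a toy illustration of the general idea, not the UCSC algorithm; the function name, data shapes and scoring rule are all my own assumptions):

```python
# Toy sketch of content-survival "reputation": for each author, compare the
# words they added against the article's current text, and score the fraction
# that survived. NOT the UCSC algorithm -- just the general idea.

def survival_reputation(edits, final_text):
    """edits: list of (author, words_added) pairs, in revision order.
    final_text: the article's current text.
    Returns {author: fraction of their added words still present}."""
    final_words = set(final_text.split())
    added, surviving = {}, {}
    for author, words in edits:
        for w in words:
            added[author] = added.get(author, 0) + 1
            if w in final_words:
                surviving[author] = surviving.get(author, 0) + 1
    return {a: surviving.get(a, 0) / added[a] for a in added}

edits = [
    ("alice", ["the", "encyclopedia", "anyone", "can", "edit"]),
    ("bob",   ["blatant", "vandalism", "here"]),
]
reputation = survival_reputation(edits, "the encyclopedia anyone can edit")
# alice's words all survive; bob's were reverted, so his score is 0
```

Authors whose words keep surviving would be rendered in white; authors whose words keep getting reverted, in orange.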

Browsing through the demo, I’m not sure that this would really help me read the Wikipedia. I think I might find it more useful to see the level of dissent over a particular phrase or sentence in an article. For instance, I’d like to see recent additions, or phrases that have been haggled over and changed back and forth, marked in red. Sentences that have stayed the same — or perhaps even better, been honed over time with a series of small changes that only affect grammar or ordering — would seem more trustworthy, so perhaps they should be green, or white, if we follow the colours used in this demo.
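That “level of dissent” could be approximated by counting how often each sentence has changed across an article’s revision history — stable sentences in white, churning ones in red. A rough sketch of my own toy heuristic (not anything the demo actually implements; the thresholds are arbitrary):

```python
# Toy "dissent" colouring: count how many revision-to-revision steps each
# sentence changed in, then map that churn to a colour. A rough heuristic
# of the idea in the post above, not the demo's actual method.

def dissent_colours(revisions):
    """revisions: list of revisions, each a list of sentences.
    Returns {sentence: colour} for sentences in the latest revision."""
    churn = {}
    for prev, curr in zip(revisions, revisions[1:]):
        prev_set = set(prev)
        for s in curr:
            if s not in prev_set:          # sentence is new or was rewritten
                churn[s] = churn.get(s, 0) + 1
    def colour(n):
        return "white" if n == 0 else "green" if n <= 2 else "red"
    return {s: colour(churn.get(s, 0)) for s in revisions[-1]}

revisions = [
    ["Oslo is the capital of Norway."],
    ["Oslo is the capital of Norway.", "It has 500,000 inhabitants."],
    ["Oslo is the capital of Norway.", "It has about 560,000 inhabitants."],
]
colours = dissent_colours(revisions)
# the untouched first sentence stays white; the recently rewritten one is green
```

A real version would need sentence alignment across revisions (small copy-edits shouldn’t count as full rewrites), but the principle is the same.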

This demo would be excellent to show students to trigger a discussion of how trustworthy the Wikipedia is, and what trustworthiness might mean.

2 thoughts on “see levels of trust in a wikipedia article”

  1. Don’t you trust me?…

From Jill Walker’s blog I came across a really interesting project which is trying to visualise “trust” in Wikipedia. Basically, they scraped some of Wikipedia and ran a program which coloured in authors’ contributions based on how long their edits…

  2. […] Aug 8th, 2007 by Miriam Jones The Wikipedia Trust Coloring Demo: software that colour-codes text in Wikipedia entries according to the perceived reliability of the editors (from jill/txt, who suggests it might be useful in the classroom). Jimbo Wales signs on. […]

