If you complain to US Airways about being on the Selectee list, the form letter you receive in return (PDF available from EPIC) includes the following comfort:

CAPPS is a government administered computer application that operates in the background of our reservations computer system. This application scans the data in each reservation record and prompts for additional security measures when specific data are present. There is no human intervention in the CAPPS selection process and US Airways personnel cannot modify a reservation to prevent a search or cause a passenger to be selected. An “S” is printed on the boarding pass of each passenger who is a CAPPS selection so that our gate personnel and security agents know which customers require additional screening.
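
Strip away the bureaucratic prose and what the letter describes is a plain rule scan over each reservation record. Here is a minimal sketch of that kind of logic in Python (the field names and rules are invented for illustration; the real CAPPS criteria are secret):

    def capps_select(reservation):
        # Flag a passenger when "specific data are present" in the record.
        # These rules are hypothetical stand-ins, not the actual criteria.
        rules = [
            lambda r: r.get("payment_method") == "cash",
            lambda r: r.get("one_way", False),
            lambda r: r.get("days_booked_ahead", 99) < 2,
        ]
        return any(rule(reservation) for rule in rules)

    # "No human intervention": the S is stamped mechanically.
    code = "S" if capps_select({"payment_method": "cash"}) else ""

Deterministic, faceless, and exactly as “objective” as whoever wrote the rules.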

You can read a thick wad of customer complaints and form-letter responses in this PDF (from EPIC’s problems page).

Compare this to Google’s bragging about the objectivity of Google News:

Google News is highly unusual in that it offers a news service compiled solely by computer algorithms without human intervention. Google employs no editors, managing editors, or executive editors. While the sources of the news vary in perspective and editorial approach, their selection for inclusion is done without regard to political viewpoint or ideology.

Or, indeed, look at ValueCents, a program that interprets financial data and, according to the CEO, “has two unique benefits. First, the analysis is objective. There is no human intervention; it is being generated by artificial intelligence.”

Do most people really trust computers more than people? Is it because they have no faces? How on earth can we think an algorithm is objective!?

(Related: my posts on links and power from 2002)

8 thoughts on “no human intervention”

  1. vika

    What baffles me in this is that it’s the humans who write the algorithms, who program the computers. We tend to conveniently forget that when speaking of machine “objectivity.”

  2. Håkon Styri

    I guess it’s supposed to be interpreted as: “In this process, there are no humans that can be bribed or talked into changing the decision.”

    It completely ignores the fact that computers may fail and computer programs usually have errors, making the interpretation: “In this process, nobody will act on or correct any error. Have a nice day, loser.”

    What vika writes is also quite important. In some cases additional weirdness is introduced by using machine learning, genetic programming or neural networks, which can make it very hard to inspect the code and validate the correctness of the algorithms.

  3. Matt

    Well there’s objective as in absolute truth, and objective as in “nobody is able to influence the running of the process once it is started”. Perhaps “objective” isn’t the word, but systems analysts aren’t generally philosophers. Invariability might be a better word to describe the Quality (ref Parasuraman – service, Garvin – hardware) of this “Service”.

    Anyway, I’m surprised at you all. Of course we don’t “trust computers”. But we may choose to trust a computationally based System in which the human actions have been suppressed, either for reasons of regulating the quality or for cost/time constraints. (Would blogging have gotten off the ground if every post/comment was handled by a brain?) Equally we may feel violated when said system gets it wrong. Parasuraman would say that the service is then un-Responsive, ironically because the human interventions have been removed.

  4. Mark M. Hancock

    The airline (and the government by association) is hiding behind the computer. They say there is “no way to change it,” which is horse hockey. It is Orwellian at its core.
    Google’s news service, on the other hand, changes every few seconds. They use quite a few sources, but stick to major media outlets (and let those newsrooms take the heat for errors). It is, however, edited by the programmers who choose which sites will and will not qualify as legitimate news sources.
    This programmer is the gatekeeper and performs the same function as CAPPS: s/he identifies problem sources and eliminates them from the common discussion.
    The Mirror was at the centre of the British army photo scandal, but it isn’t considered legitimate by Google’s programmers, so it must be found on the Web instead of inside the Google News area. At least they allow it in the Web searches.
    Tell the programmer.

  5. nick

    Google News is my standard example of why understanding some aspects of computer science, information retrieval, and machine learning is essential if humanists want to critique how, for instance, the news is created.

    But without knowing about that, it’s easy to rhetorically distinguish the statements from Google News and from US Airways. Google News is saying “look ma, no editors!” – the company is bragging that they have put together a new Web site that selects, categorizes, and lays out the news without minute-to-minute or day-to-day human intervention. The claim that this system is “without … political viewpoint or ideology” deserves extended critique, but their basic point is something like “we’re so great because of what we can do with technology.”

    US Airways is saying almost the opposite – “don’t blame us, the computer did it,” a classic excuse dating from the late 1950s, or from much earlier if you allow company policy and inflexible law and such to stand in for “the computer.”

    How Google News is ideological is probably worth a chapter or a book, but it’s not just because humans wrote the algorithms, and not just because humans do the selection of news sources. It’s also because they write the news that is crawled to begin with, among many other things.

    Making a random decision is not without ideology, but there are pretty sound game-theoretic reasons for doing that. CAPPS tries to be like a tennis player who serves unpredictably either to the left or right; the tennis player who is predictable is easier to defeat. (The idea is one cited by William Burroughs in his discussion of game theory as he describes the cut-up method.) While it isn’t without ideology, it seems to me to be close to the ideology of non-discrimination and equality. If searches were biased somehow toward one group, could that be more equitable?
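
    In code, the mixed strategy is just a weighted coin flip. A sketch, assuming a flat selection rate (the 5% figure is invented, not the actual one):

        import random

        def select_for_screening(_passenger, rate=0.05):
            # The passenger's attributes are deliberately ignored: flag a fixed
            # fraction uniformly at random, so no observable trait predicts who
            # gets searched. Like the tennis serve, unpredictability is the point.
            return random.random() < rate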

  6. Jill

    Oh, I agree with you Nick, about the differences between Google and US Airways – and using the computer as an excuse is indeed an old line.

    I don’t think CAPPS is meant to be random though. Not at all. It’s profiling, selecting certain passengers that it assumes are more likely to be a threat based on information gathered about them. They do also select random passengers so as not to discriminate against certain groups.

    Turns out my mum’s also been S’ed. But not consistently. So I’m hoping (indeed, almost assuming now I’m less sleep-deprived) that I was just randomly selected as an S once and not put on a list that’ll get me S’ed every time I travel in the US.

    Either that or Australian-Norwegians are just deemed to be a risky kind of individual.

  7. […] What is Google News? What’s up with this “no human intervention=objectivity” thing?) […]

  8. Sindre

    The only time a computer can come close to being truly objective is when it’s monitoring a single true or false statement. A switch operated by humans is either on or off; the computer monitors the state and reports true or false. Even this holds only as long as the state of the switch does not change. (Maybe a bit off topic.)

    Any more complex evaluation of data made by a computer can be influenced first by those who define what the parameters are and what values they take, second by those who program it, and third by those collecting the data to be assessed. To claim that any process, especially one as complex as Google News or CAPPS, is free from human intervention or human error is to be blatantly ignorant of how computers and humans interact. Not to mention a misplaced trust in computers.
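
    A toy example of the first kind of influence: the same data yields opposite verdicts depending on a human-chosen parameter (all the numbers here are made up):

        def assess(transactions, flag_threshold):
            # The "objective" verdict hinges entirely on a human-chosen threshold.
            large = sum(1 for amount in transactions if amount > 10000)
            return "flagged" if large >= flag_threshold else "clear"

        data = [12000, 500, 15000]
        print(assess(data, flag_threshold=2))  # flagged
        print(assess(data, flag_threshold=3))  # clear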

    A quote that seems fitting: “The real problem is not whether machines think but whether men do.” (B. F. Skinner)
