Good heavens: Automated Essay Grading exists. Admittedly they’re only recommending it as a tool students can use during revision, before submitting the paper, but if you read the full paper (PDF) you’ll see that computers have many advantages over human assessors:
Well, students do want to believe that we’re objective, infallible graders. At least, the students I had doing peer assessment last semester said they did. Perhaps students would prefer computer assessors?
You can try out the system if you like. Unfortunately the sample topics don’t include “write a 600-1000 word review of a weblog” which is the next assignment I’ll be grading.
I came across this deep in a post on hacking literature by Maciej Ceglowski over at Idlewords. Maciej suggests that if this technology exists, pretty soon students plagiarising essays by copying them off the internet will be supplanted by students automatically generating essays. Won’t that be fun!?
steve
The examples provided don’t seem to offer more than a summary of the given topic, and according to the product description the software “analyzes the body of text from which people learn to derive an understanding of essays on that topic.” I wonder how it handles an essay making an argument, especially an argument with connections to unexpected sources (i.e., those not in the “body of text”)?
If I had time to write an essay about the Great Depression this morning, I suppose I could find out.
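In the meantime, here’s a rough guess at the mechanics. Tools in this family are usually described as building on latent semantic analysis: score an essay by its similarity to the reference “body of text” in a reduced vector space. A minimal sketch of that idea, with invented toy texts and a bag-of-words model (nothing here is the vendor’s actual method):

```python
# Minimal latent-semantic-analysis sketch: score an essay by similarity
# to reference texts on the topic. All texts are toy stand-ins.
from collections import Counter

import numpy as np

reference_texts = [  # a hypothetical "body of text" for the topic
    "the great depression began with the stock market crash of 1929",
    "unemployment rose sharply and banks failed across the country",
    "the new deal created programs for relief and recovery",
]
essay = "banks failed and unemployment rose after the 1929 crash"

# Term-document count matrix over the reference texts.
vocab = sorted({w for t in reference_texts for w in t.split()})
index = {w: i for i, w in enumerate(vocab)}

def vectorize(text):
    v = np.zeros(len(vocab))
    for w, c in Counter(text.split()).items():
        if w in index:  # words outside the reference corpus vanish
            v[index[w]] = c
    return v

A = np.column_stack([vectorize(t) for t in reference_texts])

# SVD yields the reduced "semantic" space; at realistic corpus sizes this
# decomposition is the computationally expensive step.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
Uk = U[:, :2]  # keep the top two dimensions

def score(text):
    """Cosine similarity between the essay and the topic centroid."""
    e = Uk.T @ vectorize(text)
    centroid = (Uk.T @ A).mean(axis=1)
    denom = np.linalg.norm(e) * np.linalg.norm(centroid)
    return float(e @ centroid / denom) if denom else 0.0

print(f"similarity score: {score(essay):.2f}")
```

Notice that any word absent from the reference texts simply drops out of the score, which is my worry above in miniature: an argument drawing on unexpected sources earns no credit for them.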
Ben the Geographer
Here in North America, and particularly at McGill University, Turnitin.com has been a big issue. McGill used the service last term: students were forced to submit their work so it could be compared against the papers in the turnitin.com database before it was handed in. After considerable student complaint, McGill decided to drop the service. Student representatives to the McGill Senate recommended that professors make paper topics more specific to decrease plagiarism. I’m fairly certain that computer grading would go over even more poorly.
Francois Lachance
Jill,
One underutilized capability of CMC technology is the remote connection: I mean a session where a pupil can witness a writer at work. The contents of the writer’s terminal are echoed on the pupil’s terminal, and control can be passed from one to the other for tutoring purposes. See the Unix “talk” and “write” commands. In the HTTP world, WWW chat provides a similar environment for composition tutoring, but so far the implementations I have seen lack the character-by-character transmission (and deletion) that talk offers.
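To show concretely what character-by-character transmission buys you over line-buffered chat, here is a minimal sketch in Python; the host, port, and two-role arrangement are invented for illustration, and real talk(1) does considerably more:

```python
# Toy talk(1)-style session: the writer's keystrokes reach the pupil one
# character at a time, deletions included. Unix-only (termios); start the
# pupil first, then the writer.
import socket
import sys
import termios
import tty

HOST, PORT = "127.0.0.1", 9999  # hypothetical address for the demo

def pupil():
    """Listen and print each character the moment it arrives."""
    with socket.create_server((HOST, PORT)) as srv:
        conn, _ = srv.accept()
        while ch := conn.recv(1):
            sys.stdout.write(ch.decode(errors="replace"))
            sys.stdout.flush()

def writer():
    """Send every keystroke immediately instead of line by line."""
    sock = socket.create_connection((HOST, PORT))
    fd = sys.stdin.fileno()
    old = termios.tcgetattr(fd)
    tty.setcbreak(fd)  # disable line buffering, as talk does
    try:
        while (ch := sys.stdin.read(1)) != "\x04":  # Ctrl-D to quit
            sock.sendall(ch.encode())
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, old)

if __name__ == "__main__":
    pupil() if sys.argv[1:] == ["pupil"] else writer()
```

Line-buffered web chat transmits only when Enter is pressed; here the pupil watches the composition unfold keystroke by keystroke (backspaces arrive as raw control characters), which is the pedagogical point.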
B. Rickman
I’ve done some work related to what Knowledge Analysis Technologies is doing, and the process is very computationally intensive. You need supercomputers to make it work, and I’m not sure that schools are going to buy something like this simply to grade student essays. By the time you’ve fed the system enough samples you could be finished grading.
Mike
Charles Moran and Anne Herrington at the University of Massachusetts have been doing some excellent critiques of computer assessment of student writing. Their article in the March 2001 issue of College English, “What Happens When Machines Read Our Students’ Writing?” (abstract only; the full version is available to academic libraries via JSTOR, and I’d be happy to pass along a PDF version or their e-mail addresses), gives an excellent historical perspective and then offers a witty and insightful analysis of their interactions with various computer grading systems, including that of Knowledge Analysis Technologies. Anne and Charlie also presented their more recent findings concerning the electronic assessment of writing in San Antonio on Thursday, at a 4Cs panel presentation. How’s that for synchronicity?
Mike
I should add that Charlie’s suspicions regarding computer assessment of student writing sound very similar to Ceglowski’s: Charlie opines that should automated assessment become common enough, students — rather than learning to write — will start to attempt to learn the proper statistical combinations of polysyllabic words and syntactic structures that will allow them to game the grading machine, sort of like the way people try to game Google now. One gripe about Ceglowski’s article: the software, as Moran and Herrington repeatedly and forcefully point out, does not grade “essays as well as a human TA”: far from it, in fact.
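To make the gaming worry concrete: any scorer that leans on surface statistics can be inflated mechanically. A toy sketch, with invented features and weights far cruder than any real system’s:

```python
# Toy surface-feature "grader": rewards long words and long sentences.
# Real systems are subtler, but the gaming strategy is the same in kind.

def naive_score(essay: str) -> float:
    words = essay.split()
    sentences = [s for s in essay.split(".") if s.strip()]
    avg_word_len = sum(len(w) for w in words) / len(words)
    avg_sent_len = len(words) / len(sentences)
    return 2.0 * avg_word_len + 0.5 * avg_sent_len  # invented weights

honest = "The crash hurt banks. People lost jobs. Farms failed too."
gamed = ("Notwithstanding multitudinous socioeconomic ramifications, "
         "intercontinental macroeconomic destabilization precipitated "
         "unprecedented institutional disintegration everywhere.")

print(naive_score(honest))  # plain, clear prose scores lower
print(naive_score(gamed))   # polysyllabic padding scores higher
```

The second “essay” says nothing, but on these features it wins handily.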
And in American university writing classes, at least, the most common antidote to plagiarism is the instructor’s knowledge of students’ styles: my colleagues and I frequently catch plagiarism simply because students don’t believe that they have an individual style — and I don’t think these programs can diachronically monitor changes in style.
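For what it’s worth, crude diachronic monitoring is imaginable; a sketch of what it would even have to look like, with features and a threshold I’ve invented purely for illustration, shows how thin it is next to a teacher’s sense of a student’s voice:

```python
# Crude stylometry: compare a new submission against a student's earlier
# ones on a few surface features and flag a large shift for human review.
import statistics

def features(text: str) -> list[float]:
    words = text.split()
    sentences = [s for s in text.split(".") if s.strip()]
    return [
        sum(len(w) for w in words) / len(words),       # mean word length
        len(words) / len(sentences),                   # mean sentence length
        len({w.lower() for w in words}) / len(words),  # type-token ratio
    ]

def style_shift(previous: list[str], new_text: str) -> float:
    """Mean absolute difference from the average of earlier submissions."""
    history = [features(t) for t in previous]
    mean = [statistics.mean(col) for col in zip(*history)]
    return sum(abs(a - b) for a, b in zip(features(new_text), mean))

earlier = [
    "I think the poem is sad. It uses short lines. I liked it.",
    "The story was good. The ending surprised me. I read it twice.",
]
suspect = ("The poem's melancholic cadence, reinforced by its truncated "
           "lineation, evinces a profound elegiac sensibility.")

if style_shift(earlier, suspect) > 3.0:  # invented threshold
    print("style shift flagged; worth a human look")
```

Surface features catch the crude cases; they say nothing about whether the new voice is growth or theft, which is where the instructor comes in.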
Norman
Assuming you have eliminated factors such as a marker’s incompetence and/or bias, one of the biggest problems in marking has been [as Jill suggested] the difficulty of making consistent comparisons across a large number of items.
Marking set after set of essays from a small group of about 35 students, I spent an inordinate amount of time on that first set. Time and again I revised their order, “re-marking” each and, where appropriate, moving an essay “up or down” in my pile.
With subsequent assignments, I placed them initially in the same order as the students’ previous assignment, then [with mark book beside me] marked them in that same sequence, sometimes moving an essay up, sometimes moving one down. This ensured there was a reasonably consistent standard of marking applied across the whole group.
Assessing students via the computer-based “marking programme” mentioned above doesn’t appeal to me at all; but I do see it having value as a preliminary sorting tool. I’d love to be able to run each batch of essays through a programme which sorted them into some rough order. As each essay was marked, it would be easier to compare it with essays of reasonably similar quality. Despite what some may pretend, marking essays is far more difficult when you’re moving backwards and forwards among a mixture of brilliant, hopeless and mediocre efforts.
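What I’m imagining is really just a machine pre-sort followed by human fine-ordering. A sketch of that workflow, where rough_score is a placeholder for whatever crude metric such a programme might use (mine here is simply distinct vocabulary, purely illustrative):

```python
# Sketch of a pre-sorting workflow: the machine orders the batch roughly,
# so the human marker always compares essays of similar quality.

def rough_score(essay: str) -> float:
    """Placeholder metric: distinct vocabulary as a crude proxy for quality."""
    return float(len({w.lower() for w in essay.split()}))

def presort(essays: dict[str, str]) -> list[str]:
    """Order student names from roughly weakest to strongest essay."""
    return sorted(essays, key=lambda name: rough_score(essays[name]))

batch = {
    "student_a": "Banks failed. People lost jobs.",
    "student_b": "The depression reshaped American politics because voters "
                 "blamed the party in power for the collapse.",
    "student_c": "The crash of 1929 exposed structural weaknesses in banking.",
}

for name in presort(batch):
    # The human marks in this order, nudging neighbours up or down,
    # much as I describe doing by hand with the mark book beside me.
    print(name, rough_score(batch[name]))
```

The machine never assigns the mark; it only arranges the pile so that each comparison is between near neighbours rather than between brilliant and hopeless.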
scribblingwoman
Looking good about now
jill/txt points towards grading software. That’s right. Why should students be the only ones who can cheat?…
HeLeN
Computer assessors?
Hey Jill – I don’t think computer assessors will work in reality – I still prefer my reports to be marked by humans (with feelings and emotions), though it would be good if computers could mark all our assignments (reports)……
Peter
How ’bout the Automated Assessor System of Real Property Tax Declaration?
Todd Heldt
There is a short story (whose author I have forgotten) about a supercomputer that writes beautiful, complex, rhythmically perfect poems. Everyone loves it except the human poets, who end up scaling the fence at night to destroy it, because it has rendered them useless. Though it comes at the situation from a different angle, I think we lose some of our humanity if we let machines grade our students’ papers. What does it say to students if we make assignments that we will not read? What does it say to them that a human is not even going to look at what they have written? To be sure, of the many student essays I read over my years as an English teacher, far fewer gave me joy than burnout, but the impetus behind computerized grading is purely cynical. It says, “These students are writing nothing I will waste my time reading.” I can see administrations salivating at the prospect of larger classes and fewer salaries to pay, but it is a moribund step we should avoid taking.