Most people would agree that what computers can do is impressive. More importantly for those trying to save money, using computers to grade student essays in high-stakes testing is much cheaper than hiring and training human readers. Consequently, there is significant support for this practice.
But others oppose it.
HumanReaders.Org recently launched a petition “Against Machine Scoring Of Student Essays In High-Stakes Assessment.”
HumanReaders.Org argues that “computerized essay rating” should not be used for “any decision affecting a person’s life or livelihood” because it is “reductive,” “inaccurate,” and “unfair.” Along with their petition, HumanReaders.Org provides a summary of the relevant research and a substantial list of references.
What’s at stake in this debate?
Of course, there is the issue itself. The kinds of high-stakes tests that these computers score influence education in significant ways, shaping what and how K-12 teachers teach, who gets to go to college, and what students will be able to do when they get there. So we want to get it right.
But the bigger picture has to do with how we think about and make decisions about education. The stakes in the specific question of automated essay scoring illustrate how much depends on getting those involved in education to read the scholarship on teaching and learning.
HumanReaders.Org gets this right. They ground their specific arguments in evidence from the research. Just as importantly, they approach the whole issue in a way that is clearly informed by the broader scholarship on teaching and learning, for instance, in knowing which questions to ask.
Anyone who wants to take up a contrary position should do the same.