The problems with MOOCs 1: Robo-essay grading
Massive Open Online Courses (MOOCs) are all the rage in 2013 academia. These are courses offered on the Web, designed to be taken by tens of thousands of students at once, and offered at no cost to the student. There is much that is desirable about the MOOC model – in the same way that public libraries are desirable. But the quality of the total educational experience is dubious.
Several of the MOOC providers are for-profit. (How will they make money? Perhaps by selling certificates attesting to students’ participation in MOOCs.) But arguably the most aggressive and prestigious MOOC consortium is EdX – it currently offers courses from Harvard, MIT, and UC Berkeley, and beginning in Fall 2013 it will expand to include more schools from the US (U. Texas, Rice, Georgetown, Wellesley), Canada (McGill, U. Toronto), Europe (TU Delft, EPFL), and even Australian National U. I’ve been focusing my attention on EdX – both because of the prestige of its member schools and because it is non-profit.
As I will report in subsequent posts, I’ve discovered some questionable pedagogy in several courses I’ve examined. But before discussing my own investigations, I’d like to point to a news story from yesterday – John Markoff’s New York Times report that EdX is releasing software that will let MOOC instructors (or conventional college instructors) have computers auto-grade essays.
There are many problems with auto-grading essays – contemporary automated graders cannot actually “understand” an essay, so they must instead depend on superficial features – features that can be gamed. In particular, automated graders cannot address the underlying logic, factual assertions, or actual meaning of a student essay. MIT’s Les Perelman has demonstrated this repeatedly. The BLT blog previously reported how a nonsense essay by Perelman (quoted in red here) received the maximum possible grade from automated grading software. (Perelman gives a good critique of studies of automated grading software here.)
Using automated grading software in an online environment will let students, through repeated submissions, learn which superficial features (e.g., “use big words”) cause the automated grader to award high grades. We will not be teaching students critical thinking or cogent writing; we will be conditioning them to successfully “game” automated grading software.
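To make the mechanism concrete, here is a toy scorer in Python – a purely hypothetical sketch of my own, not EdX’s software or any real product. Its features and weights are invented for illustration, but it shows how a grader that rewards only surface features (length, word size, vocabulary variety) and never looks at meaning can be pushed to a high score simply by padding an essay with long words.

```python
# A deliberately naive, made-up essay scorer. Purely illustrative -- it is NOT
# how any real automated grader works; the features and weights are invented.

def naive_score(essay: str) -> float:
    """Score an essay on a 0-6 scale using only surface features."""
    words = essay.split()
    if not words:
        return 0.0

    word_count = len(words)
    avg_word_length = sum(len(w) for w in words) / word_count
    unique_ratio = len(set(w.lower() for w in words)) / word_count

    # Longer essays, longer words, and more varied vocabulary all raise the
    # score -- whether or not the essay says anything true or coherent.
    length_points = min(word_count / 250, 1.0) * 3.0     # up to 3 points
    vocab_points = min(avg_word_length / 6.0, 1.0) * 2.0  # up to 2 points
    variety_points = unique_ratio * 1.0                   # up to 1 point

    return round(length_points + vocab_points + variety_points, 2)


# A student who discovers these features can "game" the grader: padding an
# essay with long, impressive-sounding words raises the score with no gain
# in meaning.
print(naive_score("Cats sit."))  # short, plain words -> low score
print(naive_score(" ".join(
    ["Notwithstanding multifarious epistemological considerations"] * 60
)))  # 240 words of repeated jargon -> high score
```

Nothing in this sketch examines logic, facts, or meaning, which is precisely the worry: once students learn which surface features the grader rewards, the grade stops measuring writing at all.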
I’m simply stunned that EdX member institutions are taking this seriously. But, according to Markoff’s report, they are:
[T]he growing influence of the EdX consortium to set standards is likely to give the technology a boost. On Tuesday, Stanford announced that it would work with EdX to develop a joint educational system that will incorporate the automated assessment technology.
Indeed, a founder of one of the commercial MOOC providers argues that training students to “game” the grader is actually a benefit, since it will make learning fun:
“It allows students to get immediate feedback on their work, so that learning turns into a game, with students naturally gravitating toward resubmitting the work until they get it right,” said Daphne Koller, a [Stanford] computer scientist [professor] and a founder of [for-profit MOOC provider] Coursera.
Teaching good writing is admirable. Teaching critical thinking is admirable. But the MOOCs are proposing something different: teaching students to submit and resubmit an essay until they learn the idiosyncrasies of the automated grading software and can regularly “trick” it into giving good grades.