I got it wrong!
Christopher Munro, who is working with Michael McCabe on the SEXPOT Project, has kindly pointed out that my interpretation of the following "partial grading" formula (for multiple-response questions in software package B) was wrong:

    score = (number right − number wrong) / (# of correct answers)

The reality is even more bizarre.
To explore the implications of this formula, let's lay down some ground rules for multiple-response questions (MRQs) and look at a concrete example.
Rule 1: The statements that make up the choices are either true or false (although Martin Greenhow told me recently that he is exploring a question type for Mathletics that also allows the answer 'undecidable').
Rule 2: For each part of the MRQ, it is equally difficult to decide whether the statement is true or false (unlike competitions in popular magazines, which often make it humorously obvious which statements are false). I usually aim to satisfy this rule in my MRQs, although I freely acknowledge it's not an exact science and anyway, what is 'hard' varies from student to student.
Rule 3: It is equally likely that a statement is true or false (so that, on average for a four-part question, one question in sixteen has all parts false).
In the above formula the "# of correct answers" actually means the number of true statements in the question.
For our concrete example, take a four-part MRQ and a model student who gets all four choices correct. If, say, just one of the four statements is true, the student scores (4 − 0)/1 = 4 marks. On the other hand, if three statements are true and one is false, the student gets only (4 − 0)/3 = 4/3 marks for being just as clever as before. Maybe I am missing something, but that strikes me as a daft scoring system.
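The arithmetic is easy to check in a few lines of Python. This is only a sketch of the formula as I read it; the function name is mine, not anything from the package itself:

```python
def partial_grade(n_right, n_wrong, n_true):
    """Hypothetical rendering of the 'partial grading' formula:
    (number right - number wrong) divided by the number of true statements."""
    return (n_right - n_wrong) / n_true

# A model student answers all four parts of a four-part MRQ correctly:
print(partial_grade(4, 0, 1))  # 1 true statement  -> 4.0 marks
print(partial_grade(4, 0, 3))  # 3 true statements -> 1.333... marks
```

The same flawless performance is worth three times as much in the first case as in the second, purely because of how many statements happened to be true.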
And for the one question in sixteen where all four answers are false, the student scores (4 − 0)/0, which is division by zero: no defined score at all.
Luckily here, the software under discussion violates Rule 3 by insisting that at least one statement is chosen to be true!
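The all-false case can be checked the same way. With no true statements the denominator is zero, and the sketch below (again using my own hypothetical function name, not the package's code) simply blows up, which is presumably why the software forbids the case:

```python
def partial_grade(n_right, n_wrong, n_true):
    # (number right - number wrong) / (number of true statements)
    return (n_right - n_wrong) / n_true

try:
    partial_grade(4, 0, 0)  # the one-in-sixteen all-false question
except ZeroDivisionError:
    print("score is undefined")  # hence the requirement of at least one true statement
```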