January 24, 2007
This is the next contribution to my list of criteria for judging whether CAA software, particularly that with mathematical capabilities, is up to the job. Today I look at the following headings:
• Question types: MCQs, MRQs, yes/no, hot-spot, drag-and-drop, and so on—the more the merrier! More searching questions, assessing deeper mathematical knowledge, can be set when the assessment package can call on the services of a computer algebra system (CAS), e.g. Maple TA and STACK. An option for multiple-part questions is valuable, especially if (i) partial credit is available and (ii) answers to later parts can be judged correct when calculations based on incorrect answers to earlier parts are correctly performed.
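That second feature—often called "follow-through" marking—can be sketched in a few lines. The question, answers, and marking scheme below are all hypothetical; the point is simply that part (b) is judged against the student's own part (a), not the true answer:

```python
# Hypothetical two-part question with follow-through marking.
# Part (a): solve x + 3 = 10          (correct answer: 7)
# Part (b): double your answer to (a) (correct answer: 14)

def mark_two_part(ans_a: float, ans_b: float) -> dict:
    """Award credit for part (b) when it is consistent with the
    student's own part (a), even if part (a) was wrong."""
    correct_a = 7.0
    marks = {"a": 0, "b": 0}
    if ans_a == correct_a:
        marks["a"] = 1
    # Follow-through: compare (b) with 2 * (the student's own (a)).
    if ans_b == 2 * ans_a:
        marks["b"] = 1
    return marks

print(mark_two_part(7, 14))   # both parts correct
print(mark_two_part(5, 10))   # (a) wrong, but (b) follows correctly from it
```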
• Marking/grading/scoring: It is important for the author (i) to have complete control over the marking scheme for each question, (ii) to be able to give the user full information about how each question will be scored, and (iii) to have the option of revealing scores to the user at specified stages. Default marking schemes may be useful, but they should be easy to override so that the author can specify a different marking scheme for each question. If an answer involves mathematical expressions, the software should be able to recognise mathematically equivalent answers.
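A CAS does this properly by symbolic simplification, but even without one, equivalence can be approximated by evaluating both expressions at many random points—a rough sketch (a real system would parse student input safely rather than use `eval`):

```python
import math
import random

def equivalent(expr1: str, expr2: str, var: str = "x", trials: int = 50) -> bool:
    """Treat two expressions as equivalent if they agree (numerically)
    at many randomly sampled points."""
    rng = random.Random(0)
    env = {"sin": math.sin, "cos": math.cos, "exp": math.exp}
    for _ in range(trials):
        x = rng.uniform(-10, 10)
        scope = dict(env, **{var: x})
        try:
            v1 = eval(expr1, {"__builtins__": {}}, scope)
            v2 = eval(expr2, {"__builtins__": {}}, scope)
        except (ZeroDivisionError, ValueError):
            continue  # skip points where either expression is undefined
        if not math.isclose(v1, v2, rel_tol=1e-9, abs_tol=1e-9):
            return False
    return True

print(equivalent("(x + 1)**2", "x**2 + 2*x + 1"))  # True
print(equivalent("2*x", "x + 1"))                  # False
```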
• Feedback: I believe this to be the most important pedagogical feature of CAA software! The author should be able to provide various types of feedback for each question (e.g. (1) whether the submitted answer was right or wrong, (2) the bare marks scored, (3) the correct answer—for instance, the correct MCQ choice, numerical entry, or symbolic expression, (4) the full worked solution) and to specify the point at which the feedback is made available (e.g. upon submission of a single answer, of a completed assessment, or at some later time). If questions contain variable parameters, the feedback should be tailored to the parameter values used. Another useful feature is an option to provide one or more graded hints after a wrong answer and to adjust the marks accordingly. An advanced feature, explored in Mathletics, is to be able to use a student's answer to guess at errors or misconceptions (malrules) and to respond to them in the feedback.
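The malrule idea can be illustrated crudely (a real system would match answers by CAS equivalence, not string comparison; the question and malrules here are invented for illustration):

```python
# Hypothetical question: differentiate x**3.
CORRECT = "3*x**2"

# Anticipated wrong answers (malrules) mapped to targeted feedback.
MALRULES = {
    "x**2":   "It looks like you reduced the power but forgot the "
              "coefficient: d/dx x^n = n*x^(n-1).",
    "3*x**3": "You multiplied by the power but did not reduce it: "
              "the new power should be n-1.",
    "x**4/4": "That is the integral of x^3, not the derivative.",
}

def feedback(answer: str) -> str:
    """Return targeted feedback when the answer matches a known malrule,
    otherwise a generic response."""
    if answer == CORRECT:
        return "Correct!"
    return MALRULES.get(
        answer,
        "Incorrect. Try applying the power rule d/dx x^n = n*x^(n-1).",
    )

print(feedback("x**4/4"))
```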
• Random features: The inclusion of varying degrees of randomness in the construction of individual questions and of whole assignments/tests/exams can significantly enhance the educational value of CAA and simultaneously reduce the risks of cheating. At the assignment level, the software should be capable of selecting each question randomly from a bank of questions which all test the same skill/knowledge/understanding. At the question level, there is considerable scope for randomised variation: place-holders can vary such things as units, names, even subject contexts; and in mathematical subjects, parameters drawn from specified ranges of numerical values require students to carry out different calculations, each testing essentially the same knowledge or skills. Considerable care is needed to ensure the questions make sense for all choices of parameters (for instance, avoiding division by zero), but in a science discipline it is possible to generate millions of different, but educationally equivalent, questions. This makes copying answers pointless and allows students virtually unlimited practice in formative mode. When sufficient randomness is built into a question, the question template becomes a reusable learning object (RLO), beloved of the educational theorists who study computer-mediated learning.
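The care needed over parameter choices is the interesting part. A minimal sketch of a question template (the question itself is invented) that rejects degenerate parameter values before emitting a question:

```python
import random

def make_linear_question(rng: random.Random) -> dict:
    """Generate 'Solve ax + b = c' with random integer parameters,
    rejecting degenerate choices: a = 0 leaves nothing to solve for,
    and we also insist on a whole-number solution."""
    while True:
        a = rng.randint(-9, 9)
        b = rng.randint(-9, 9)
        c = rng.randint(-9, 9)
        if a != 0 and (c - b) % a == 0:
            return {
                "text": f"Solve {a}x + {b} = {c}",
                "answer": (c - b) // a,
                "params": (a, b, c),
            }

rng = random.Random(42)
for _ in range(3):
    q = make_linear_question(rng)
    print(q["text"], "->", q["answer"])
```

Each student sees a different instance, but every instance exercises exactly the same skill—which is what makes the template reusable.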