January 24, 2007

CAA Fitness for Purpose: Pedagogy

Follow-up to Fit for Purpose? from Computer-aided assessment for sciences

This is the next contribution to my list of criteria for judging whether CAA software, particularly that with mathematical capabilities, is up to the job. Today I look at the heading

Pedagogy

Question types: MCQs, MRQs, yes/no, hot-spot, drag-and-drop, and so on: the more the merrier! For the assessment of deeper mathematical knowledge, more searching questions can be set when the assessment package can call on the services of a computer algebra system (CAS), e.g. Maple TA and STACK. An option for multiple-part questions is valuable, especially if (i) partial credit is available and (ii) answers to later parts can be judged correct when calculations based on incorrect answers to earlier parts are correctly performed.
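
As an illustration of (ii), here is a minimal sketch of such “follow-through” marking for a hypothetical two-part question: part (a) asks for a gradient, part (b) for the y-intercept of the corresponding tangent line. The function name, arguments and tolerance are my own, not any particular package’s.

    # Sketch of follow-through marking: part (b) earns credit when the
    # correct method is applied to the student's own part (a) answer.
    def mark_two_part(a_submitted, b_submitted, a_correct, f_x0, x0, tol=1e-9):
        marks_a = 1 if abs(a_submitted - a_correct) <= tol else 0
        b_from_true = f_x0 - a_correct * x0        # intercept from the true gradient
        b_from_student = f_x0 - a_submitted * x0   # intercept from the student's gradient
        if abs(b_submitted - b_from_true) <= tol:
            marks_b = 1
        elif abs(b_submitted - b_from_student) <= tol:
            marks_b = 1    # right method, wrong input carried forward: full credit
        else:
            marks_b = 0
        return marks_a, marks_b
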
Marking/Grading, Scoring: It is important for the author (i) to have complete control over the marking system for each question, (ii) to be able to give the user full information about how each question will be scored, and (iii) to have the option of revealing scores to the user at specified stages. Default marking schemes may be useful, but they should be easy to override, and it should remain possible to specify a different marking scheme for each question. If an answer involves mathematical expressions, the software should be able to recognise any form of the answer that is mathematically equivalent to the correct one.
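
To illustrate what recognising equivalent answers involves, here is a sketch using the SymPy computer algebra library; the function name and the decision to mark unparseable input as simply wrong are my own choices.

    # Sketch: accept any expression algebraically equal to the model answer.
    from sympy import simplify, sympify, SympifyError

    def equivalent(student_input, model_answer):
        try:
            difference = sympify(student_input) - sympify(model_answer)
        except SympifyError:
            return False               # unparseable input is marked wrong
        return simplify(difference) == 0

    # "x + x + 2" and "2*(x + 1)" both earn the mark:
    assert equivalent("x + x + 2", "2*(x + 1)")
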
Feedback: I believe this to be the most important pedagogical feature of CAA software! The author should be able to provide various types of feedback for each question (e.g. (1) whether the submitted answer was right or wrong, (2) the bare marks scored, (3) the correct answer, for instance the correct MCQ choice, numerical entry, or symbolic expression, (4) the full worked solution) and to specify the point at which the feedback is made available (e.g. upon submission of a single answer, of a completed assessment, or at some later time). If questions contain variable parameters, the feedback should be tailored to the parameter values used. Another useful feature is an option to provide one or more graded hints after a wrong answer and to adjust the marks accordingly. An advanced feature, explored in Mathletics, is the ability to use a student’s answer to guess at errors or misconceptions (malrules) and to respond to them in the feedback.
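
To make the malrule idea concrete, here is a sketch of parameter-tailored feedback for differentiating x^n; the two anticipated errors are illustrative guesses of mine, not Mathletics’ actual rule set.

    # Sketch: tailor feedback to the random parameter n and to known malrules.
    from sympy import Symbol, simplify, sympify

    x = Symbol("x")

    def feedback(student_input, n):
        correct = n * x**(n - 1)
        malrules = [
            (n * x**n, "You multiplied by the power but forgot to reduce it by 1."),
            (x**(n - 1), "You reduced the power but forgot to multiply by n = %d." % n),
        ]
        answer = sympify(student_input)
        if simplify(answer - correct) == 0:
            return "Correct: the derivative of x^%d is %d*x^%d." % (n, n, n - 1)
        for wrong, message in malrules:
            if simplify(answer - wrong) == 0:
                return message
        return "Incorrect: the derivative of x^%d is %d*x^%d." % (n, n, n - 1)

    print(feedback("x**3", 4))   # triggers the second malrule
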
Random features: The inclusion of varying degrees of randomness in the construction of individual questions and whole assignments/tests/exams can significantly enhance the educational value of CAA and simultaneously reduce the risks of cheating. At the assignment level, the software should be able to fill each question slot by selecting randomly from a specified bank of questions that all test the same skill/knowledge/understanding. At the question level, there is considerable scope for randomised variation: using place-holders to vary such things as units, names, even subject contexts; and, in mathematical subjects, using parameters within specified ranges of numerical values so that students must carry out different calculations, each testing essentially the same knowledge or skills. Considerable care is needed to ensure the questions make sense for all choices of variables (for instance, avoiding division by zero), but in a science discipline it is possible to generate millions of different, but educationally equivalent, questions. This makes copying answers pointless and allows students virtually unlimited practice in formative mode. When sufficient randomness is built into a question template, it becomes a special case of the reusable learning object (RLO) beloved of educational theorists who study computer-mediated learning.
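
A minimal sketch of such a template (the wording of the question is my own invention): the generator re-draws its random parameters until the sanity constraints hold, here ruling out division by zero and non-integer answers.

    # Sketch: a randomised question template with constraint checking.
    import random

    def make_question():
        while True:
            a = random.randint(-10, 10)
            b = random.randint(-10, 10)
            if b != 0 and a % b == 0:   # reject zero divisors and fractional quotients
                break
        return {"text": "Evaluate %d / %d." % (a, b), "answer": a // b}

    # Every call yields a different but educationally equivalent instance:
    print(make_question())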


- One comment

  1. Juliette White

    This probably implicitly falls into the list above, but one of the things that I got a bit frustrated by with the CAA systems that I’ve used for mathematics is the lack of support for ‘mastery testing’. You want to be able to give students a random set of, say, 20 differentiation questions from a bank of questions (with maybe a certain number from one subbank, a certain number from another, etc., but with the order mixed up) and make them redo the test until they’ve got them all correct.
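
    (A sketch of such a mastery loop, with made-up plumbing: each question object is assumed to have an ask() method that presents the question and returns True if it was answered correctly.)

        # Sketch of a mastery loop: draw from each subbank, shuffle,
        # and re-sit the whole paper until one clean, all-correct run.
        import random

        def mastery_test(subbanks, counts):
            paper = []
            for bank, n in zip(subbanks, counts):
                paper.extend(random.sample(bank, n))
            random.shuffle(paper)
            # q.ask() is hypothetical: present the question, return True if correct.
            while not all([q.ask() for q in paper]):
                pass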

    I’m also not sure if this counts under pedagogy, but it’s also really important for the teacher to be able to easily see all the wrong answers to a given question, so that they can spot what the common mistakes might be.
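
    (Even a crude tally of wrong submissions per question, most frequent first, would surface the common errors; the submissions below are made up.)

        # Tally wrong submissions so recurring mistakes stand out at a glance.
        from collections import Counter

        wrong_answers = ["2x", "2x", "x^2/2", "2x", "x"]   # illustrative data
        for answer, count in Counter(wrong_answers).most_common():
            print("%3d  %s" % (count, answer))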

    24 Jan 2007, 20:27

