All entries for February 2006
February 20, 2006
… on Tuesday, 14th March in the Mathematics Institute. In fact, why not come a bit earlier and/or stay a bit later for the Computer-Aided Assessment Demo Day.
The emphasis of this workshop is on assessment in Science and other maths-related subjects. Here are some of the highlights:
- Maple TA — in theory (Adept Science) and in practice (Michael McCabe)
- STACK — reaches parts of the cerebral cortex other CAA software cannot reach
- Mathletics — with intelligent feedback for the fuller learning experience
- SEXPOT — we all deserve a fair share of the marks
- Maths Fit — will help your students to prepare
The full programme can be downloaded here.
Free lunch? Of course, there's no such thing, unless you are a speaker or on the home team. It will cost £6 for a light buffet and drinks. I need to know how many to cater for, so it would help greatly if you could sign up here.
February 13, 2006
Writing about web page http://www.apple.com/uk/imac/isight.html
How serious is the risk of cheating when computer-aided assessment (CAA) is used in summative mode (i.e. for module credit)? Attitudes within the Warwick Science Faculty vary widely: the Chemists are laid back; in Biological Sciences there are some strong individual concerns; the Statistics Department is united in enforcing a strict 'zero tolerance' policy.
Although I believe the safeguards against cheating I discussed here are proportionate and reasonably robust, tighter measures are needed to keep everyone on side. Here are a couple of further ideas:
- My suggestion of CCTV cameras in computer workrooms reserved for summative assessment evoked an unenthusiastic response from those that hold the purse strings. So how about dummy cameras instead (real cameras, I mean, just not wired up to any monitoring system — surely cheap to install if recycled from an upgrade elsewhere in the system)?
- What about an all-seeing eye in each monitor that attaches a set of random mug-shots to the file a student submits when completing a piece of summative assessment — a dozen or so compressed snaps taken at random intervals during a 50-minute test should provide ample evidence of any cheating. If Apple can build iSight into their latest iMac design, it shouldn't be too many years before Windows PCs follow suit.
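The random-snapshot idea can be sketched as a tiny scheduler. This is purely illustrative (no such webcam API is assumed): it just picks the dozen-or-so random moments, in seconds from the start of a 50-minute test, at which compressed snaps would be taken.

```python
import random

def snapshot_times(test_minutes=50, shots=12, seed=None):
    """Pick random, distinct moments (seconds from the start of the
    test) at which the monitor's camera would grab a snapshot.
    A fixed seed makes the schedule reproducible for testing."""
    rng = random.Random(seed)
    total_seconds = test_minutes * 60
    # sample without replacement, then sort into chronological order
    return sorted(rng.sample(range(total_seconds), shots))

times = snapshot_times(seed=1)
print(times)  # twelve increasing values between 0 and 2999
```

Because the moments are unpredictable to the student, even a handful of snaps would make impersonation risky for the whole duration of the test.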
I'd welcome readers' suggestions for practical and effective CAA security.
February 10, 2006
Writing about web page http://mathstore.ac.uk/articles/maths-caa-series/jan2006/
Cliff Beevers' article IT was twenty years ago today … covers two decades of work on computer-aided assessment for mathematical disciplines at Heriot-Watt University. The challenges of testing mathematics online are presented as an intelligent and ever-changing compromise between effective pedagogy and the technical limitations of the medium.
The CAA package they have created since 1985 is currently offering formative testing to 25,000 students in Scottish schools. As a registered blind teacher, Cliff is well placed to address the accessibility issues; the Heriot-Watt development team took the mathematical tests to youngsters at the Royal Blind School in Edinburgh last year.
He ends with the following advice:
- Keep the questions fresh through randomisation
- Seek answers using mathematical expressions wherever possible
- Provide optional steps for those that need them
- Report all of this to student and teacher alike
- Keep all of these new assessments accessible so that disabled students are not disadvantaged
- Extend where possible to include animations, simulations and explorations so that higher order skills can also be tested electronically
February 08, 2006
Feed back! I make no apologies for plugging "Feedback" again.
"Feedback, Feedback, Feedback", as Tony Blair might have said, is at the heart of any contract between teacher and learner. And providing it is one of the things computer-aided assessment can be really good at.
Here is a quote from this paper, Recent Developments in Setting and Using Objective Tests in Mathematics Using QM Perception, presented by E. Ellis, N. Baruah, M. Gill and M. Greenhow to the 9th International CAA Conference in Loughborough last year.
One of us (Martin Greenhow) initially worried that so much feedback was being made available to students that they would simply ignore it. The results of this study clearly show that extensive feedback is welcomed by, and has a positive effect on, most students. Some students requested even more feedback. In effect, the questions are being used as a learning tool alongside, or even instead of, lectures and seminars. This could have rather far-reaching consequences: question designers should focus much of their attention on feedback, the curriculum needs to make time for students to attend to it and the assessment criteria need to reward such student engagement.
Of course, it is one thing to provide feedback, another to ensure that it is acted upon. Encouraging students to make good use of feedback is one of the aims of the FAST Project cited in the related web page.
We started with a crossword clue, and so let's end with one:
Well constructed and square, like a stool perhaps (6 letters)
This scatological clue is attributed to Ximenes (and as usual, culled from The Week magazine). Ximenes was the crossword pseudonym of Derrick Somerset Macnutt, who was Head of Classics at Christ's Hospital. A Housie friend of mine said he would regularly set his class a stiff translation while he got on with his weekly puzzle for the Observer newspaper.
February 06, 2006
Christopher Munro, who is working with Michael McCabe on the SEXPOT Project, has kindly pointed out that my interpretation of the following "partial grading" formula (for multiple-response questions in software package B)
marks = (# of right choices – # of wrong choices) / (# of correct answers)
was wrong. The reality is even more bizarre.
To explore the implications of this formula, let's lay down some ground rules for multiple-response questions (MRQs) and look at a concrete example.
Rule 1: The statements that make up the choices are either true or false (although Martin Greenhow told me recently that he is exploring a question type for Mathletics that also allows the answer 'undecidable').
Rule 2: For each part of the MRQ, it is equally difficult to decide whether the statement is true or false (unlike competitions in popular magazines, which often make it humorously obvious which statements are false). I usually aim to satisfy this rule in my MRQs, although I freely acknowledge it's not an exact science and anyway, what is 'hard' varies from student to student.
Rule 3: It is equally likely that a statement is true or false (so that, on average for a four-part question, one question in sixteen has all parts false).
In the above formula the "# of correct answers" actually means the number of true statements in the question.
For our concrete example, take a four-part MRQ and a model student who gets all four choices correct. If, say, just one of the four statements is true, the student scores (4–0)/1 = 4 marks. On the other hand, if 3 statements are true and one is false, the student only gets (4–0)/3 = 4/3 marks for being just as clever as before. Maybe I am missing something, but that strikes me as a daft scoring system.
And for the one question in sixteen where all four answers are false, the student scores (4–0)/0: division by zero!
Luckily here, the software under discussion violates Rule 3 by insisting that at least one statement is chosen to be true!
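The scoring rule, as I understand it, fits in a one-line function (the names are mine; num_true is the "# of correct answers", i.e. the number of true statements). Running it reproduces both the 4 versus 4/3 anomaly and the division by zero:

```python
def partial_grade(num_right, num_wrong, num_true):
    """Package B's partial-grading rule as described above:
    (right choices - wrong choices) / (number of true statements).
    Raises ZeroDivisionError when no statement is true."""
    return (num_right - num_wrong) / num_true

# A model student answers all four choices of a four-part MRQ correctly:
print(partial_grade(4, 0, 1))  # 4.0  when only one statement is true
print(partial_grade(4, 0, 3))  # 1.33... when three statements are true
```

The same perfect performance earns three times the marks in one case as in the other, and partial_grade(4, 0, 0) simply blows up.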
February 03, 2006
My thanks to the Statistics Dept for slotting me into the long agenda of their Teaching Committee meeting on 1st February. We had a useful exchange of views on computer-aided assessment (CAA). Here are some of the points raised:
Cost-Effectiveness: The time and effort required to learn to use an assessment package and put well-designed tests online needs to be justified by savings elsewhere and an improvement in student learning.
Deep Learning: Convincing evidence is missing to show that CAA can be used to assess and mediate deeper levels of knowledge and understanding.
Cheating: This is a central issue in the Statistics Department. They have a strict policy of zero-tolerance of cheating, even for tests with minimal (say 5%) credit. They want to establish very clearly from the outset what their testing and examining means because their students come from a wide range of cultural backgrounds (50% from overseas) and may bring differing assumptions and conventions about assessment practice.
Formative Experiment: One of their first-year modules would be suitable for regular online tests of routine knowledge. The tests would
- keep the students engaged with the module material as it unfolds and
- provide the Department with useful information about their students' difficulties and progress.
The module is taken by 400 students and the University's largest computer suite available for assessment holds around 50 students. The Department's zero tolerance of cheating would therefore mean 8 hours of invigilated sessions, a very inefficient alternative to the in-class tests currently used. However, online formative tests would be a good way to prepare the students for the summative tests they take in lecture theatres, and it was agreed to try to set these up next year if resources to prepare the material can be found.