All 3 entries tagged Cheating
March 03, 2006
Writing about web page http://mathstore.ac.uk/articles/maths-caa-series/feb2006/
Imagine a test with 5 questions, where each question is selected randomly from a bank of 10 related alternatives. Some 100,000 different tests can be generated.
Question: How many of these would you need to generate, on average, to have sight of all 50 alternative questions?
Answer: Only 43 (Douglas Adams was one out).
This surprising fact should give pause for thought to an author of online exams concerned about cheating. One of my favourite models for driving learning through assessment is to offer students a number of attempts (say 5) at randomly-generated tests in formative mode before they take the one that counts for credit. In the given example, 10 students colluding would have 50 formative attempts between them — more than the 43 tests needed, on average, to see every question — so they could suss out all, or most of, the questions stored in the banks before they take their summative test.
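The figure of 43 is easy to check by simulation. The sketch below (mine, not from the linked article) draws one random question per bank for each generated test and counts how many tests it takes before all 50 questions have appeared:

```python
import random

def tests_needed(banks=5, bank_size=10, rng=random):
    """Count randomly generated tests until every question
    in every bank has appeared at least once."""
    seen = [set() for _ in range(banks)]
    tests = 0
    while any(len(s) < bank_size for s in seen):
        tests += 1
        for s in seen:
            s.add(rng.randrange(bank_size))  # one random question per bank
    return tests

def estimate_mean(trials=20000, seed=1):
    """Monte Carlo estimate of the expected number of tests."""
    rng = random.Random(seed)
    return sum(tests_needed(rng=rng) for _ in range(trials)) / trials

if __name__ == "__main__":
    print(estimate_mean())  # lands in the low forties, in line with the 43 above
```

Each bank on its own is the classic coupon-collector problem (about 29 tests to see all 10 alternatives); the answer is larger because all five banks must be completed.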
February 03, 2006
My thanks to the Statistics Dept for slotting me into the long agenda of their Teaching Committee meeting on 1st February. We had a useful exchange of views on computer-aided assessment (CAA). Here are some of the points raised:
Cost-Effectiveness: The time and effort required to learn to use an assessment package and put well-designed tests online needs to be justified by savings elsewhere and an improvement in student learning.
Deep Learning: There is as yet no convincing evidence that CAA can be used to assess and mediate deeper levels of knowledge and understanding.
Cheating: This is a central issue in the Statistics Department. They have a strict policy of zero tolerance of cheating, even for tests carrying minimal (say 5%) credit. They want to establish very clearly from the outset what their testing and examining means, because their students come from a wide range of cultural backgrounds (50% from overseas) and may bring differing assumptions and conventions about assessment practice.
Formative Experiment: One of their first-year modules would be suitable for regular online tests of routine knowledge. The tests would
- keep the students engaged with the module material as it unfolds and
- provide the Department with useful information about their students' difficulties and progress.
The module is taken by 400 students and the University's largest computer suite available for assessment holds around 50 students. The Department's zero tolerance of cheating would therefore mean 8 hours of invigilated sessions, a very inefficient alternative to the in-class tests currently used. However, online formative tests would be a good way to prepare the students for the summative tests they take in lecture theatres, and it was agreed to try to set these up next year if resources to prepare the material can be found.
October 11, 2005
I got an encouragingly lively reception when I climbed Gibbet Hill on Friday afternoon (a walk I miss since Mathematics moved to Central Campus) to talk to the Biological Sciences Undergraduate Teaching Committee about CAA. The discussion ate 15 minutes or more into their precious meeting time but brought me some positive outcomes:
- Two first-year modules were identified where CAA would help, and suitable materials already exist for one of them
- CAA could play a useful role in the continuous assessment of several second-year modules
- The departmental Chair, Professor Robert Freedman, did not entirely rule out the possibility of freeing up staff time to develop CAA resources
Two of the discussion points were:
- The value of using paper-based data-capture methods for continuous assessment
- The risks of plagiarism
I will discuss data capture in a separate blog but will make a couple of points here about the risks of cheating in computer-aided assessment when it is used in a 'summative' mode, that is to say, a mode where the assessment directly affects the outcome of a student's degree (for example, yielding marks for course credit or determining progression).
First some general points:
- The level of security in exams only needs to be proportional to the weight of the assessment
- A vanishingly small amount of course credit seems to get most learners engaged
- Credit representing say 10% of a first-year module affects at worst the second decimal place in the final percentage used to determine a student's degree class
- Students who cheat in continuous assessment shoot themselves in the foot; they are usually exposed in the final exam
- No system of invigilation offers 100% security against cheating
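To see where the "second decimal place" point comes from, here is a back-of-the-envelope calculation; the module count and year weighting below are assumptions for illustration, not actual degree regulations:

```python
# Illustrative arithmetic only; the weights are assumptions,
# not any university's actual degree-classification rules.
test_weight_in_module = 0.10  # the test is worth 10% of the module
modules_per_year = 8          # assumed first-year module load
year1_weight = 0.10           # assumed weight of year 1 in the degree

# A swing of 10 marks (out of 100) on the online test...
swing_on_test = 10
# ...moves the final degree percentage by:
swing_on_degree = (swing_on_test * test_weight_in_module
                   / modules_per_year * year1_weight)
print(round(swing_on_degree, 4))  # 0.0125 percentage points
```

Under these assumptions even a 10-mark swing on the test shifts the final degree percentage by about one hundredth of a point.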
Now a few practical suggestions for raising the level of security in computer-aided tests and exams used in summative mode:
- Generate many variations of each question type and permute them so that each student answers a different question paper; several assessment packages are good at this for numerical questions
- When a student logs on to a test, display his or her university card on the table and on screen to make identity confirmation easy
- A couple of random visits by an invigilator to the computer room during a 50-minute exam is a good deterrent
- Install CCTV in computer suites used for assessment
- Disable access to other computer applications and directories while the exam is in progress if the style of the questions makes this necessary (to prevent students using, for example, the internet or some mathematical software to answer the questions)
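The first suggestion above — banks of alternatives permuted so each student sees a different paper — can be sketched as follows; the question labels and the per-student seeding scheme are hypothetical, not taken from any particular assessment package:

```python
import random

def make_paper(banks, student_id, exam_seed=2006):
    """Pick one question from each bank, deterministically per student,
    so every student gets a different but reproducible paper."""
    rng = random.Random(f"{exam_seed}-{student_id}")
    paper = [rng.choice(bank) for bank in banks]
    rng.shuffle(paper)  # also permute the order of the questions
    return paper

# Hypothetical banks: 5 groups of 10 question labels, as in the March example.
banks = [[f"Q{b}.{i}" for i in range(1, 11)] for b in range(1, 6)]
print(make_paper(banks, student_id=1234567))
```

Seeding from the student ID means the paper can be regenerated for marking or appeals without storing every paper, which is one reason assessment packages handle this style of randomisation well for numerical questions.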
Other suggestions please.