All entries for October 2005
October 28, 2005
In recent months, the tedium of the Gibbet Hill traffic queues has reduced one to reading the banners fluttering from lamp-posts proclaiming that "Coventry is the hottest place to be this year". This year Leamington was quite hot enough, thank you very much; the nights were oppressively sticky and restless in our sultry top-floor bedrooms.
"Summer of Cov" states another suggestive slogan. Does anyone really call it Cuventry any more? Coventry may be trying harder, but Glasgow was miles better.
By "data capture" I'll mean the transfer of data from paper to an electronic repository, where it can be safely stored and analysed. It is typically used for processing questionnaires and marking tests. The Warwick E-Lab still offers data capture for a fee but writes: "It is a declining service, not one which is getting better resourced as time goes by". What are the implications of this?
The arguments for putting these activities online are compelling. The E-Lab FormsBuilder tool makes it very easy to create online questionnaires. But I would like to put in some caveats:
- When the Engineering Dept moved from paper to web-based evaluations for its modules some years ago, the number of students responding dropped so drastically that the department was criticised for low participation in its QAA subject review. Incentives must therefore be provided to encourage students to complete evaluation questionnaires: for instance, by integrating them with other online module activities ("completing this short questionnaire is the first part of this week's online assignment"), or by using some of the money saved by going online to offer a prize for a randomly drawn questionnaire received by a given date.
- For some years the Mathematics Department has used Warwick's ITS data capture service to process invigilated paper-based multi-choice tests held in large lecture theatres — the tests count for a small amount of module credit, and four permuted versions of each paper are used to make it difficult to copy answers from neighbours. In the service's heyday, the data captors used touch-sensitive tablets and could turn around 350 answer sheets in 3 hours; an email merge brought the results of a midday exam to individual students by tea-time. Warwick's IT Services now use optical readers, and the response time is much slower. These tests could of course be put online, but although there are eight bookable work areas with just over 200 networked Windows PCs spread around the Warwick campuses, coordinating and monitoring online tests for 350 students would be a serious administrative and technical challenge.
- The Department of Biological Sciences also uses regular paper-based multi-choice tests to assess around 150 first-year students on the module BS125. A member of their support staff processes the answer sheets using their own optical character reader and Speedwell Multiquest software to provide a quick turn-around of the test results and a range of statistical analyses. The effectiveness of this approach makes it unnecessary to consider investing time and effort putting the tests online.
October 18, 2005
… in fact, four more cryptic clues to break the monotony of banging on about CAA. The first is egotistical:
I turn out vast work here (6,6)
The next two are culled from recent issues of The Week, that entertaining take (literally) on the news for those too busy to do justice to a daily paper.
Pound of sultanas (8)
Being as one in bed, you and me (6)
Finally another slightly naughty one that came to me while walking back down Gibbet Hill after meeting the biologists. It takes a liberty with the 'double definition' style:
Sex maniacs described? (7)
October 17, 2005
Here are two desirable features of CAA:
- It improves student learning
- It reduces staff time spent on assessment
I intend to debate how CAA might "improve student learning" in a later blog. As for saving staff time, of course the simplest way is to reduce the amount of assessment, although this can carry a price. But if you are going to do some objective testing, either to drive learning or to find out what students have learnt, then CAA can cut the bill over the years, despite a big investment up front.
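To make the "big investment up front" point concrete, here is a back-of-envelope sketch. Every figure below is hypothetical, chosen only to show how a one-off authoring cost is amortised by marking savings that recur with each cohort:

```python
# Illustrative break-even calculation only: all figures are hypothetical,
# not taken from any real costing. A one-off authoring cost is paid once,
# while the marking saving recurs every year the question bank is reused.
upfront_authoring_cost = 5000      # hypothetical one-off cost (pounds)
annual_marking_saving = 1500       # hypothetical saving per cohort (pounds)

def cumulative_saving(years: int) -> int:
    """Net saving after the given number of years of reuse."""
    return annual_marking_saving * years - upfront_authoring_cost

# First year in which the investment has paid for itself
break_even = next(y for y in range(1, 50) if cumulative_saving(y) >= 0)
print(break_even)  # 4: with these figures, payback comes in year four
```

The point is not the particular numbers but the shape of the argument: the cost is front-loaded, the saving compounds, so longevity (the "staying power" discussed below) is what decides whether CAA pays.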
When I asked Robin Ball in Physics for an invitation to their Teaching Committee to plug the Project, he threw down a challenge: what's the point in spending money on CAA when experience shows that such schemes and experiments usually peter out when the enthusiasts move on? This blog is a response to Robin's challenge to show that CAA can have staying power.
First an evolutionary metaphor: Certainly not all CAA experiments will last the course, but if enough variations are tried, some will be fit for purpose in a suitable ecological niche and hold out until either the niche disappears or something even better comes along. But in this Project I hope the evolution really will be driven by intelligent design! My job will be to discover the niches and design animals that will flourish in them.
So what are the criteria for sustainability? The two desirable features mentioned at the outset would be a good start. Here are a few more — but please send me others:
- Extremely easy authoring (see John Dale's Comment 1), or failing that, an inexpensive and accessible authoring service (e.g. from a trained postgraduate student in your discipline)
- A reliable and secure delivery system, with most decisions (When will the test be? Who will take it? What questions will it include? Where will it take place?) entered in a simple web interface.
- A culture of handing down the resources of a well-established module from year to year, irrespective of who the module organiser/lecturer is. First-year modules are usually stable long enough for an edict from the Teaching Committee to be workable here.
Robin, over to you.
October 12, 2005
I am allergic to jargon! It can be used to
- conceal the lack of substance or meaning in an idea
- inflate the commonplace or trivial
- exclude the uninitiated
- confer pseudo-scientific authority and claim false superiority
So what about Latent Semantic Analysis and its inevitable contraction to LSA? I came across it in an interesting conversation with Mike Joy about current issues in e-learning and assessment. LSA is a statistical method designed to measure the commonality of meaning in a collection of text passages or documents. It compares the frequency of significant words, numerically conflates their meanings, applies some mathematical jiggery-pokery to the data (viewed as sparse matrices), and comes up with some numbers that may indicate how close the texts are in meaning. It can be used as an alternative to the more familiar comparison of strings (a la Google) in detecting likely plagiarism. I believe that it is used effectively in monitoring plagiarism in program source code submitted for assessment by students in the Department of Computer Science.
Clearly jargon is both necessary and useful to experts, and LSA meets this test. It also has the virtue of meaning what it says: the analysis of hidden meaning. It might be interesting to run LSA on this and other blogs on plagiarism!
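For the curious, the pipeline described above — count significant words, view the counts as a sparse term-document matrix, apply some linear-algebra jiggery-pokery, and compare the results — can be sketched in a few lines. This is a toy illustration of the general LSA idea, not the software used in Computer Science:

```python
# A minimal LSA sketch: build a term-document matrix, reduce it with a
# truncated SVD, then compare documents by cosine similarity in the
# reduced "latent semantic" space. Toy data; illustrative only.
import numpy as np
from collections import Counter

docs = [
    "the cat sat on the mat",
    "a cat rested on a mat",
    "stock prices fell sharply today",
]

# Term-document frequency matrix (rows: words, columns: documents)
vocab = sorted({w for d in docs for w in d.split()})
A = np.array([[Counter(d.split())[w] for d in docs] for w in vocab], dtype=float)

# Truncated SVD: keep k latent dimensions
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T   # each row is a document in LSA space

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# The two paraphrases score higher with each other than with the unrelated text
print(cosine(doc_vecs[0], doc_vecs[1]))
print(cosine(doc_vecs[0], doc_vecs[2]))
```

Notice that the first two documents share only three exact words, yet land close together in the reduced space — which is precisely why this can catch a plagiarist who has paraphrased rather than copied.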
Writing about web page http://www2.warwick.ac.uk/elearning/aboutus/steve/flash/questionwriter/
Have a look at this elegant example of an "interactive self-assessment quiz" written in the Flash-based software QWriter (click the URL above). I'd take this test again and again just for the pleasure of experiencing its cool and stylish interface. It offers a wide range of question types, presents mathematics well and shows its scientific paces from GCSE to University level in this Chemistry test.
What is not clear is how much effort went into authoring this example; I suspect you need to be an academic chemist and a graphics wiz who can write ActionScript to produce something like this. I want to find out more. Watch this blog.
October 11, 2005
I got an encouragingly lively reception when I climbed Gibbet Hill on Friday afternoon (a walk I miss since Mathematics moved to Central Campus) to talk to the Biological Sciences Undergraduate Teaching Committee about CAA. The discussion ate 15 minutes or more into their precious meeting time but brought me some positive outcomes:
- Two first-year modules were identified where CAA would help, and suitable materials already exist for one of them
- CAA could play a useful role in the continuous assessment of several second-year modules
- The departmental Chair, Professor Robert Freedman, did not entirely rule out the possibility of freeing up staff time to develop CAA resources
Two of the discussion points were:
- The value of using paper-based data-capture methods for continuous assessment
- The risks of plagiarism
I will discuss data capture in a separate blog but will make a couple of points here about the risks of cheating in computer-aided assessment when it is used in a 'summative' mode, that is to say, a mode where the assessment directly affects the outcome of a student's degree (for example, yielding marks for course credit or determining progression).
First some general points:
- The level of security in exams only needs to be proportional to the weight of the assessment
- A vanishingly small amount of course credit seems to get most learners engaged
- Credit representing say 10% of a first-year module affects at worst the second decimal place in the final percentage used to determine a student's degree class
- Students who cheat in continuous assessment shoot themselves in the foot; they are usually exposed in the final exam
- No system of invigilation offers 100% security against cheating
Now a few practical suggestions for raising the level of security in computer-aided tests and exams used in summative mode:
- Generate many variations of each question type and permute them so that each student answers a different question paper; several assessment packages are good at this for numerical questions
- When a student logs on to a test, display his or her university card on the table and screen to make identity confirmation easy
- A couple of random visits by an invigilator to the computer room during a 50-minute exam make a good deterrent
- Install CCTV in computer suites used for assessment
- Disable access to other computer applications and directories while the exam is in progress if the style of the questions makes this necessary (to prevent students using, for example, the internet or some mathematical software to answer the questions)
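The first suggestion in the list above can be sketched as follows. All names and the question format are hypothetical; the idea is that parameter values are randomised and option order permuted per student, seeded by the student ID so the marker can regenerate any paper exactly:

```python
# Hypothetical sketch of per-student question variation: the numerical
# parameters and the order of the multiple-choice options both depend on
# the student ID, so every student sees a different but reproducible paper.
import random

def make_question(student_id: str) -> dict:
    """Build one per-student variant of a simple numerical question."""
    rng = random.Random(student_id)          # deterministic: same ID, same paper
    a, b = rng.randint(2, 9), rng.randint(3, 12)
    correct = a * b
    # Plausible distractors; the set discards any that collide with the answer
    distractors = sorted({a + b, correct + a, correct - b} - {correct})
    options = [correct] + distractors
    rng.shuffle(options)                     # permute option order per student
    return {"stem": f"What is {a} x {b}?", "options": options, "answer": correct}

q = make_question("u0512345")
```

Because the generator is seeded rather than truly random, no answer key need be stored: re-running it with the student's ID reproduces their paper for marking or for an appeal.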
Other suggestions please.
October 05, 2005
I have decided to use this blog as a diary of my CAA activities to tell anyone who's interested (and to remind myself) what I have been up to on the Sciences Computer-aided Assessment (CAA4S) front.
On Monday (3/10/05) the School of Engineering "Undergraduate Degrees Committee" generously allowed me to bend their ears for 10 minutes on my plans for CAA. They looked overworked (13 items on the Agenda). I got some searching questions about cost-effectiveness and the technical problems of authoring.
My wild suggestion that academic staff might be given time (e.g. a smaller teaching load?) to develop CAA resources was firmly ruled "out of the question" so great is the School's academic burden – of course, this reaction will be universal. However, the idea of hiring intelligent and technically-savvy postgraduates to help staff with the design and implementation of online assessments turned out to be a less-leaden balloon.
On the positive side, there is a group of 5–6 members of the School who are seriously interested in using CAA and I plan to try to get them together to see what we can do.
Finally a reminder to myself arising from the meeting: I can use the Engineering email alias to circulate staff about the Project and maybe unearth further enthusiasts.
My first ever blog entry! A new trick for an old dog? Yet another source of anxiety? Undressing in public?
Let's play safe with a cryptic crossword clue:
My blog disrupted! Master daemon is suspected (8,5,9)
No prizes but answers welcome.
Now for the frisson of excitement as I click on the 'Publish now' button.