January 23, 2007
Software for computer-aided assessment comes in many shapes and sizes serving many purposes, ranging from simple quiz-building to the construction of complex question templates involving random parameters that are designed to test deeper understanding and provide intelligent feedback.
It is evaluation time for the software we have been trying out in the Science Faculty at Warwick. Because my project is aimed specifically at science disciplines, we have concentrated the trials on four CAA packages with serious mathematical capabilities: Maple TA, Mathletics, STACK and WeBWorK.
In order to judge the merits of these behemoths, it is important to lay down the criteria we will use. I have therefore started to produce a list of features and qualities that might be considered desirable in CAA software of this kind. PLEASE ADD TO MY LIST OR SUGGEST AMENDMENTS.
I have set out the features and qualities under the following headings:
- Data Security
I will deal with each heading in separate blogs for ease of digestion. Today I start with:
• Ease of use (Ability to author questions in browser window, intelligent fully-functional editor (see Work-flow below), quick access to current projects, good GUI and navigation, natural syntax for writing questions, flexible file and folder structure for organising work, automatic save before closing browser, easy user account creation, spreadsheet import and export of both account and assignment data, optimised for accessibility, re-usable user-created templates for (i) writing tests (ii) sets of properties and permissions.)
• Mathematics entry and handling (WYSIWYG maths editor for symbolic and mathematical expressions. GIF-free options – MathML, (La)TeX, or WebEQ with MathPlayer. Platform-independent visually-pleasing rendering of symbols with scalable fonts and colours. TeX quality for both rendering and range of symbols. Intelligent display of mathematical objects (e.g. polynomials).)
• Sharing questions and assignments/tests (Import/export of (i) questions created in same software and (ii) text from other applications. Compatibility with QTI and other interoperability standards. Control of permissions for other users.)
• Creating assignments/tests (Easy selection from question banks. Easy control of assignment delivery options (ability to permute questions, permute parts of MCQs, choose “single scrollable page” or “one question per page”). Full control over length of test, period of availability, user-access, feedback timing.)
• Work flow (WYSIWYG editor with (i) full features (e.g. find and replace) and (ii) instant rendering of modified entry. Cut and paste in all question fields (including mathematical expressions). Regular automatic-saving option. Control over time out. One-click question try-out.)
• Testing (Ability to try out questions and feedback exactly as it would be experienced by a user. Separate windows for question testing and editing. Debugging and comprehensive error-reporting.)
• Question, assignment and user tagging (Ability to create a number of database fields (e.g. level, topic, subtopic, creation date) for quick search and retrieval of questions from large banks. Likewise for retrieval of users from performance database.)
July 21, 2006
July 19, 2006
Anyone wondering how effective crossing fingers was for
"the joint proposal on wikis as a teaching–and–learning tool"
might like to know: it worked!
Warwick's Education Innovation Fund has agreed to support a 2–year project to investigate the use of wikis in undergraduate teaching and learning; the lead will be taken in:
- The Department of Computer Science in Year 1 (a wiki will be established to support a first–year programming module with contributions from students, staff and experts from the computing industry).
- The Department of Mathematics in Year 2 (final–year students will create their own wiki of module resources built around a given skeleton syllabus; in other words, they will write their own lecture course. Their contributions will form a significant part of the module credit).
Other University departments have expressed an interest in devising their own experiments with teaching wikis.
If you would like to be kept informed of this project's progress, or to join in, please send me an email.
July 10, 2006
In the previous blog we described some of the features of Maple TA and WeBWorK presented at the March workshop. Two other CAA software architects introduced their brainchildren at the meeting: Chris Sangwin told us about his System for Teaching and Assessment using a Computer algebra Kernel (STACK), and Martin Greenhow gave us a roller-coaster ride through his Mathletics program.
STACK.
This open source software is designed for intelligent assessment of deeper mathematical knowledge in the growing number of subject areas that require it. Although STACK can deliver standard online question types (e.g. MCQs), its real strength is handling student–provided answers to questions like these:
1. Factorise the following polynomial into a linear and a quadratic factor and hence find its roots:
(where a different equation is generated each time the question is called)
2. Write down a continuous function passing through the points (1,0) and (0,1) with exactly three turning points: a maximum, a minimum and a point of inflexion.
This kind of functionality is made possible by calling a computer algebra system (CAS); currently STACK uses the open–source Maxima system (try it out). It can not only manipulate students' answers and give responsive feedback but can also help to generate problems randomly from a single template and provide corresponding worked solutions. In his talk Chris gave us some fascinating insights into the challenges that mathematics presents in this area, in particular, how to handle the subtleties of notation in
- students' submitted answers (fx might mean f times x or the functional value f(x))
- the CAS (interpreting various positions of minus signs for instance)
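To make the template idea concrete, here is a minimal sketch of how a system like this might generate and mark instances of the factorisation question above. It is plain Python with coefficient lists, purely illustrative, and not STACK's actual Maxima-backed implementation; all names are my own.

```python
import random

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists, lowest degree first."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def make_question(rng):
    """Generate one instance of the question: a cubic built from a random
    linear factor (x - a) and quadratic factor (x^2 + b*x + c), so a fresh
    equation appears each time the question is called."""
    a = rng.choice([-3, -2, -1, 1, 2, 3])          # root of the linear factor
    b, c = rng.randint(-5, 5), rng.randint(1, 9)   # quadratic coefficients
    linear = [-a, 1]
    quadratic = [c, b, 1]
    return poly_mul(linear, quadratic), linear, quadratic

def check_answer(cubic, linear, quadratic):
    """Mark a submitted factorisation by expanding it and comparing coefficients."""
    return poly_mul(linear, quadratic) == cubic
```

A real CAS goes much further, of course (equivalence up to reordering, rational coefficients, worked solutions), but the expand-and-compare step is the heart of the marking.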
STACK tolerates informal entries in student answers (for instance, accepting 3xy instead of 3*x*y) and encourages students to "validate" their answers, in other words, to confirm that the program has correctly interpreted their entry when it displays the formal version.
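The idea behind tolerating informal entry can be sketched in a few lines. This is a toy illustration of inserting implicit multiplication signs, not STACK's actual parser, and it assumes single-letter variable names (it would mangle a function name like sin):

```python
import re

def formalise(expr):
    """Insert an explicit '*' between a digit and a letter, or between two
    adjacent letters, so an informal entry like '3xy' becomes '3*x*y'.
    Assumes all variables are single letters."""
    return re.sub(r'(?<=[0-9a-zA-Z])(?=[a-zA-Z])', '*', expr)
```

Displaying the formalised version back to the student for confirmation is exactly the "validate" step described above.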
We are currently exploring the possibility of using STACK for some low–level assessment taken by large numbers of first–year mathematics and statistics students. Its PHP architecture marries well with the Department's learning resources website Mathstuff.
Mathletics.
Martin Greenhow and his team of research students have been developing this CAA resource and using it for their teaching and assessment at Brunel University for some years now. Its strengths include
- well–developed and thoughtful pedagogy
- large banks of mathematics questions aimed at the A–Level/starting–university zone.
Coding individual questions is a skilled and time–consuming activity, but when the parameters are varied and the context is changed, each template generates thousands of different questions (as many as 10^20 for some templates!). Thus the Mathletics framework can create essentially unlimited numbers of different exams on the same set of topics, allowing students to learn by extended practice. Mathletics is responsive to issues of accessibility, gender, and ethnic background.
The demanding requirements of authoring have to be set against the huge searchable repository of existing questions: new–style Mathletics (with randomisation) has about 1500 question styles (each realising thousands or millions of questions) spanning around 120 topics at GCSE/AS and A–level/university levels 1 and 2. They range broadly across algebra, geometry, calculus (incl. Laplace transforms, differential equations and vector calculus), logic, decision maths, numerical methods, economics applications, probability and statistics. New questions are constantly being developed (as part of the Mathematics for Economics: Enhancing Teaching and Learning Project (METAL) for instance), and Martin is keen to encourage others to join in this creative process.
We are planning to try out Mathletics in the Autumn on a small subset of the first-year engineering students without A-Level Maths. If this pilot is successful, we would hope to use it to support the mathematical needs of the whole cohort later on. We have a site licence for QM Perception and are well placed for this. Although my attempts a few months ago to get Mathletics running on the University network were abortive, one of our postgraduate CAA team members, John Hattersley, is now on the case. I hope to report soon on his success, at least with the well-tried version 3.4 of Perception, which will run for another year here; another challenge will be to run it on version 4.2, to which we are upgrading next month. Stay tuned.
July 07, 2006
… the date of a Computer–Aided Assessment workshop at Warwick showcasing some of the software we are evaluating as part of an in–house CAA Project (for 'in–house' read 'Warwick Science Faculty'). Four software packages were exposed to scrutiny, and I would now like to say a little about each, our experience so far and the plans we have for them. More details will be posted soon on the Project website.
Maple TA (MTA).
This commercial Teaching and Assessment package from the Canadian firm Maplesoft is built on their well-known computer algebra system Maple. This foundation gives MTA one of its main strengths, namely the way it handles mathematics:
- at the authoring stage — it has a palette of mathematical symbols and a LaTeX facility
- in its rendering of equations onscreen
- its ability to include random parameters in question templates
- its ability to parse mathematically–equivalent answers.
At the workshop, its praises were sung by a representative from Adept Science (who distribute MTA in the United Kingdom) and tempered, warts and all, by Michael McCabe, who had used it to assess Mathematics students at Portsmouth University a few months earlier.
Five weeks after the workshop, we used MTA, in tandem with Question Mark Perception, for a 50-minute summative test taken by 166 students registered for a second-year module in elementary number theory. The test contained 11 multiple-choice questions (MCQs) and contributed around 6% to the module credit. It was fairly easy to author the questions, the test was reliably delivered on the day by the hosting service, and I found it straightforward to extract and process the students' answer files from its database. On the negative side, I found its imposed marking scheme and MCQ format frustratingly restrictive: I had to adjust marks laboriously, student by student, to allow 3 for a correct answer, 1 for 'don't know' and zero for a wrong choice, normalising the totals by subtracting 8. In particular, I could not return the students' scores as soon as they clicked the submit button. Here Perception won hands–down on pedagogical flexibility but couldn't compete on the maths.
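For the record, the re-marking I did by hand amounts to a few lines of code. This is a sketch of the scheme as described above; the response labels are my own illustrative choice, not Maple TA's export format:

```python
def adjusted_score(responses):
    """Re-mark one student's 11 MCQ responses: 3 for a correct answer,
    1 for 'don't know', 0 for a wrong choice, then subtract 8 to
    normalise the total (so totals can be negative in the worst case)."""
    points = {'correct': 3, 'dont_know': 1, 'wrong': 0}
    return sum(points[r] for r in responses) - 8
```

Being able to plug a custom scheme like this into the delivery system itself, so scores appear at the click of the submit button, is exactly the flexibility that was missing.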
We were fortunate in getting MTA for an extended trial evaluation period. We tried, without success, to run it locally on a Java server that was configured for other apps. Subsequently we changed to the hosted service. Apart from an unfortunate loss of data two days before the test, this worked robustly and we have now taken out a 500–student departmental licence for the coming academic year, when we hope to go beyond the simple MCQ format and begin to exploit MTA's full mathematical capabilities.
WeBWorK.
This is a mature open–source assessment package developed with generous NSF funding at the University of Rochester. It has been around for over a decade and is currently used by over 80 (mainly North American) universities. When Jim Robinson of the Warwick Physics Department started looking for a suitable CAA package to improve and reinforce the mathematical abilities of his department's first–year students, he listed the following criteria. The software should preferably be:
- Capable of rendering mathematics
- Client–independent and available off site using web–based technology
It should also offer:
- A good bank of problems at right level
- Easy authoring, customizable question formats and individualised problems
- Instant feedback — hints, right/wrong, model solutions
WeBWorK has all these desirable features. Earlier this year, Jim installed WeBWorK on a Linux box, an old 500MHz PC. He needed some Linux systems admin experience – plus a few hours (no installation wizard) to install the WeBWorK software and problem libraries; it needs Perl, Apache, SQL server (MySQL), LaTeX, dvipng plus a few other apps (all free). Thereafter all course administration is web based.
Since Christmas, we have been running a pilot using volunteer Physics students taking the first–year Maths for Scientists module. Initially we plundered the very large collection of question banks to create a sample assignment based on the first term's material (mainly calculus) and subsequently provided a second assignment of home-made questions on the second term's content (including linear algebra). The student feedback is currently being analysed, and we have begun to create assignments to be used by the whole cohort of Physics students taking the Maths for Scientists module next term.
As this entry is growing rather long, I will take a break now and discuss STACK and Mathletics later.
June 23, 2006
The long gap since my last entry on 20th March is due to some time–and–energy–diverting deadlines, including:
- Preparing and running a summative CAA test for 166 mathematics undergraduates (simultaneously using Question Mark Perception in one venue and Maple TA in another)
- Writing a proposal for funding Phase 2 of this CAA Project (happily successful)
- Writing a joint proposal on wikis as a teaching–and–learning tool (fingers crossed)
- Most demandingly, carrying out the complex assessment for a compulsory Projects module taken by 66 finalists on our 4–year Master of Mathematics (MMath) degree programme.
Now I am happily back to full–frontal CAA again.
But since the administration and assessment of the Projects module MA469 draws heavily on the resources of the digital age, it may interest readers to know how it works. Here goes:
MA469, the only core module in the MMath final year, accounts for 20% of the year's credit and comes in two flavours: students can freely choose either a Research Project or a Maths-in-Action (MiA) Project. They are informally advised, however, that research projects are for those intent on subsequently studying for a higher degree and MiA projects are for the rest, most of whom will be entering the job market.
The philosophy is that students contemplating a research degree in mathematics will make a better decision (on whether to commit and in what area) if they get a taste of mathematics research beforehand. MMath students at the end of their third year are this week knocking on my colleagues' doors to find out about, and maybe sign up for, projects in their areas of research expertise. Colleagues are not obliged to offer research projects, but those who do gain more than the experience of inspiring a talented undergraduate to work at the cutting edge: they often end up with a well–trained and well–motivated research student ready to start a PhD the following year.
Responsibility for organising and assessing research projects lies firmly with the individual supervisors, who, of course, each have their own working styles and standards. In the Easter vacation, students submit a dissertation, which is read by the examiners (the supervisor and a second marker). They hold an oral, which includes a 20–30 minute presentation by the student, after which the examiners settle on a mark. Occasionally a third opinion is sought when the examiners don't agree. The main challenge is to ensure consistency between supervisors. Supervisors are given clear and detailed guidelines on the assessment criteria, and the external examiners, who can get a broad overview and spot anomalies, provide an important safety net.
These are more circumscribed than research projects. Students choose from a list of given topics that are underpinned by some serious mathematics (this year's themes included Encryption on the Internet, Voting, Data Compression, Optimal Vaccination, Virtual Reality and Communication Networks). They are given a fairly explicit brief and a skeleton bibliography, and must produce (for the stated percentage credit)
- A Public Presentation (30%)
- A Scholarly Report (60%)
- Peer Assessments of 3 reports on different themes by fellow students (10%)
Students may also negotiate their own topics and briefs with the module organiser.
For their public presentation they may offer either a poster session or a 25-minute talk. In either case their audience consists of open-day visitors (admissions candidates and their companions). They can work alone or in pairs. Their challenging task is to hint at the power of mathematics and inspire the aspiring Warwick students without losing the interest of the parents in the audience (perspiring at the prospect of £12K in fees), who may not even be comfortable with GCSE Maths.
Their scholarly reports must include all the mathematics they left out of their public presentations. With no word limit (but over 6 years ranging from 20 to 120 sides of A4), these reports must display not only wide–ranging scholarship but well–assimilated knowledge of, and insight into, the intimate relationship between mathematics and the complex real world we inhabit. Lively writing for enquiring minds is the order of the day, with pictures and animations thrown in if appropriate.
The peer reviews are intended to broaden our students' knowledge of the mathematics "under the bonnet" of modern technology, and perhaps provide them with something to talk about in their job interviews! Too often our graduates go forth into the world, their brains crammed with powerful and beautiful abstract thoughts but without a clue how the stuff they learnt in their modules changes people's lives.
If all this adds up to a lot of marking and moderating (and it certainly does!), where do the benefits of the digital age come in? First, the module is entirely administered via Mathstuff, the Maths Dept's web site for teaching and learning: for instance, in October students register their preferences online; in March they submit their reports in electronic form immediately accessible to the examiners; in April they carry out their peer reviews online, reading their three reports in a browser window and submitting their reviews in a web form. To write their technical reports they must learn to use a mathematical word processor (many become skilled in writing LaTeX markup for high quality mathematical typesetting) and submit their files in portable document format (pdf). Many produce their posters with advanced design packages and use versatile presentation software to create animated slides to enhance their talks. The MiA projects certainly encourage our students to develop what are known in edu–speak as 'transferable skills'. They are not short of job offers either.
March 16, 2006
in search of the CAA Grail, a package that truly delivers all the assessment needs of scientists and mathematicians. Our quest may have fallen short, the chalice may have been chimerical, but we travelled hopefully in the company of Maple TA, WeBWorK, STACK and Mathletics, and I will discuss them each in turn in later blogs.
Meanwhile, here are some snaps of the fine company.
March 03, 2006
Writing about web page http://www.centralquestion.com/archives/2006/03/mind_reading_test.html
Try this test (it takes about 10 mins). It is created with the elegant Flash-based software Question Writer — see my earlier blog.
Writing about web page http://mathstore.ac.uk/articles/maths-caa-series/feb2006/
Imagine a test with 5 questions, where each question is selected randomly from a bank of 10 related alternatives. Some 100,000 different tests can be generated.
Question: How many of these would you need to generate, on average, to have sight of all 50 alternative questions?
Answer: Only 43 (Douglas Adams was one out).
This surprising fact should give pause for thought to an author of online exams concerned about cheating. One of my favourite models for driving learning through assessment is to offer students a number of attempts (say 5) at randomly-generated tests in formative mode before they take the one that counts for credit. In the given example, 10 students colluding could suss out all, or most of, the questions stored in the banks before they take their summative test.
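The figure of 43 is a coupon-collector calculation, and it is easy to check by simulation. Here is a quick Monte Carlo sketch in plain Python (the function names and the trial count are my own choices); the estimate comes out close to 43:

```python
import random

def tests_to_see_all(num_banks=5, bank_size=10, rng=random):
    """Simulate one colluding-students run: each generated test draws one
    question independently from each bank; count tests until every
    question in every bank has been seen at least once."""
    seen = [set() for _ in range(num_banks)]
    tests = 0
    while any(len(s) < bank_size for s in seen):
        tests += 1
        for s in seen:
            s.add(rng.randrange(bank_size))
    return tests

def average_tests(trials=10000, seed=1):
    """Estimate the expected number of tests by Monte Carlo."""
    rng = random.Random(seed)
    return sum(tests_to_see_all(rng=rng) for _ in range(trials)) / trials
```

Each bank alone takes about 29 tests on average to exhaust (the classic coupon-collector mean of 10 × (1 + 1/2 + … + 1/10)); waiting for the slowest of the five banks pushes the answer up to around 43.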
February 20, 2006
… on Tuesday, 14th March in the Mathematics Institute. In fact, why not come a bit earlier and/or stay a bit later for the Computer-Aided Assessment Demo Day.
The emphasis of this workshop is on assessment in Science and other maths-related subjects. Here are some of the highlights:
- Maple TA — in theory (Adept Science) and in practice (Michael McCabe)
- STACK — reaches parts of the cerebral cortex other CAA software cannot reach
- Mathletics — with intelligent feedback for the fuller learning experience
- SEXPOT — we all deserve a fair share of the marks
- Maths Fit — will help your students to prepare
The full programme can be downloaded here.
Free lunch? Of course, there's no such thing, unless you are a speaker or on the home team. It will cost £6 for a light buffet and drinks. I need to know how many to cater for, so it would help greatly if you could sign up here.