All 15 entries tagged Software


April 24, 2007

CAA Fitness for Purpose: Data Security and Robustness

Follow-up to Fit for Purpose? from Computer-aided assessment for sciences

Data Security

Three kinds of data need to be kept safe: (i) the questions stored for a test; (ii) student answers entered during a test; (iii) submitted answers and results.

  • Keeping tests safe: It is clearly important to keep tests, questions, solutions and feedback safe from prying eyes, especially if they are to be used in summative mode. So the question database should be encrypted or otherwise made hacker-proof. It should also be regularly backed up in case of hardware failure (having lost questions on a hosted service, I would strongly advise authors to back up their work locally too). If a degree of randomness is introduced to reduce the risk of cheating (via multiple question banks or question templates with parameters, say), then thought should be given to the ease with which determined students could circumvent the protection thus provided (see this blog entry, for instance).
  • In-test Security: Some assessment software allows submission of answers to be postponed until the end of the test. This is dangerous: a user who has entered 9 out of 10 answers when the system crashes without saving them would have every reason to be angry. My preferred option is to require an entered answer to be validated (and simultaneously saved) before the user is allowed to proceed to the next question (or at least for the user to be warned that they may lose their answer if they do not validate before moving on). Validation allows the software to interpret the answer and return its interpretation to the user in a standard form; it is an important stage in dealing with answers to questions with symbolic content, where the machine may not be able to cope with the informal, context-dependent representations humans are used to (a minimal sketch of this step appears after this list). Another kind of security involves limiting cheating during a test: impersonation, or copying from a neighbour, for example. Invigilation is still the safest answer to this.
  • Securing Results: The most important thing about the results database, apart from the obvious needs for it to be backed up and proof against hacking, is that it should store every bit of activity engaged in by a student during a test. If a student challenges their test outcome, the examiner needs to be able to trace every step the student took, including false validations, inappropriate mouse clicks (some assessment software swoons at the click of a browser back button), and the relaunching of the test. Although it is a good idea to insist that students jot their work down on paper during a test, this is not much help if a system fault requires a new test to be generated and it comes with different values of the random parameters; when parameters are used, the system should also be able to deliver the same test to a student who, through no fault of their own, is forced to make a fresh start. As I have said elsewhere, it is a great help if the database fields are chosen to optimise efficiency and flexibility in searching the results.
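As an illustration of the validate-and-echo step mentioned under In-test Security, here is a minimal sketch in Python using the sympy library. None of the packages discussed on this blog necessarily works this way, and the function name and messages are my own invention.

from sympy.parsing.sympy_parser import (
    parse_expr, standard_transformations,
    implicit_multiplication_application, convert_xor,
)

TRANSFORMS = standard_transformations + (implicit_multiplication_application, convert_xor)

def validate(raw_answer):
    """Parse the student's entry; return (ok, standard_form) for them to confirm."""
    try:
        expr = parse_expr(raw_answer, transformations=TRANSFORMS)
    except Exception:                     # anything unparseable is rejected, not marked
        return False, "Entry could not be interpreted - check brackets and operators."
    return True, str(expr)

print(validate("3xy"))         # (True, '3*x*y') - informal entry echoed back in standard form
print(validate("2(x+1)^2"))    # implicit multiplication and ^ are tolerated here
print(validate("2x+*3"))       # (False, ...) - the answer is only saved once it validates

The point is that the student sees, and confirms, the system's interpretation before the answer is committed to the results database.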

February 08, 2007

CAA Fitness for Purpose: Administration

Follow-up to Fit for Purpose? from Computer-aided assessment for sciences

Administering online assessment can be a nightmare—I have lost sleep over it. Although setting the parameters for delivering an exam online will never be entirely straightforward, let me suggest a few desirable features to smooth the way.

Administration

User Accounts: If a single sign-on (SSO) system, such as the open-source Shibboleth system, can be integrated with a CAA package, an assessment can be made instantly accessible to a group of students registered for a module on the institutional database. At the same time, students signed on to the network have immediate access to all the available assessments for the modules they are registered for. In the absence of SSO, assessment software should make it easy for the details of students permitted to access a given assessment to be uploaded manually, for instance by accepting comma-separated values from a spreadsheet containing the appropriate fields. An option to give students permission to create their own assessment accounts is also useful; it should allow them to browse the available assessments and register for any that take their fancy.
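As a toy example of the kind of spreadsheet upload just mentioned (the field names below are hypothetical, not those of any particular CAA package), a class list exported as CSV needs very little machinery to become a set of assessment accounts:

import csv, io

# a two-line class list as it might be exported from a spreadsheet
class_list = io.StringIO(
    "username,first_name,last_name,student_number,department\n"
    "u0212345,Jane,Smith,0212345,Mathematics\n"
    "u0267890,Ravi,Patel,0267890,Physics\n"
)

for row in csv.DictReader(class_list):
    # in a real system this line would call the package's own account-creation routine
    print(f"creating account {row['username']} ({row['first_name']} {row['last_name']})")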

Setting Permissions: When creating an assessment, it should be straightforward for the author to set a whole range of permissions: who can see the test, edit the test, take the test, when they can do so, how long it should last, how many attempts are allowed, who can access the results, and so on. It is helpful if these permissions can be set and subsequently edited in an easily-accessible window, which displays the full range of permissions available. It is also handy to be able to save templates of standard sets of permissions for re-use.

Sending Feedback: It is vital for an author to have detailed control over (i) the levels of feedback: hints, right or wrong, simple answer, full worked solution and (ii) when it is delivered: directly after an answer is submitted, immediately after the test is completed (feedback, like revenge, is a dish best served hot), or later, after the assessment is closed.

Answer Records: If the assessment software stores users’ answer files – and only that designed for simple self-assessment doesn’t – then it is very important to be able to search those files efficiently. It should be easy to search all the database fields that are used to create assessments and accounts, with all the usual functionality available in a respectable database; thus, for example, it should be possible to pull out all the answers to question 5 on assessment 2 given by students called “Smith” who are either based in the Mathematics Dept or whose student numbers begin with 02 (the year of entry). If the database has a field for email addresses, it should be possible to send emails to selected subsets of registered users containing information about, for example, their results and module administration.
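To make the “Smith” example concrete, here is a sketch of the corresponding query; the table layout is hypothetical and is only meant to show how natural such searches become when the database fields are well chosen.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE answers (
        surname TEXT, student_number TEXT, department TEXT, email TEXT,
        assessment INTEGER, question INTEGER, answer TEXT, score REAL
    )
""")

rows = conn.execute("""
    SELECT surname, student_number, answer, score
    FROM answers
    WHERE question = 5 AND assessment = 2 AND surname = 'Smith'
      AND (department = 'Mathematics' OR student_number LIKE '02%')
""").fetchall()
print(rows)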

Analysing Results: I have to confess that I have little experience in this area and that my views on what is desirable and useful are poorly developed. I would welcome some input from more experienced readers here. It is obviously helpful to be able to (i) analyse results in as much detail as the database allows and (ii) present the data in easy-to-grasp numerical and visual ways. A number of standard statistical tests can be applied to the data to provide insight into the success of an assessment and the performance of the users; for instance, one helpful test I have used measures the effectiveness of a single multiple-choice question (as part of a larger exam) in discriminating between students of differing ability (as indicated by their overall performance on the exam). Please let me have your views on the best tests to build into the software, by email or via the commenting option below.
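In the meantime, one standard statistic of this kind is the point-biserial correlation between getting a particular question right and the total exam score; the sketch below is my own illustration of that calculation, not a description of what any particular package computes.

from math import sqrt

def point_biserial(item_correct, total_scores):
    """item_correct: 0/1 per student for one MCQ; total_scores: their exam totals."""
    n = len(item_correct)
    mean_total = sum(total_scores) / n
    sd_total = sqrt(sum((t - mean_total) ** 2 for t in total_scores) / n)
    correct = [t for c, t in zip(item_correct, total_scores) if c == 1]
    p = len(correct) / n                              # proportion answering correctly
    mean_correct = sum(correct) / len(correct)
    return (mean_correct - mean_total) / sd_total * sqrt(p / (1 - p))

# Toy data: the question is answered correctly mainly by the stronger students,
# so it discriminates well (about 0.8 here; values near 0 would suggest a poor item).
item = [1, 1, 1, 0, 1, 0, 0, 0]
totals = [92, 85, 78, 70, 66, 54, 48, 40]
print(round(point_biserial(item, totals), 2))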


January 29, 2007

CAA Fitness for Purpose: User Experience

Follow-up to Fit for Purpose? from Computer-aided assessment for sciences

Here is another heading in my check-list of criteria for judging whether CAA software, particularly that with mathematical capabilities, is up to the job. As usual, I welcome your comments and further ideas. Today I look at the heading

User Experience

Logging in and Submitting: Within a given intranet, so-called “single sign-on” avoids having to distribute special user IDs and passwords to students, who then have to remember them to access an assignment. With single sign-on, it is easier to call on institutional databases to display personal information (name, number, mug-shot) onscreen as an identity check for student and invigilator alike. Once signed on, students should be able to click quickly to the test they want to take. It should be clear how their answers to questions should be submitted for marking (grading), singly and/or in one final submission, and whether multiple attempts are permitted. Answers should be regularly saved to a local drive in case of network or software failure.

Navigation and Layout: It should be easy to navigate quickly through the questions (in any order), and to choose to display them all together on a scrolling page or one at a time. From any test page it should be clear which questions (i) have already been attempted and (ii) have been finally submitted. Each page layout should be visually easy to interpret (e.g. displayed equations, clear separation of question statement from answer boxes with hypertext for actions close by, adjacent questions with different background colours). Anchors to keep the right part of a long page in view, drop-down menus, and prompts to open help windows can all improve the user experience and sense of being in control.

Entering Answers: Entering text with standard keyboard characters is usually unproblematic – answer boxes should accommodate the longest imaginable answer, display an easily readable font, and have the focus with a flashing cursor when appropriate. Entering non-standard symbols, in particular mathematical expressions, presents a challenge. There are some well-tried ways of dealing with this: informal entry using pocket-calculator conventions, LaTeX markup, or a palette of standard symbols that can be dragged into the answer box. CAA software is unforgiving when trying to make sense of the kind of informal entry easily understood by humans, so rigorous adherence to correct mathematical syntax (brackets, arithmetical operations, functional notation) is usually required. (WeBWorK, for instance, is relatively tolerant of informal entry and includes a summary of its syntax in a pane on the right-hand side of its pages.)

Recording Progress: There are usually several stages in answering a question online: (1) entering the answer in the appropriate box(es); (2) validating the answer to check that the program correctly interprets it (especially if symbolic expressions are involved); (3) saving the answer (often combined with validation); (4) reviewing the answer and editing it; (5) submitting the answer for marking/grading; (6) making further attempts if allowed; (7) submitting the final attempt. It is important for this progress data to be displayed in a table on every page of the assessment, with direct navigation to uncompleted questions. It is also helpful to record individual question and total scores in this table and to display "time remaining", in minutes say, or as an analogue clock.

Training and In-Test Help: It is desirable to give students a practice assignment in conjunction with an online tutorial to familiarise them with the assessment format and the syntax for entering symbolic notation. This can be delivered in advance of the test or as an initial part of it. A summary of this user guidance should be easily accessible at any stage of the test, perhaps through a help-box or in a separate pane of the test window.

Accessibility: Here is a short checklist of desirable features for optimising access to web pages: (1) user control of font styles and sizes (especially important for the display of mathematics, which may be embedded as graphics); (2) text equivalents for graphics and multimedia; (3) simple and logical navigation; (4) control over text and background colour; (5) compatibility with a screen-reader that handles mathematics and other symbolic notation (programs now exist to read mathematics that is coded in MathML – e.g. Design Science’s MathPlayer: see http://www.dessci.com/en/products/mathplayer/tech/accessibility.htm). Entering mathematical answers is particularly difficult for visually-impaired users, and so an intelligent screen-reader for validation of answers would provide helpful reassurance.


January 24, 2007

CAA Fitness for Purpose: Pedagogy

Follow-up to Fit for Purpose? from Computer-aided assessment for sciences

This is the next contribution to my list of criteria for judging whether CAA software, particularly that with mathematical capabilities, is up to the job. Today I look at the heading

Pedagogy

Question types: MCQs, MRQs, yes/no, hot-spot, drag-and-drop, and so on—the more the merrier! For the assessment of deeper mathematical knowledge, more searching questions can be set when the assessment package can call on the services of a computer algebra system (CAS) – e.g. Maple TA and STACK. An option for multiple-part questions is valuable, especially if (i) partial credit is available and (ii) answers to later parts can be judged correct when calculations based on incorrect answers to earlier parts are correctly performed.
Marking/Grading, Scoring: It is important for the author (i) to have complete control over the marking system for each question, (ii) to be able to give the user full information about how each question will be scored, and (iii) to have the option of revealing scores to the user at specified stages. Default marking schemes may be useful but should be easy to override, allowing the author to specify a different marking scheme for each question. If an answer involves mathematical expressions, the software should be able to recognise mathematically equivalent answers.
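As a rough sketch of what recognising mathematically equivalent answers involves (using the Python sympy library purely for illustration; the packages named on this blog use their own CAS machinery):

from sympy import simplify
from sympy.parsing.sympy_parser import (
    parse_expr, standard_transformations,
    implicit_multiplication_application, convert_xor,
)

TRANSFORMS = standard_transformations + (implicit_multiplication_application, convert_xor)

def equivalent(student_answer, model_answer):
    a = parse_expr(student_answer, transformations=TRANSFORMS)
    b = parse_expr(model_answer, transformations=TRANSFORMS)
    return simplify(a - b) == 0           # equal as expressions, not as strings

print(equivalent("(x+1)^2", "x^2 + 2x + 1"))   # True: full marks either way it is written
print(equivalent("2(x+1)", "2x + 1"))          # False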
Feedback: I believe this to be the most important pedagogical feature of CAA software! The author should be able to provide various types of feedback to each question (e.g. (1) whether the submitted answer was right or wrong, (2) the bare marks scored, (3) the correct answer—for instance, the correct MCQ choice, numerical entry, or symbolic expression, (4) the full worked solution) and to specify the point at which the feedback is made available (e.g. upon submission of a single answer, of a completed assessment, or at some later time). If questions contain variable parameters, the feedback should be tailored to the parameter values used. Another useful feature is an option to provide one or more graded hints after a wrong answer and to adjust the marks accordingly. An advanced feature, explored in Mathletics, is to be able to use a student’s answer to guess at errors or misconceptions (malrules) and to respond to them in the feedback.
Random features: The inclusion of varying degrees of randomness in the construction of individual questions and whole assignments/tests/exams can significantly enhance the educational value of CAA and simultaneously reduce the risks of cheating. For each question at the assignment level, the software should be capable of selecting randomly from a specified bank of questions which all test the same skill/knowledge/understanding. At the question level, there is considerable scope for randomised variation, using place-holders to vary such things as units, names, even subject contexts; and in mathematical subjects, using parameters within specified ranges of numerical values that require students to carry out different calculations, each testing essentially the same knowledge or skills. Considerable care is needed to ensure the questions make sense for all choices of variables (for instance, avoiding division by zero), but in a science discipline, it is possible to generate millions of different, but educationally-equivalent, questions. This makes copying answers pointless and allows students to have virtually unlimited practice in formative mode. When sufficient randomness is built into a question template, it becomes a special case of the reusable learning object (RLO) beloved of educational theorists who study computer-mediated learning.
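Here is a toy question template of my own (not taken from any of the packages discussed) showing the basic mechanism: parameters are redrawn until the generated question is non-degenerate, and the model answer is computed alongside it for use in feedback.

import random
from fractions import Fraction

def make_question(seed=None):
    rng = random.Random(seed)
    while True:
        a = rng.randint(-9, 9)
        if a not in (0, 1):              # reject degenerate choices: no division by zero, no trivial 1x
            break
    b, c = rng.randint(1, 9), rng.randint(1, 9)
    question = f"Solve for x:  {a}x + {b} = {c}"
    answer = Fraction(c - b, a)          # exact model answer, tailored to these parameter values
    return question, answer

for seed in range(3):
    q, ans = make_question(seed)
    print(q, "   [model answer:", ans, "]")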


January 23, 2007

Fit for Purpose?

Software for computer-aided assessment comes in many shapes and sizes serving many purposes, ranging from simple quiz-building to the construction of complex question templates involving random parameters that are designed to test deeper understanding and provide intelligent feedback.

It is evaluation time for the software we have been trying out in the Science Faculty at Warwick. Because my project is specifically aimed at science disciplines, we have concentrated the trials on four CAA packages with serious mathematical capabilities: Maple TA, Mathletics, STACK and WeBWorK.

In order to judge the merits of these behemoths, it is important to lay down the criteria we will use. I have therefore started to produce a list of features and qualities that might be considered desirable in CAA software of this kind. PLEASE ADD TO MY LIST OR SUGGEST AMENDMENTS.

I have set out the features and qualities under the following headings:

  • Authoring
  • Pedagogy
  • User-experience
  • Administration
  • Data Security
  • Robustness

I will deal with each heading in separate blogs for ease of digestion. Today I start with:

Authoring

Ease of use (Ability to author questions in a browser window, intelligent fully-functional editor (see Work flow below), quick access to current projects, good GUI and navigation, natural syntax for writing questions, flexible file and folder structure for organising work, automatic save before closing browser, easy user account creation, spreadsheet import and export of both account and assignment data, optimised for accessibility, re-usable user-created templates for (i) writing tests (ii) sets of properties and permissions.)
Mathematics entry and handling (WYSIWYG maths editor for symbolic and mathematical expressions. GIF-free options – MathML, (La)TeX, or WebEQ with MathPlayer. Platform-independent, visually-pleasing rendering of symbols with scalable fonts and colours. TeX quality for both rendering and range of symbols. Intelligent display of mathematical objects (e.g. polynomials).)
Sharing questions and assignments/tests (Import/export of (i) questions created in the same software and (ii) text from other applications. Compatibility with QTI and other interoperability standards. Control of permissions for other users.)
Creating assignments/tests (Easy selection from question banks. Easy control of assignment delivery options (ability to permute questions, permute parts of MCQs, choose “single scrollable page” or “one question per page”). Full control over length of test, period of availability, user-access, feedback timing.)
Work flow (WYSIWYG editor with (i) full features (e.g. find and replace) and (ii) instant rendering of modified entry. Cut and paste in all question fields (including mathematical expressions). Regular automatic-saving option. Control over time out. One-click question try-out.)
Testing (Ability to try out questions and feedback exactly as it would be experienced by a user. Separate windows for question testing and editing. Debugging and comprehensive error-reporting.)
Question, assignment and user tagging (Ability to create a number of database fields (e.g. level, topic, subtopic, creation date) for quick search and retrieval of questions from large banks. Likewise for retrieval of users from performance database.)


July 21, 2006

Transatlantic Tutorial

On Monday afternoon, five of us in Warwick's Elab and a member of the technical team at Adept Science hooked up to Waterloo in Canada for a 2-hour tutorial on Maple TA, the mathematical teaching and assessment software created by Maplesoft. It was my first exposure to web conferencing software (in this case Webex): just a communal phone link and a real–time view of the tutor's desktop, but it was fast, effective and 100% fit for purpose. My only regret is that we didn't make a recording of the session.

July 10, 2006

More thoughts from 14th March

Follow-up to Looking back to 14th March … from Computer-aided assessment for sciences

In the previous blog we described some of the features of Maple TA and WeBWorK presented at the March workshop. Two other CAA software architects introduced their brainchildren at the meeting: Chris Sangwin told us about his System for Teaching and Assessment using a Computer algebra Kernel (STACK), and Martin Greenhow gave us a roller-coaster ride through his Mathletics program.

STACK
This open source software is designed for intelligent assessment of deeper mathematical knowledge in the growing number of subject areas that require it. Although STACK can deliver standard online question types (e.g. MCQs), its real strength is to handle student–provided answers to questions like these:

1. Factorise the following polynomial into a linear and a quadratic factor and hence find its roots:

x^3-3x^2+3x-2

(where a different equation is generated each time the question is called)
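(For this particular instance the expected answer, by my own working rather than STACK's output, is x^3-3x^2+3x-2 = (x-2)(x^2-x+1), so the roots are x = 2 and x = (1 ± i√3)/2.)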

2. Write down a continuous function passing through the points (1,0) and (0,1) with exactly three turning points: a maximum, a minimum and a point of inflexion.

This kind of functionality is made possible by calling a computer algebra system (CAS); currently STACK uses the open–source Maxima system (try it out). It can not only manipulate students' answers and give responsive feedback but can also help to generate problems randomly from a single template and provide corresponding worked solutions. In his talk Chris gave us some fascinating insights into the challenges that mathematics presents in this area, in particular, how to handle the subtleties of notation in

  • students' submitted answers (fx might mean f times x or the functional value f(x))
  • the CAS (interpreting various positions of minus signs for instance)

STACK tolerates informal entries in student answers (for instance, accepting 3xy instead of 3*x*y) and encourages students to "validate" their answers, in other words, to confirm that the program has correctly interpreted their entry when it displays the formal version.

We are currently exploring the possibility of using STACK for some low–level assessment taken by large numbers of first–year mathematics and statistics students. Its PHP architecture marries well with the Department's learning resources website Mathstuff.

Mathletics
Martin Greenhow and his team of research students have been developing this CAA resource and using it for their teaching and assessment at Brunel University for some years now. Its strengths include

  • well–developed and thoughtful pedagogy
  • large banks of mathematics questions aimed at the A–Level/starting–university zone.

The questions (perhaps better thought of as "question templates") are written in a combination of HTML, MathML, Javascript, SVG script, and can call on a library of functions for such things as displaying a polynomial intelligibly in the conventional way. Responsive feedback is a central feature of Mathletics: question templates include randomised parameters and context-aware alternatives; the feedback of hints and model solutions respects the choice of parameters and context, and responds to student mistakes by using their wrong answers to guess at "mal-rules" or common student misconceptions. Experience at Brunel has shown that this feedback plays a central role in student learning. Although the stand-alone questions can be interpreted in a suitable web browser endowed with the appropriate plug-ins, they are really designed for use with Question Mark Perception, which can deliver sequences of questions, record and analyse students' answers, provide the feedback and so on.
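To give a flavour of what such a polynomial-display routine has to do (this toy version is in Python and is certainly not the Mathletics code itself), zero terms are dropped, coefficients of 1 are hidden, and signs are tidied so that randomly chosen coefficients still read conventionally:

def poly_to_string(coeffs):
    """coeffs[k] is the coefficient of x^k, e.g. [-2, 3, -3, 1] -> 'x^3 - 3x^2 + 3x - 2'."""
    terms = []
    for power in range(len(coeffs) - 1, -1, -1):
        c = coeffs[power]
        if c == 0:
            continue                                               # drop zero terms entirely
        sign = ("-" if not terms else " - ") if c < 0 else ("" if not terms else " + ")
        mag = abs(c)
        coeff = "" if (mag == 1 and power > 0) else str(mag)       # hide coefficients of 1
        var = "" if power == 0 else ("x" if power == 1 else f"x^{power}")
        terms.append(f"{sign}{coeff}{var}")
    return "".join(terms) or "0"

print(poly_to_string([-2, 3, -3, 1]))   # x^3 - 3x^2 + 3x - 2
print(poly_to_string([0, -1, 0, 5]))    # 5x^3 - x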

Coding individual questions is a skilled and time–consuming activity, but when the parameters are varied and the context is changed, each template generates thousands of different questions (as many as 10^20 for some templates!). Thus the Mathletics framework can create essentially unlimited numbers of different exams on the same set of topics, allowing students to learn by extended practice. Mathletics is responsive to issues of accessibility, gender, and ethnic background.

The demanding requirements of authoring have to be set against the huge searchable repository of existing questions: new–style Mathletics (with randomisation) has about 1500 question styles (each realising thousands or millions of questions) spanning around 120 topics at GCSE/AS and A–level/university levels 1 and 2. They range broadly across algebra, geometry, calculus (incl. Laplace transforms, differential equations and vector calculus), logic, decision maths, numerical methods, economics applications, probability and statistics. New questions are constantly being developed (as part of the Mathematics for Economics: Enhancing Teaching and Learning Project (METAL) for instance), and Martin is keen to encourage others to join in this creative process.

We are planning to try out Mathletics in the Autumn on a small subset of the first-year engineering students without A-Level Maths. If this pilot is successful, we would hope to use it to support the mathematical needs of the whole cohort later on. We have a site licence for QM Perception and are well placed for this. Although my attempts a few months ago to get Mathletics running on the University network were abortive, one of our postgraduate CAA team members, John Hattersley, is now on the case. I hope to report soon on his success, at least on the well-tried version 3.4 of Perception, which will run for another year here; another challenge will be to run it on version 4.2, to which we are upgrading next month. Stay tuned.


July 07, 2006

Looking back to 14th March …

Follow-up to Pi in the Sky? from Computer-aided assessment for sciences

… the date of a Computer–Aided Assessment workshop at Warwick showcasing some of the software we are evaluating as part of an in–house CAA Project (for 'in–house' read 'Warwick Science Faculty'). Four software packages were exposed to scrutiny, and I would now like to say a little about each, our experience so far and the plans we have for them. More details will be posted soon on the Project website.

Maple TA (MTA).
This commercial Teaching and Assessment package from the Canadian firm Maplesoft is built on their well-known computer algebra system Maple. This foundation gives MTA one of its main strengths, namely the way it handles mathematics:

  • at the authoring stage — it has a palette of mathematical symbols and a LaTeX facility
  • in its rendering of equations onscreen
  • its ability to include random parameters in question templates
  • its ability to parse mathematically–equivalent answers.

At the workshop, its praises were sung by a representative from Adept Science (who distribute MTA in the United Kingdom) and moderated "Warts and All" by Michael McCabe, who had used it to assess Mathematics students at Portsmouth University a few months earlier.

Five weeks after the workshop, we used MTA, in tandem with Question Mark Perception, for a 50-minute summative test taken by 166 students registered for a second-year module in elementary number theory. The test contained 11 multiple-choice questions (MCQs) and contributed around 6% to the module credit. It was fairly easy to author the questions, the test was reliably delivered on the day by the hosting service, and I found it straightforward to extract and process the students' answer files from its database. On the negative side I found its imposed marking scheme and MCQ format frustratingly restrictive and laboriously had to adjust marks student-by-student to allow 3 for a correct answer, 1 for 'don't know' and zero for a wrong choice, normalising the totals by subtracting 8. In particular, I could not return the students' scores as soon as they clicked the submit button. Here Perception won hands–down on pedagogical flexibility but couldn't compete on the maths.

We were fortunate in getting MTA for an extended trial evaluation period. We tried, without success, to run it locally on a Java server that was configured for other apps. Subsequently we changed to the hosted service. Apart from an unfortunate loss of data two days before the test, this worked robustly and we have now taken out a 500–student departmental licence for the coming academic year, when we hope to go beyond the simple MCQ format and begin to exploit MTA's full mathematical capabilities.

WeBWorK
This is a mature open–source assessment package developed with generous NSF funding at the University of Rochester. It has been around for over a decade and is currently used by over 80 (mainly North American) universities. When Jim Robinson of the Warwick Physics Department started looking for a suitable CAA package to improve and reinforce the mathematical abilities of his department's first–year students, he listed the following criteria. The software should preferably be:

  • Capable of rendering mathematics
  • Free
  • Client–independent and available off site using web–based technology

It should also offer:

  • A good bank of problems at the right level
  • Easy authoring, customizable question formats and individualised problems
  • Instant feedback — hints, right/wrong, model solutions

WeBWorK has all these desirable features. Earlier this year, Jim installed WeBWorK on a Linux box, an old 500MHz PC. He needed some Linux systems admin experience – plus a few hours (there is no installation wizard) – to install the WeBWorK software and problem libraries; it needs Perl, Apache, an SQL database server (MySQL), LaTeX, dvipng and a few other apps (all free). Thereafter all course administration is web based.

Since Christmas, we have been running a pilot using volunteer Physics students taking the first–year Maths for Scientists module. Initially we plundered the very large collection of question banks to create a sample assignment based on the first term's material (mainly calculus) and subsequently provided a second assignment of home-made questions on the second term's content (including linear algebra). The student feedback is currently being analysed, and we have begun to create assignments to be used by the whole cohort of Physics students taking the Maths for Scientists module next term.

As this entry is growing rather long, I will take a break now and discuss STACK and Mathletics later.


March 03, 2006

RU Autistic? The Eyes Have It

Writing about web page http://www.centralquestion.com/archives/2006/03/mind_reading_test.html

Try this test (it takes about 10 mins). It was created with the elegant Flash-based software Question Writer (see my earlier blog entry).

Surprising Statistic

Writing about web page http://mathstore.ac.uk/articles/maths-caa-series/feb2006/

Imagine a test with 5 questions, where each question is selected randomly from a bank of 10 related alternatives. Some 100,000 different tests can be generated.

Question: How many of these would you need to generate, on average, to have sight of all 50 alternative questions?

Answer: Only 43 (Douglas Adams was one out).

This surprising fact should give pause for thought to an author of online exams concerned about cheating. One of my favourite models for driving learning through assessment is to offer students a number of attempts (say 5) at randomly-generated tests in formative mode before they take the one that counts for credit. In the given example, 10 colluding students with 5 formative attempts each (50 tests between them) could suss out all, or most of, the questions stored in the banks before they take their summative test.
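For anyone who wants to check the order of magnitude, here is a quick Monte Carlo sketch (my own, not part of the cited article): each simulated test draws one question uniformly at random from each of the five banks, and we count how many tests are needed before every alternative has been seen.

import random

def tests_until_all_seen(num_slots=5, bank_size=10, rng=random):
    seen = [set() for _ in range(num_slots)]
    tests = 0
    while any(len(s) < bank_size for s in seen):
        for s in seen:
            s.add(rng.randrange(bank_size))
        tests += 1
    return tests

trials = 20000
average = sum(tests_until_all_seen() for _ in range(trials)) / trials
print(round(average, 1))    # averages out in the low forties, consistent with the figure above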



