All entries for January 2007

January 29, 2007

CAA Fitness for Purpose: User Experience

Follow-up to Fit for Purpose? from Computer-aided assessment for sciences

Another heading in my check-list of criteria for judging whether CAA software, particularly that with mathematical capabilities, is up to the job. As usual, I welcome your comments and further ideas. Today I look at the heading

User Experience

Logging in and Submitting: Within a given intranet, so-called “single sign-on” avoids having to distribute special user IDs and passwords to students, who then have to remember them to access an assignment. With single sign-on, it is easier to call on institutional databases to display personal information (name, number, mug-shot) onscreen as an identity check for student and invigilator alike. Once signed on, students should be able to click quickly to the test they want to take. It should be clear how their answers to questions should be submitted for marking (grading), singly and/or in one final submission, and whether multiple attempts are permitted. Answers should be regularly saved to a local drive in case of network or software failure.

Navigation and Layout: It should be easy to navigate quickly through the questions (in any order), and to choose to display them all together on a scrolling page or one at a time. From any test page it should be clear which questions (i) have already been attempted and (ii) have been finally submitted. Each page layout should be visually easy to interpret (e.g. displayed equations, clear separation of question statement from answer boxes with hypertext for actions close by, adjacent questions with different background colours). Anchors to keep the right part of a long page in view, drop-down menus, and prompts to open help windows can all improve the user experience and sense of being in control.

Entering Answers: Entering text with standard keyboard characters is usually unproblematic – answer-boxes should accommodate the longest imaginable answer, they should display an easily-readable font, and should have the focus with a flashing cursor when appropriate. Entering non-standard symbols, in particular mathematical expressions, presents a challenge. There are some well-tried ways of dealing with this: informal entry using pocket-calculator conventions, LaTeX markup, a palette of standard symbols that can be dragged into the answer box. CAA software is unforgiving when trying to make sense of the kind of informal entry easily understood by humans, and so rigorous adherence to correct mathematical syntax (brackets, arithmetical operations, functional notation) is usually required. (For instance, WeBWorK is fairly tolerant of informal entry and includes a summary of user syntax in a pane on the right-hand side of its pages.)
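
To illustrate the syntax issue, here is a minimal sketch in Python with SymPy (chosen purely as an example parser; none of the packages above necessarily work this way) showing strict entry alongside a calculator-style fallback that tolerates implicit multiplication:

```python
# Minimal sketch (SymPy used only as an example parser): strict vs. informal entry.
from sympy.parsing.sympy_parser import (
    parse_expr,
    standard_transformations,
    implicit_multiplication_application,
)

# Strict syntax: the student must type the multiplication operator explicitly.
strict = parse_expr("2*x + 1")

# Informal, pocket-calculator style: "2x + 1" is accepted once the
# implicit-multiplication transformation is added to the parser.
informal = parse_expr(
    "2x + 1",
    transformations=standard_transformations + (implicit_multiplication_application,),
)

print(strict, informal, strict == informal)  # 2*x + 1   2*x + 1   True
```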

Recording Progress: There are usually several stages in answering a question online: (1) entering the answer in the appropriate box(es); (2) validating the answer to check that the program correctly interprets it (especially if symbolic expressions are involved); (3) saving the answer (often combined with validation); (4) reviewing the answer and editing it; (5) submitting the answer for marking/grading; (6) making further attempts if allowed; (7) submitting the final attempt. It is important for this progress data to be displayed in a table on every page of the assessment, with direct navigation to uncompleted questions. It is also helpful to record individual question and total scores in this table and to display “time remaining”, say in minutes and as an analogue clock.
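
A rough sketch of how the stages above might be tracked per question (all names and fields here are illustrative, not taken from any particular package):

```python
# Sketch: a per-question progress record covering the stages listed above.
# All names and fields here are illustrative, not taken from any particular package.
from dataclasses import dataclass
from enum import Enum, auto
from typing import List, Optional

class Stage(Enum):
    UNATTEMPTED = auto()
    ENTERED = auto()      # (1) answer typed into the box
    VALIDATED = auto()    # (2) interpretation confirmed
    SAVED = auto()        # (3) saved, possibly combined with validation
    SUBMITTED = auto()    # (5) sent for marking/grading
    FINALISED = auto()    # (7) final attempt submitted

@dataclass
class QuestionProgress:
    number: int
    stage: Stage = Stage.UNATTEMPTED
    attempts_used: int = 0
    score: Optional[float] = None

def progress_table(questions: List[QuestionProgress]) -> str:
    """Render the summary table argued for above: one row per question."""
    return "\n".join(
        f"Q{q.number:<3} {q.stage.name:<12} attempts={q.attempts_used} "
        f"score={'-' if q.score is None else q.score}"
        for q in questions
    )

print(progress_table([QuestionProgress(1, Stage.SUBMITTED, 1, 4.0),
                      QuestionProgress(2)]))
```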

Training and In-Test Help: It is desirable to give students a practice assignment in conjunction with an online tutorial to familiarise them with the assessment format and the syntax for entering symbolic notation. This can be delivered in advance of the test or as an initial part of it. A summary of this user guidance should be easily accessible at any stage of the test, perhaps through a help-box or in a separate pane of the test window.

Accessibility: Here is a short checklist of desirable features for optimising access to web pages: (1) user control of font styles and sizes (especially important for the display of mathematics, which may be embedded as graphics); (2) text equivalents for graphics and multimedia; (3) simple and logical navigation; (4) control over text and background colour; (5) compatibility with a screen-reader that handles mathematics and other symbolic notation (programs now exist to read mathematics that is coded in MathML – e.g. Design Science’s MathPlayer: see http://www.dessci.com/en/products/mathplayer/tech/accessibility.htm). Entering mathematical answers is particularly difficult for visually-impaired users, and so an intelligent screen-reader for validation of answers would provide helpful reassurance.
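
As an illustration of point (5), here is a minimal sketch of emitting MathML for symbolic content rather than a static image, using SymPy's MathML printer purely as an example (it is not a feature of any package discussed here):

```python
# Sketch: emit MathML for symbolic content instead of a static GIF, so that a
# MathML-aware screen reader (e.g. MathPlayer) can voice it. SymPy is used here
# purely for illustration; it is not part of any package discussed above.
import sympy as sp
from sympy.printing.mathml import mathml

x = sp.symbols("x")
expr = sp.sqrt(x**2 + 1)

# A (content) MathML string that could be embedded in the question page.
print(mathml(expr))
```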


January 24, 2007

CAA Fitness for Purpose: Pedagogy

Follow-up to Fit for Purpose? from Computer-aided assessment for sciences

This is the next contribution to my list of criteria for judging whether CAA software, particularly that with mathematical capabilities, is up to the job. Today I look at the heading

Pedagogy

Question types: MCQs, MRQs, yes/no, hot-spot, drag-and-drop, and so on—the more the merrier! For the assessment of deeper mathematical knowledge, more searching questions can be set when the assessment package can call on the services of a computer algebra system (CAS) – e.g. Maple TA and STACK. An option for multiple-part questions is valuable, especially if (i) partial credit is available and (ii) answers to later parts can be judged correct when calculations based on incorrect answers to earlier parts are correctly performed.
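
A minimal sketch of the "error carried forward" idea in (ii), for a hypothetical two-part question (the marking scheme and tolerance are my own illustrative choices):

```python
# Sketch of "error carried forward" marking for a hypothetical two-part question.
# Part (a): find the gradient m of the line through (x1, y1) and (x2, y2).
# Part (b): find y at x = 10. Part (b) is marked against the student's own m,
# so a wrong answer to (a) need not also cost the marks for (b).

def mark_two_part(student_m, student_y, x1, y1, x2, y2, tol=1e-6):
    correct_m = (y2 - y1) / (x2 - x1)
    mark_a = 1 if abs(student_m - correct_m) < tol else 0

    followed_through_y = y1 + student_m * (10 - x1)   # uses the student's m
    mark_b = 1 if abs(student_y - followed_through_y) < tol else 0
    return mark_a, mark_b

# Wrong gradient, but part (b) computed consistently with it: credit for (b) only.
print(mark_two_part(student_m=3.0, student_y=28.0, x1=1, y1=1, x2=3, y2=5))  # (0, 1)
```
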
Marking/Grading, Scoring: It is important for the author (i) to have complete control over the marking system for each question, (ii) to be able to give the user full information about how each question will be scored, and (iii) to have the option of revealing scores to the user at specified stages. Default marking schemes may be useful but should be easy to override and should allow an author to specify a different marking scheme for each question. If an answer involves mathematical expressions, the software should be able to recognise equivalent forms of the correct answer.
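
A hedged sketch of recognising equivalent answers, using SymPy's simplification purely as an example of the idea:

```python
# Sketch: accept any answer algebraically equivalent to the model answer.
# SymPy is used only as an illustration of the idea.
import sympy as sp

def equivalent(student: str, model: str) -> bool:
    """True if the two expressions simplify to the same thing."""
    difference = sp.sympify(student) - sp.sympify(model)
    return sp.simplify(difference) == 0

print(equivalent("(x + 1)**2", "x**2 + 2*x + 1"))  # True
print(equivalent("sin(x)**2", "1 - cos(x)**2"))    # True
print(equivalent("x + 1", "x - 1"))                # False
```
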
Feedback: I believe this to be the most important pedagogical feature of CAA software! The author should be able to provide various types of feedback for each question (e.g. (1) whether the submitted answer was right or wrong, (2) the bare marks scored, (3) the correct answer—for instance, the correct MCQ choice, numerical entry, or symbolic expression, (4) the full worked solution) and to specify the point at which the feedback is made available (e.g. upon submission of a single answer, of a completed assessment, or at some later time). If questions contain variable parameters, the feedback should be tailored to the parameter values used. Another useful feature is an option to provide one or more graded hints after a wrong answer and to adjust the marks accordingly. An advanced feature, explored in Mathletics, is to be able to use a student’s answer to guess at errors or misconceptions (malrules) and to respond to them in the feedback.
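
A toy sketch of the malrule idea (my own illustration, not the Mathletics implementation): the student's answer is compared against expressions produced by known wrong rules, and a matching malrule triggers targeted feedback:

```python
# Sketch of malrule-driven feedback (illustrative only; not how Mathletics does it).
# Question: expand (x + 3)**2. Compare the student's answer with known wrong "rules".
import sympy as sp

x = sp.symbols("x")
correct = sp.expand((x + 3)**2)

malrules = {
    "It looks as if you used (a + b)^2 = a^2 + b^2; don't forget the cross term.":
        x**2 + 9,
    "It looks as if you doubled instead of squaring: (a + b)^2 is not 2(a + b).":
        2*(x + 3),
}

def feedback(student: str) -> str:
    answer = sp.sympify(student)
    if sp.simplify(answer - correct) == 0:
        return "Correct."
    for message, wrong_form in malrules.items():
        if sp.simplify(answer - wrong_form) == 0:
            return message
    return "Incorrect; try expanding the brackets term by term."

print(feedback("x**2 + 9"))
```
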
Random features: The inclusion of varying degrees of randomness in the construction of individual questions and whole assignments/tests/exams can significantly enhance the educational value of CAA and simultaneously reduce the risks of cheating. At the assignment level, the software should be capable of selecting each question randomly from a specific bank of questions which all test the same skill/knowledge/understanding. At the question level, there is considerable scope for randomised variation, using place-holders to vary such things as units, names, even subject contexts; and in mathematical subjects, using parameters within specified ranges of numerical values that require students to carry out different calculations, each testing essentially the same knowledge or skills. Considerable care is needed to ensure the questions make sense for all choices of variables (for instance, avoiding division by zero), but in a science discipline it is possible to generate millions of different, but educationally equivalent, questions. This makes copying answers pointless and allows students virtually unlimited practice in formative mode. When sufficient randomness is built into a question template, it becomes a special case of the reusable learning object (RLO) beloved of educational theorists who study computer-mediated learning.
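
A small sketch of randomised question generation with degenerate cases rejected (the template and parameter ranges here are invented for illustration):

```python
# Sketch: generate randomised but well-posed instances of a hypothetical template,
# "Solve a*x + b = c for x", rejecting degenerate parameter choices (here a = 0).
import random

def make_instance(seed=None):
    rng = random.Random(seed)
    while True:
        a, b, c = (rng.randint(-9, 9) for _ in range(3))
        if a != 0:          # avoid division by zero in the model answer
            break
    return {"question": f"Solve {a}x + {b} = {c} for x", "answer": (c - b) / a}

for seed in range(3):
    print(make_instance(seed))
```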


January 23, 2007

Fit for Purpose?

Software for computer-aided assessment comes in many shapes and sizes serving many purposes, ranging from simple quiz-building to the construction of complex question templates involving random parameters that are designed to test deeper understanding and provide intelligent feedback.

It is evaluation time for the software we have been trying out in the Science Faculty at Warwick. Because my project is specifically aimed at science disciplines, we have concentrated the trials on four CAA packages with serious mathematical capabilities: Maple TA, Mathletics, STACK and WeBWorK.

In order to judge the merits of these behemoths, it is important to lay down the criteria we will use. I have therefore started to produce a list of features and qualities that might be considered desirable in CAA software of this kind. PLEASE ADD TO MY LIST OR SUGGEST AMENDMENTS.

I have set out the features and qualities under the following headings:

  • Authoring
  • Pedagogy
  • User-experience
  • Administration
  • Data Security
  • Robustness

I will deal with each heading in separate blogs for ease of digestion. Today I start with:

Authoring

Ease of use (Ability to author questions in browser window, intelligent fully-functional editor (see Work flow below), quick access to current projects, good GUI and navigation, natural syntax for writing questions, flexible file and folder structure for organising work, automatic save before closing browser, easy user account creation, spreadsheet import and export of both account and assignment data, optimised for accessibility, re-usable user-created templates for (i) writing tests and (ii) sets of properties and permissions.)
Mathematics entry and handling (WYSIWYG maths editor for symbolic and mathematical expressions. GIF-free options – MathML, (La)TeX, or WebEQ with MathPlayer. Platform-independent, visually-pleasing rendering of symbols with scalable fonts and colours. TeX quality for both rendering and range of symbols. Intelligent display of mathematical objects (e.g. polynomials).)
Sharing questions and assignments/tests (Import/export of (i) questions created in the same software and (ii) text from other applications. Compatibility with QTI and other interoperability standards. Control of permissions for other users.)
Creating assignments/tests (Easy selection from question banks. Easy control of assignment delivery options (ability to permute questions, permute parts of MCQs, choose “single scrollable page” or “one question per page”). Full control over length of test, period of availability, user-access, feedback timing.)
Work flow (WYSIWYG editor with (i) full features (e.g. find and replace) and (ii) instant rendering of modified entry. Cut and paste in all question fields (including mathematical expressions). Regular automatic-saving option. Control over time out. One-click question try-out.)
Testing (Ability to try out questions and feedback exactly as it would be experienced by a user. Separate windows for question testing and editing. Debugging and comprehensive error-reporting.)
Question, assignment and user tagging (Ability to create a number of database fields (e.g. level, topic, subtopic, creation date) for quick search and retrieval of questions from large banks. Likewise for retrieval of users from performance database.)
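
A small sketch of what such tagging and retrieval might look like (the field names are illustrative, not drawn from any of the packages under review):

```python
# Sketch: database-style tagging of questions and retrieval by field values.
# Field names (level, topic, subtopic, created) are illustrative.
from dataclasses import dataclass

@dataclass
class QuestionRecord:
    qid: str
    level: int
    topic: str
    subtopic: str
    created: str  # ISO date

BANK = [
    QuestionRecord("q001", 1, "calculus", "differentiation", "2007-01-15"),
    QuestionRecord("q002", 2, "calculus", "integration", "2007-01-20"),
    QuestionRecord("q003", 1, "algebra", "factorisation", "2007-01-22"),
]

def find(bank, **criteria):
    """Return the questions whose fields match all of the given criteria."""
    return [q for q in bank
            if all(getattr(q, field) == value for field, value in criteria.items())]

print(find(BANK, topic="calculus", level=1))  # -> [the q001 record]
```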


January 11, 2007

Think ahead

Spotted in the Terrace Gardens, Richmond upon Thames:

You have been warned!

Clues of the year

If someone asked for my “best cryptic crossword clues of 2006”, I would include these, culled as usual from The Week:


Mater, mater! (7 letters, starting with O)

Food for dismissive feminist? (5,4)

January 10, 2007

Clippy says it all

Writing about web page http://www.technologyreview.com/Infotech/17969/page1/

It’s always comforting to have one’s prejudices confirmed.

Here is an excerpt from an interview with Charles Simonyi, Microsoft’s former chief architect, the tutelary genius behind its most famous applications, the inventor of the method of writing code that the company’s programmers have used for 25 years.

The excerpt explains why I hate software that intrusively imposes its idea of what I need and makes it hard for me to change it.

“On a gray afternoon last October, I sat down with Simonyi in Bellevue, WA, in front of two adjacent screens in his office at Intentional Software, the company that he founded after he left Microsoft in 2002 to develop and commercialize his big idea (meta-programming). Simonyi was racing me through a presentation he was preparing for an upcoming conference; he used Microsoft Office PowerPoint slides to outline his vision for the proposed great leap forward in programming. He was in the middle of moving one slide around when the application just stopped responding.

In the corner of the left-hand screen, a goggle-eyed paper clip popped up: the widely reviled “Office Assistant” that Microsoft introduced in 1997. Simonyi tried to ignore the cartoon aide’s antic fidgeting, but he was stymied. “Nothing is working,” he sighed. “That’s because Clippy is giving me some help.”

I was puzzled. “You mean you haven’t turned Clippy off?” Long ago, I’d hunted through Office’s menus and checked whichever box was required to throttle the annoying anthropomorph once and for all.

“I don’t know how,” Simonyi admitted, with a little laugh that seemed to say, Yes, I know, isn’t it ironic?

It was. Simonyi spent years leading the applications teams at Microsoft, the developers of Word and Excel, whose products are used every day by tens of millions of people. He is widely regarded as the father of Microsoft Word. (I am, of course, using Word to write these sentences.) Could Charles Simonyi have met his match in Clippy?

Simonyi stared at his adversary, as if locked in telepathic combat. Then he turned to me, blue eyes shining. “I need a helper: a Super-Clippy to show me where to turn him off!” Simonyi was hankering for a meta-Clippy.”

