May 21, 2008

Stout Party

An elegant clue from my usual source:

Beer with creamy head such as Guinness (4 letters)

It appeared first in The Guardian, but I don’t know the author.


January 08, 2008

Beatlefication?

A nice ‘clue of the week’ in The Week, taken from a puzzle by Virgilius in The Independent:

As good as John, Paul or George but not Ringo? (7, last letter y)


September 06, 2007

We're watching you!

Writers on assessment distinguish between formative and summative modes (see Definitions below), but the distinction is too often treated as a black-and-white dichotomy. As part of a plea for shades of grey, let me suggest the watched or moderated mode of assessment.

In watched mode, students are allowed a limited or unlimited number of attempts at a given test (or at several variations of it). All their activity is recorded, and they know that their teacher or assessor has access to their results and that the information may be used to form an interim judgement about their commitment, to discuss their progress, and to provide feedback, even though their marks do not influence their progression or their degree.

Other “shades of grey”, please.

Definitions of Formative/Summative Assessment

There are many descriptions of these concepts. Here are two I feel happy with in the context of a degree course:
  • Formative assessment is designed to inform development, and to give learners practice at the assessed activity and feedback on their progress; but it does not contribute to the overall assessment.
  • Summative assessment contributes to the final outcome of a student’s degree and may include unseen examinations, essays, dissertations or presentations.

April 27, 2007

Quizbuilder's here

Warwick’s Elab has published the first draft of Quizbuilder, its elegant tool for writing simple online tests with the minimum of fuss. You might like to try your hand at this short test of 11 multiple-choice questions on elementary number theory, which took me less than an hour to write. The LaTeX equations are a little wobbly in their baselines, but perfectly fit for purpose.


OU = Open Utopia?

As part of its nearly £5m investment in Moodle, the free open-source course management system (CMS, aka VLE), the Open University (OU) is currently adding its in-house assessment software OpenMark to Moodle’s assessment capabilities. It will also incorporate some of OpenMark’s authoring strengths into the Moodle Quiz. The full integration may take some time to complete, but it will mean that OpenMark becomes open source too.

OpenMark’s strengths include the ability to display complicated mathematical and symbolic expressions and to provide graduated, targeted feedback in response to multiple attempts at variations of the same question. Given the OU’s high production standards and long-term funding, this development can only bode well for the future of online assessment, in particular the assessment of mathematics-based subjects.
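
To make the “graduated, targeted” idea concrete, here is a minimal sketch (in Python, with invented hints, mark thresholds and function names; it is not OpenMark’s actual behaviour) of feedback that becomes progressively more explicit, and a mark that shrinks, over repeated attempts at the same question:

    # Illustrative only: graduated feedback over repeated attempts.
    HINTS = [
        "Not quite. Check the sign of your answer.",           # after 1st wrong attempt
        "Remember the question asks for an exact value.",      # after 2nd wrong attempt
        "Full worked solution: ...",                            # after the final attempt
    ]

    def mark_attempt(student_answer, correct_answer, attempt_number, max_attempts=3):
        """Return (score, feedback) for one attempt at a question."""
        if student_answer == correct_answer:
            # e.g. full marks first time, 70% second time, 40% third time
            return max(0.0, 1.0 - 0.3 * (attempt_number - 1)), "Correct."
        if attempt_number >= max_attempts:
            return 0.0, HINTS[-1]
        # No score yet; hand back a hint appropriate to the attempt number
        return None, HINTS[min(attempt_number - 1, len(HINTS) - 1)]

In a real system the feedback would also be targeted at the particular wrong answer (a sign slip, a common misconception), not just at the number of attempts used.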


April 24, 2007

CAA Fitness for Purpose: Data Security and Robustness

Follow-up to Fit for Purpose? from Computer-aided assessment for sciences

Data Security

Three kinds of data need to be kept safe: (i) the questions stored for a test; (ii) student answers entered during a test; (iii) submitted answers and results.

  • Keeping tests safe: It is clearly important to keep tests, questions, solutions and feedback safe from prying eyes, especially if they are to be used in summative mode. So the question database should be encrypted or otherwise made hacker-proof. It should also be regularly backed up in case of hardware failure (having lost questions on a hosted service, I would strongly advise authors to back up their work locally too). If a degree of randomness is introduced to reduce the risk of cheating (via multiple question banks or question templates with parameters, say), then thought should be given to the ease with which determined students could circumvent the protection thus provided (see this blog entry, for instance).
  • In-test Security: Some assessment software allows submission of answers to be postponed until the end of the test. This is dangerous. A user who has entered 9 out of 10 answers when the system crashes without saving them would have every reason to be angry. My preferred option is to require an entered answer to be validated (and simultaneously saved) before the user is allowed to proceed to the next question (or at least to warn users that they may lose their answer if they do not validate before moving on); a minimal sketch of this validate-and-save flow follows this list. Validation allows the software to interpret the answer and return its interpretation to the user in a standard form; it is an important stage in dealing with answers to questions with symbolic content, where the machine may not be able to cope with the informal, context-dependent representations humans are used to. Another kind of security involves limiting cheating during a test: impersonation, or copying from a neighbour, for example. Invigilation is still the safest answer to this.
  • Securing Results: The most important thing about the results database, apart from the obvious need for it to be backed up and made proof against hacking, is that it should store every bit of activity engaged in by a student during a test. If a student challenges their test outcome, the examiner needs to be able to trace every step the student took, including false validations, inappropriate mouse clicks (some assessment software swoons at the click of a browser back button), and the relaunching of the test. Although it is a good idea to insist that students jot their work down on paper during a test, this is not much help if a system fault requires a new test to be generated and it comes with different values of the random parameters. When parameters are used, the system should also be able to deliver the same test to a student who, through no fault of their own, is forced to make a fresh start; a sketch of seeded parameter generation follows this list. As I have said elsewhere, it is a great help if the database fields are chosen to optimise efficiency and flexibility in searching the results.
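
Below is a minimal sketch, in Python, of the seeded-parameter point raised above: if the random values in a templated question are derived from identifiers already stored with the student’s test record, a re-issued test carries exactly the same values instead of a fresh variant. The function and field names are hypothetical and do not describe any particular package.

    import random

    def parameters_for(student_id, test_id, question_no):
        """Generate the random parameters for one templated question.
        Seeding from stored identifiers means the same test can be
        regenerated after a crash or an enforced fresh start."""
        rng = random.Random(f"{student_id}:{test_id}:{question_no}")
        return {
            "a": rng.randint(2, 9),     # e.g. coefficients in the question template
            "b": rng.randint(10, 99),
        }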
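
And here, under the same caveat that every name is invented for illustration, is a sketch of the validate-and-save flow: validating an answer both echoes the system’s interpretation back to the student and writes a row to an audit log, so nothing is lost if the session dies and every step can be traced afterwards.

    import datetime
    import sqlite3

    def init_audit(db):
        db.execute("""CREATE TABLE IF NOT EXISTS audit_log (
            student_id TEXT, test_id TEXT, question_no INTEGER,
            event TEXT, raw_answer TEXT, interpreted TEXT, timestamp TEXT)""")

    def validate_and_save(db, student_id, test_id, question_no, raw_answer):
        """Interpret the raw answer, record it, and return the interpretation
        for the student to confirm before moving on to the next question."""
        interpreted = raw_answer.strip()   # stand-in for real parsing/CAS work
        db.execute(
            "INSERT INTO audit_log VALUES (?, ?, ?, ?, ?, ?, ?)",
            (student_id, test_id, question_no, "validate",
             raw_answer, interpreted, datetime.datetime.utcnow().isoformat()),
        )
        db.commit()
        return interpreted

Fields such as event and timestamp are exactly the sort of thing that makes the results database easy to search when an outcome is challenged.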

March 28, 2007

MADCAP Musings

Follow-up to Informal Workshop at Warwick on 26th March on CAA with Maths from Computer-aided assessment for sciences

For 5 hours on Monday, a score of us shared thoughts about online assessment, especially the assessment of mathematics. Here are some of my headline takes on the day:

  • Computer-aided assessment of mathematical knowledge and understanding has special needs but offers special rewards
  • SToMP, PROMPT, Mathletics, iAssess, OpenMark, STACK, WeBWorK, Maple TA are among the many maths-friendly CAA packages used by or known to people attending the workshop. These tools have many common features but can rarely talk to each other. So much work duplicated but not shared. Does it really have to be like that?
  • Question and Test Interoperability (QTI) standards to the rescue? Not with the generic Version 1, at least. A better chance with the developing Version 2, which will admit optional extensions users can create to handle special needs, in particular those of mathematics. Will they ever work sufficiently well to justify the limitations they impose and the extra attention they require?

There are plans to produce a proper report of the day. Watch this space.


March 19, 2007

Informal Workshop at Warwick on 26th March on CAA with Maths

Follow-up to MADCAP from Computer-aided assessment for sciences

Venue: The Mathematics Institute in the Zeeman Building (find us)

Outline Programme

10.30 onwards: Coffee in the Maths Common Room

11.00 till 1.00: Short presentations and long discussions in B3.02

1.00 till 1.45: Lunch in the Maths Common Room

1.45 till 3.00: Short presentations and long discussions in B3.02

3.00 till 3.30: Summing up and future plans

3.30 onwards: Tea in the Maths Common Room


March 14, 2007

Elephant's Trunk

As a cockney I enjoyed this “clue of the week” in a recent copy of The Week:

Canned music producers (6,3,5)

MADCAP

On Bill Foster’s initiative, we are planning a small informal workshop at Warwick on Monday, 26th March to discuss priorities for computer-aided assessment (CAA) in Higher Education (HE). Bill accepts responsibility for the acronym MADCAP, short for “Mathematics and Computer-Aided Practice Group”. He has had considerable experience using the i-Assess package for large-scale mathematics assessment at the University of Newcastle. We will be joined by colleagues exploring other approaches at Birmingham, Brunel, Portsmouth, Surrey, The Open University, and Warwick.

Here are some topics we hope to talk about:

1. Assessing symbolic material (in particular mathematics) online. Which tools handle this well? How effectively can their functionality be bent to serve our pedagogic needs? Here are some aspects:

  • Authoring. Types of input: LaTeX, Asciimath, MathML, plug-ins
  • Student Input. Formal or informal syntax, WYSIWYG, symbolic menus/palettes
  • Feedback. Making intelligent use of student answers. The role of computer algebra systems such as Maple or Maxima (see the sketch after this list)
  • Question types and Conceptual Understanding. Is CAA only effective at the early stages of mathematical education, with large classes and concomitant efficiency gains? Can we go beyond the standard question types to probe deeper understanding in such different areas as Analysis, Algebra and Statistics?
  • QTI issues. Are these relevant? Do we care? (See below)
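
As a minimal sketch of what “making intelligent use of student answers” with a computer algebra system can look like, the fragment below uses SymPy (standing in here for Maple or Maxima) to test whether a student’s expression is algebraically equivalent to the model answer, rather than merely identical as a string. The function name and the decision to treat unparsable input as a validation failure are illustrative choices, not a description of any of the packages above.

    import sympy

    def equivalent(student_input, model_answer):
        """True if the two expressions are algebraically equivalent."""
        x = sympy.symbols("x")
        try:
            student = sympy.sympify(student_input, locals={"x": x})
            model = sympy.sympify(model_answer, locals={"x": x})
        except sympy.SympifyError:
            return False   # unparsable input should prompt re-validation, not a mark
        return sympy.simplify(student - model) == 0

    # equivalent("(x+1)**2", "x**2 + 2*x + 1")  ->  True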

2. How can HE institutions influence and gain some control over the development of assessment tools? Are there models of development beyond “buying out of the box”?

  • JISC is funding the major development of an assessment tool which meets the Question and Test Interoperability standards (QTI 2.1), but its specification does not accommodate symbolic input. Is this a problem, or should we make do with variants of MCQs and numeric input types to satisfy the assessment needs of mathematics or statistics? Alternative products handling symbolic content are available, but commercial software usually means some loss of control. (One commercial developer will be present at this meeting to outline plans for joint development of assessment tools with HE institutions.)
  • How important is it to develop within QTI standards?
  • What policies do universities have towards computer-based assessment and how do they influence the choice of tools?

Follow-up workshops are planned at Heriot-Watt and the Open University based upon the outcomes of this meeting.

