All 10 entries tagged Pedagogy


September 06, 2007

We're watching you!

Writers on assessment make a distinction between formative and summative modes (see the definitions below). Too often it is treated as a black-and-white dichotomy. As part of a plea for shades of grey, let me suggest the watched or moderated mode of assessment.

In watched mode, students are allowed a limited or unlimited number of attempts at a given test (or at several variations of it). All their activity is recorded, and they know that their teacher or assessor has access to their results and that the information may be used to form an interim judgement about their commitment, to discuss their progress, and to provide feedback, even though their marks do not influence their progression or their degree.

Other “shades of grey” please.

Definitions of Formative/Summative Assessment

There are many descriptions of these concepts. Here are two I feel happy with in the context of a degree course:
  • Formative assessment is designed to inform development, and to give learners practice at the assessed activity and feedback on their progress; but it does not contribute to the overall assessment.
  • Summative assessment contributes to the final outcome of a student’s degree and may include unseen examinations, essays, dissertations or presentations.

March 03, 2006

Surprising Statistic

Writing about web page http://mathstore.ac.uk/articles/maths-caa-series/feb2006/

Imagine a test with 5 questions, where each question is selected randomly from a bank of 10 related alternatives. That gives 10^5 = 100,000 different possible tests.

Question: How many of these would you need to generate, on average, to have sight of all 50 alternative questions?

Answer: Only 43 (Douglas Adams was one out).
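
If, like me, you find that figure hard to believe, a few lines of Python will check it by brute force. This is just a rough Monte Carlo sketch of the situation described above, not anything taken from the article cited:

    import random

    BANKS, BANK_SIZE, TRIALS = 5, 10, 20_000

    def tests_until_all_seen():
        """Count randomly generated tests until every question in every bank has appeared."""
        seen = [set() for _ in range(BANKS)]
        tests = 0
        while any(len(s) < BANK_SIZE for s in seen):
            tests += 1
            for s in seen:
                s.add(random.randrange(BANK_SIZE))  # one random question drawn from each bank
        return tests

    average = sum(tests_until_all_seen() for _ in range(TRIALS)) / TRIALS
    print(f"average tests needed: {average:.1f}")  # comes out at roughly 43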

This surprising fact should give pause for thought to an author of online exams concerned about cheating. One of my favourite models for driving learning through assessment is to offer students a number of attempts (say 5) at randomly-generated tests in formative mode before they take the one that counts for credit. In the given example, 10 students colluding could suss out all, or most of, the questions stored in the banks before they take their summative test.


February 10, 2006

CAA Twenty Years On

Writing about web page http://mathstore.ac.uk/articles/maths-caa-series/jan2006/

Cliff Beevers' article "IT was twenty years ago today …" covers two decades of work on computer-aided assessment for mathematical disciplines at Heriot-Watt University. The challenges of testing mathematics online are presented as an intelligent and ever-changing compromise between effective pedagogy and the technical limitations of the medium.

The CAA package they have been developing since 1985 currently offers formative testing to 25,000 students in Scottish schools. As a teacher who is himself registered blind, Cliff is well placed to address the accessibility issues; the Heriot-Watt development team took the mathematical tests to youngsters at the Royal Blind School in Edinburgh last year.

He ends with the following advice:

  • Keep the questions fresh through randomisation

  • Seek answers using mathematical expressions wherever possible

  • Provide optional steps for those that need them

  • Report all of this to both student and the teacher alike

  • Keep all of these new assessments accessible so that disabled students are not disadvantaged

  • Extend where possible to include animations, simulations and explorations so that higher order skills can also be tested electronically

February 06, 2006

I got it wrong!

Follow-up to Unresponsive Multiple–Response from Computer-aided assessment for sciences

Christopher Munro, who is working with Michael McCabe on the SEXPOT Project, has kindly pointed out that my interpretation of the following "partial grading" formula (for multiple-response questions in software package B)
\text{grade}=\frac{\#\text{ correct choices}-\#\text{ incorrect choices}}{\#\text{ correct answers}}
was wrong. The reality is even more bizarre.

To explore the implications of this formula, let's lay down some ground rules for multiple-response questions (MRQs) and look at a concrete example.

Rule 1: The statements that make up the choices in an MRQ are either true or false (although Martin Greenhow told me recently that he is exploring a question type for Mathletics that also allows the answer 'undecidable').

Rule 2: For each part of the MRQ, it is equally difficult to decide whether the statement is true or false (unlike competitions in popular magazines, which often make it humorously obvious which statements are false). I usually aim to satisfy this rule in my MRQs, although I freely acknowledge it's not an exact science and anyway, what is 'hard' varies from student to student.

Rule 3: It is equally likely that a statement is true or false (so that, on average for a four-part question, one question in sixteen has all parts false).

In the above formula the "# of correct answers" actually means the number of true statements in the question.

For our concrete example, take a four-part MRQ and a model student who gets all four choices correct. If, say, just one of the four statements is true, the student scores (4–0)/1 = 4 marks. On the other hand, if 3 statements are true and one is false, the student only gets (4–0)/3 = 4/3 marks for being just as clever as before. Maybe I am missing something, but that strikes me as a daft scoring system.

And for the one question in sixteen where all four answers are false, the student scores

\frac{4-0}{0}=\infty

Luckily here, the software under discussion violates Rule 3 by insisting that at least one statement is chosen to be true!
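
Just to spell out the daftness, here is a small sketch in Python of the formula as Christopher explained it, with the number of true statements as the denominator. It is purely my own illustration, not anything taken from package B itself:

    from fractions import Fraction

    def package_b_grade(correct_choices, incorrect_choices, true_statements):
        """Partial grading as explained: (# correct choices - # incorrect choices) / # true statements."""
        return Fraction(correct_choices - incorrect_choices, true_statements)

    # A model student answers all four parts of a four-part MRQ correctly.
    print(package_b_grade(4, 0, 1))      # 1 true statement  -> 4 marks
    print(package_b_grade(4, 0, 3))      # 3 true statements -> 4/3 marks for the same perfect work
    try:
        print(package_b_grade(4, 0, 0))  # 0 true statements -> the "infinite" case
    except ZeroDivisionError:
        print("all statements false -> division by zero")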


January 30, 2006

Sexpot

Follow-up to Fruit Flies Like An Apple … from Computer-aided assessment for sciences

Michael McCabe at the University of Portsmouth is giving some rigorous objective thought to marking schemes for objective tests in his SEXPOT Project (Scoring EXemplars and Principles of Objective Testing). Have a look at his very preliminary findings.

Fruit Flies Like An Apple …

… time flies like an arrow, which brings me neatly on to the theory of social choice functions (better known to most of us as 'voting systems'). Arrow's Theorem states, in a nutshell, that when voters rank three or more options, the only voting system satisfying three very plausible 'fairness criteria' (for example, if every voter prefers X to Y, then X should appear above Y in the final ranking) is a dictatorship, where one person gets to decide for everyone else.

Could there be an analogue of Arrow's Theorem for marking objective tests? Is there a marking system that is fair to all? To answer this, one first needs a list of fairness criteria. Any suggestions for these? I'll start the ball rolling with

Criterion 1: If student X 'knows more' than student Y, then X should score more than Y. Of course, the examiner has to specify what is meant by 'knows more'.


January 23, 2006

Unresponsive Multiple–Response

I have been exploring two very different assessment programs recently and was drawn to compare the way they each handle Multiple-Response Questions (MRQs). To put things in context, consider the following naive example of such a question:

Decide whether the following arithmetical statements are true or false:

\circ\ \ 2+2=5\\ 
\circ\ \ 2+2=22\\ 
\circ\ \ 2+2=8-3\\
\circ\ \ 2+2=0\ (\text{mod}\ 4)

(For non-arithmeticians, this fourth part is the only true statement.)

The two packages impose their own different marking schemes and I am happy with neither. Here are their inflexible offerings:

Package A

This software is a simple quiz builder, very easy to learn and quick to author. (If you have the questions ready, you could put together a 10-question quiz in 15 minutes, even first time round.)

You answer an MRQ like the one above by checking all the buttons of the statements you think are true and leaving unchecked those you think are false — the buttons toggle on and off like conventional check boxes. Full marks are given if and only if every part is answered correctly (with true statements checked and false ones unchecked); otherwise zero is given.

I feel that this all-or-nothing approach is too severe; a student getting three parts out of four right surely deserves some reward.

Package B

In contrast to the previous package, this one is a behemoth, powerful but hard to tame. (Incidentally, I notice that, unlike the "alleluias" and "slaves" that eBay claimed on Google to be auctioning before Christmas, today it doesn't appear to have any behemoths for sale.) Package B's multiple-response offering is part of its MCQ environment — you move from MC to MR simply by ticking the box "allow more than one correct answer". Below each MRQ, a hyperlink "partial grading explained" appears in red; when clicked, the following message pops up in a new window:

\text{grade}=\frac{\#\text{ correct choices}-\#\text{ incorrect choices}}{\#\text{ correct answers}}

What does this formula mean? Are "correct choices" the same as "correct answers"? If not, then perhaps "# of correct answers" means "# of true statements"? And what if the grade is negative? It's not clear. (Incidentally, it would save us all a lot of time if questions really "could calculate their grades".)

Imagine for simplicity that

  • the above MRQ is the sole question on a test
  • a desperate student in a rush launches the test and immediately presses the "submit" (or "grade") button, neither reading the question nor checking any buttons.

If we assume all the buttons start unchecked by default, the desperado gets 3 parts correct and one part wrong, and by the most likely interpretation of the formula, scores two-thirds out of a maximum one point; in other words, 66%! That's certainly 'owt for nowt' and a fat reward for opportunism — hardly a desirable outcome.
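
For the record, that calculation looks like this. It is a tiny sketch under my reading of the formula, in which "# of correct answers" is taken to mean the number of correct choices the student made; package B may well intend something else:

    from fractions import Fraction

    # Blank submission to the four-part example: the three false statements are correctly
    # left unchecked, the one true statement is wrongly left unchecked.
    correct_choices, incorrect_choices = 3, 1

    grade = Fraction(correct_choices - incorrect_choices, correct_choices)
    print(grade, f"({float(grade):.1%})")  # 2/3, i.e. about 66%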

What I would like

In neither package is the author given any choice about the format or the marking scheme for a multiple-response question: you take it or leave it. But just in case the developers are listening, here is the kind of flexibility I would like to see as standard for MRQs (a rough sketch follows the list):

  1. A drop-down menu in a combo box next to each part of the question with the three options: 'true', 'false', 'no attempt', and with all the boxes initially set to 'no attempt'.
  2. The ability to set the marks awarded for (i) a correct answer, (ii) an incorrect answer and (iii) no attempt, in each part of each question (or at least in each question).
  3. An option to display this information to the examinee next to each part of each question.
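
To make points 1 and 2 concrete, here is the rough sketch promised above, in Python. It is a toy illustration of the flexibility I am asking for, not a feature of either package, and the mark values are arbitrary defaults chosen only to show the idea:

    from dataclasses import dataclass

    @dataclass
    class PartMarks:
        correct: float = 1.0     # marks for a correct 'true'/'false' judgement
        incorrect: float = -0.5  # penalty for an incorrect judgement
        no_attempt: float = 0.0  # marks for leaving a part at 'no attempt'

    def score_mrq(truth, responses, marks=PartMarks()):
        """Score one MRQ. truth: list of booleans; responses: 'true', 'false' or 'no attempt'."""
        total = 0.0
        for is_true, response in zip(truth, responses):
            if response == "no attempt":
                total += marks.no_attempt
            elif (response == "true") == is_true:
                total += marks.correct
            else:
                total += marks.incorrect
        return total

    # The arithmetic example above: only the fourth statement is true.
    print(score_mrq([False, False, False, True], ["false", "false", "no attempt", "true"]))  # 3.0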

Other Scenarios?

I'd like to hear from other people about their preferred MRQ frameworks.


January 06, 2006

Ads Build Up Value

This entry's title is an anagram of David Paul Ausubel, the educational psychologist who made serious play of the fact that you can't teach someone effectively until you know what they already know (and, by implication, don't know). In Educational Psychology: A Cognitive View (1968) he wrote: "The most important single factor influencing learning is what the learner already knows. Ascertain this and teach him accordingly." His dictum is often quoted but rarely acted upon, and I see no harm in running another advertisement to build up its value.

When students submit to any kind of assessment, they reveal information about what they know and don't know. This is often valuable information and usually it goes to waste (think of all those exams where the only feedback is a mystery number between 1 and 100). Does computer-aided assessment (CAA) offer any remedies? Can it find out "what the learner knows" and then act accordingly? I believe the answer is "yes" and will try to convince you of this with a couple of simple examples.

If a factual(ish) question (eg Which philosopher might have said "Blogo ergo sum"?) is wrongly answered ("Renée Zellweger" perhaps), the same question can be repeated in a later test, with a friendly admonition in the feedback for a second wrong answer. Persistent weaknesses over a series of formative tests can be routinely reported back to the student.

For conceptual questions, CAA can do even better. We will take a simple example to illustrate how computer assessment can identify a student's problem and try to deal with it. Consider the arithmetic question:

Add one third to one half and express your answer as a fraction (i.e. a number of the form m/n).

There are many reasons for getting the wrong answer: a simple error of calculation (unlikely here though), or a failure to understand

(i) the nature of fractions (both as numbers and processes — does the symbol 1/2 mean the number 'one half' or the process of dividing by 2?) and
(ii) the rules for calculating with them.

Let's suppose a student submits the following wrong answer:

 \frac{1}{2} + \frac{1}{3} = \frac{2}{5}

It's a fair guess that they have used the following wrong rule (mal-rule) of adding the numerators and the denominators:

 \frac{a}{x} + \frac{b}{y} = \frac{a+b}{x+y}

We could reinforce this guess with another example, say two-thirds plus a fifth. If they come up with three-eighths, we can be pretty certain we've sussed out the mal-rule they're using.
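
This kind of guess can be automated. The following sketch (my own illustration in Python, not a description of any particular package) compares a student's answer with the prediction of the add-tops-and-bottoms mal-rule:

    from fractions import Fraction

    def mal_rule_sum(a, x, b, y):
        """The wrong rule: a/x + b/y = (a + b)/(x + y)."""
        return Fraction(a + b, x + y)

    def diagnose(a, x, b, y, student_answer):
        """Guess what the student did when asked for a/x + b/y."""
        if student_answer == Fraction(a, x) + Fraction(b, y):
            return "correct"
        if student_answer == mal_rule_sum(a, x, b, y):
            return "probably using the add-tops-and-bottoms mal-rule"
        return "wrong, but not via this mal-rule"

    print(diagnose(1, 2, 1, 3, Fraction(2, 5)))  # 1/2 + 1/3 answered as 2/5
    print(diagnose(2, 3, 1, 5, Fraction(3, 8)))  # the confirming question: 2/3 + 1/5 as 3/8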

Next we start to generate doubt by getting them to apply their rule to one half plus one half (giving two quarters, which one hopes they will know equals one half). Something funny going on here! A fluke exception? How about this then:

1 + \frac{1}{3} =  \frac{1}{1} + \frac{1}{3} = \frac{1+1}{1+3} =  \frac{2}{4} =   \frac{1}{2}

Having thoroughly undermined their faith in the mal-rule, we eventually return to the drawing board, asking the student how many sixths make a half (three) and how many sixths make a third (two). If the penny drops that three of a kind plus two of a kind is five of a kind, the following equations take on meaning:

 \frac{1}{2} + \frac{1}{3} = \frac{3}{6} + \frac{2}{6} =  \frac{3 +2}{6}= \frac{5}{6}

More examples will be needed to elicit a thorough understanding of the general rule for adding fractions (including unlearning the mal-rule) and yet more practice for the student to feel comfortable applying the rule as a fast and accurate reflex.

What I have just described is only one possible CAA response to just one of the many misconceptions or mal-rules that may be revealed in the simple exercise of adding a half to a third. A student may be guessing, or may have mislearnt the rule at an earlier stage of education. An effective face-to-face (human) tutor will patiently probe a student's mistakes, identify the root causes of their failures, and then painstakingly rebuild their knowledge and understanding on a sound foundation. Intelligent computer assessment can be programmed to do the same, and it is nothing if not patient and painstaking.


November 30, 2005

On Seeing the First Snow…

Follow-up to A Dish Best Served Hot from Computer-aided assessment for sciences

… of his lifetime, Cameron asked: "Is it because of the snowman?". In Cameron's nice logic the dustman creates dust, the chairman makes chairs, and (for misogynists) woman brings wo(e)s — not sure about the hangman though.

November 28, 2005

A Dish Best Served Hot

As my source of inspiration pointed out while administering dinner to our 2-year old Cameron last night, the blog title applies as well to "student feedback" as to "revenge". Effective formative assessment not only provides feedback to the assessor, but more importantly, gives fresh food for thought and enlightenment to the one being assessed.

A week is a long time (as Harold Wilson famously said of life in politics) to wait to find out where you went wrong; even a day later, your brain has probably gone cold. But a computer can tell you within microseconds.

So here's a situation where computer-aided assessment (CAA) can have a clear edge over traditional marking; I say "can" because, to gain the edge, you must take the trouble to design the CAA questions intelligently and to PROVIDE THE APPROPRIATE FEEDBACK (always assuming your software allows it).

I have been trawling through a lot of CAA software lately and have been disappointed by the perfunctory nature of the feedback provided in many samples that put the software through its paces (for instance, a single tick or a cross in response to a set of answers to a 6-part question). But, of course, there are beacons of good practice too. Here are two that caught my attention:

FAST (Formative Assessment in Science Teaching) -- A collaboration between the Open University (OU) and Sheffield Hallam University (SHU) aiming to improve student engagement and learning through formative assessment following these principles. The project has a science focus (Biosciences, Chemistry, Physics) and is funded through the HEFCE Fund for the Development of Teaching and Learning (FDTL4). There are 30 development projects: 15 at the OU and SHU, and 15 more at 13 other HE institutions.

Mathletics (near the bottom of this link's page) — Among the many features of Martin Greenhow's approach to online assessment that commend themselves is the attention given to the pedagogy of question setting and to providing detailed feedback. Using his CAA software for modules at Brunel, Martin was surprised to discover that some students use the feedback as their main learning tool. Martin uses the idea of 'mal-rules' (reflecting common conceptual errors) to generate plausible distractors in multiple-choice or multiple-response questions, and more generally uses students' answers to make informed guesses at their misconceptions and provide targeted feedback.
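
By way of illustration (my own toy example in Python, not Mathletics code), a handful of mal-rules can turn a single fraction-addition question into a multiple-choice question whose distractors each say something about the student who picks them:

    from fractions import Fraction

    def fraction_addition_options(a, x, b, y):
        """Options for a/x + b/y; each distractor comes from a known mal-rule."""
        return {
            "correct": Fraction(a, x) + Fraction(b, y),
            "added tops and bottoms": Fraction(a + b, x + y),
            "added tops, kept the first bottom": Fraction(a + b, x),
            "multiplied instead of adding": Fraction(a, x) * Fraction(b, y),
        }

    for diagnosis, value in fraction_addition_options(1, 2, 1, 3).items():
        print(f"{value}: {diagnosis}")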

Please extend this list with other examples of good assessment pedagogy.

Unfortunately the inspiration dried up when I was prevailed upon to take over the feeding of the aforesaid Cameron, slotting in spoonfuls as he wielded a sticky mouse to direct Adiboo's onscreen antics. Just as well this is not a blog on good parenting.

