Writing about web page http://www2.warwick.ac.uk/fac/soc/wie/teaching/masters/assessment/maeablogs/
(copied from my student blog… the picture of the graph is visible there, along with the formatting!)
Exercise from the MA in Educational Assessment, Summative Assessment module, session 1
Responses to Goldacre’s ‘Bad Science’ article in The Guardian (21 Aug 2010): ‘Mystery of the A-level’
Initial reaction to the A-level results
Firstly – a confession. Saturday morning’s session was not the first occasion on which I had considered the A-level results issue. Watching the media coverage, my initial reaction was two-fold – derisive and subsequently a tad peevish. Most of us have done A-levels at some time in the past. I sat mine in 1980 at a very well-regarded comprehensive (it was once listed as one of the top state schools in the country), with (at the time) supposedly high-achieving pupils. We would have been dumbfounded at anyone achieving 4–5 A grades at A-level in a year; that would have represented academic ability to a staggering degree, worthy of a good deal of awe and even veneration! I think it would have been nigh on physically impossible to have covered the necessary material in the time given, for a start. I was intrigued to see how results had changed over time, and came across a graph from the University of Buckingham Centre for Education and Employment Research.
I found it on the BBC website in their response to the ‘issue’.
http://www.bbc.co.uk/news/education-11012369
There are more of them here:
http://www.bbc.co.uk/news/education-11011564
More of that later.
Goldacre’s article
So does Goldacre have a particular argument? He seems to be saying that there is a definite issue in the public consciousness, and that reactions to the high percentage of A grades at A-level are polarised between ‘the exams are getting easier’ and ‘no, they are not’. Goldacre asks, ‘how do you know?’ and proceeds to outline the two positions and present some evidence.
Goldacre’s evidence is:

Students getting cleverer:
- IQ scores have to be adjusted for the Flynn effect (there is a sketch after this list of how that renorming hides rising raw scores).

Exams getting easier / students’ performance not improving:
- Royal Society of Chemistry research (the Five Decade Challenge, 2008): students’ scores on O-level papers from the 1960s onwards showed a steady increase over time.
- TDA scores have declined and then flattened out, indicating no increase in performance ability over time, in spite of increased A-level scores.
- A steady decline in undergraduates’ basic maths skills, reported by 60 university departments of maths, physics and engineering.
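On the Flynn effect point: a minimal sketch of why the adjustment is needed, with entirely invented scores (this is my illustration, not Goldacre’s data). IQ is normed so that the current cohort averages 100, so a population-wide rise in raw test scores is absorbed every time the test is re-standardised – which is exactly why rising ability does not show up as rising IQ.

```python
# Invented numbers: IQ is normed so the *current* cohort averages 100
# (sd 15), so a population-wide gain in raw scores disappears each time
# the test is re-standardised.
from statistics import mean, stdev

def normed_iq(raw_score, cohort_raw_scores):
    """Convert a raw test score to an IQ normed on the given cohort."""
    mu = mean(cohort_raw_scores)
    sigma = stdev(cohort_raw_scores)
    return 100 + 15 * (raw_score - mu) / sigma

cohort_1980 = [38, 42, 45, 47, 50, 52, 55, 58, 60, 63]  # hypothetical raw scores
cohort_2010 = [s + 8 for s in cohort_1980]              # uniform Flynn-style gain

# The average member of each cohort scores IQ 100 on their own era's norms,
# even though the 2010 cohort's raw performance is higher across the board.
print(normed_iq(51, cohort_1980))  # 100.0 against 1980 norms
print(normed_iq(59, cohort_2010))  # 100.0 against 2010 norms
```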
Goldacre presents some arguments which may support either the notion that exams are easier or the notion that pupils may simply be performing better for other reasons, such as improved teaching methods, differences in subject choices, or changes in exam focus. He pleads for research to clarify these issues.
What evidence would be appropriate – what research required?
Let’s assume, for the purpose of considering how we might address his plea, that Goldacre’s presentation of it represents the actual issue. What would constitute evidence for either view, and how could it be obtained? Given that the whole area of reliability in assessment is fraught with problems, it’s difficult to see exactly how we could reasonably compare students’ performances across the decades in any convincing way. My own cohort of 1980 students are no longer the same people we were – sitting a current A-level in our subject (can one still do Zoology?) wouldn’t necessarily show how able we would have been to sit the same paper back in the day. Similarly, the project to investigate how contemporary students coped with O-level papers of previous decades was not able to address all the variables that might account for the differences in results (though how much more interesting it would have been if the modern pupils had performed better on the old papers!). Is it possible that certain subjects (maths, for example) have remained sufficiently static over time, and the requirements for national qualifications so consistent, that papers from 1960 onwards could be used in a widespread test of performance with maths students, with levels awarded according to the relevant system of the time? (On a personal note, there must have been some assumed stability when I was revising for my O-levels at the end of the 70s; our practice papers reached back into the 1950s and were still considered relevant!)
Maybe, but ‘the relevant system of the time’ presents a problem. In his response letter to the Guardian, KJ Eames stated that there had been a move from ‘norm referencing’ to ‘criterion referencing’ in awarding levels. So in analysing changes over time, do we apply one method, the other, or both when grading in our investigation? Even the methods for monitoring comparability year on year have changed. For instance, Paul Newton writes (August 2007) that examining board researchers largely stopped using the common test method to monitor comparability in the 1980s, because of the likelihood of plausible challenge on the grounds of uncontrolled variables. There is no comparability, therefore, between the papers of the millennium and the papers of the 70s.
Is research for evidence to support or to refute either position in this polarised debate even appropriate, however difficult? Possibly, in a general sense – there is some desire to know how the nation’s students are performing year on year – but probably not in this case specifically, because there is no like comparison. The A-level of 2010 is fundamentally not the same exam or qualification I was entered for in 1980. Even the way in which students study for it is different – the rise of coursework and the interference of the AS level are two examples.
That said, there clearly was a change in the 1980s – did we make a big move from norm-referenced grading to criterion-referenced? That could certainly account for the shape of the graph – stasis for two decades followed by a steady rise. If we had retained a norm-referenced system, would we not have kept our A grades at roughly 8% forever?
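To make that concrete, here is a toy sketch with invented scores (nothing from the actual exam boards): under norm-referencing the proportion of A grades is fixed by construction, while under criterion-referencing it drifts upwards as soon as cohorts score higher – whether through easier papers, better teaching, or sharper exam preparation.

```python
# Toy comparison with invented scores: norm-referencing pins the A-grade
# rate at a fixed share of the cohort; criterion-referencing lets the
# rate rise whenever the cohort's scores rise.
def a_rate_norm_referenced(scores, top_fraction=0.08):
    """Award an A to (roughly) the top 8% of the cohort, whatever the marks."""
    cutoff = sorted(scores, reverse=True)[max(1, round(len(scores) * top_fraction)) - 1]
    return sum(s >= cutoff for s in scores) / len(scores)

def a_rate_criterion_referenced(scores, threshold=80):
    """Award an A to everyone who clears a fixed mark, however many that is."""
    return sum(s >= threshold for s in scores) / len(scores)

cohort_old = [55, 60, 62, 65, 68, 70, 72, 75, 78, 82, 85, 88]
cohort_new = [s + 10 for s in cohort_old]  # everyone scores 10 marks higher

for cohort in (cohort_old, cohort_new):
    print(round(a_rate_norm_referenced(cohort), 2),       # ~0.08 both times
          round(a_rate_criterion_referenced(cohort), 2))  # 0.25 -> 0.58
```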
How do I feel about the article and the issues?
I think Goldacre is presenting his perceptions of public opinion, which may or may not be accurate. Perhaps someone should actually investigate whether the public really does hold those opinions. I don’t think the actual issue, however, is about the ease of the exams or the relative intelligence or academic ability of the students over time (I’m sceptical about the degree to which this can change in a human population of this size over a few decades, anyway). However, the statistics clearly show it has become easier to achieve an A grade at A-level, for whatever reason. This is what Robert Coe (April 2007) had to say about the 2006 results:
A level grades achieved in 2006 certainly do correspond to a lower level of general academic ability than the same grades would have done in previous years. Whether or not they are better taught makes no difference to this interpretation; the same grade corresponds to a lower level of general ability.
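To make the logic of that claim concrete, here is a sketch of the kind of comparison that lies behind it (the numbers are invented and the real ALIS work is far larger): hold a measure of general ability fixed and ask what grade students at that level typically achieved in each year.

```python
# Invented illustration of a Coe-style comparison: hold general ability
# fixed (e.g. the same TDA-type test score) and compare the A-level grades
# that ability earned in different years. If the grade rises while ability
# stays constant, the same grade now marks a lower level of ability.
from collections import defaultdict
from statistics import mean

# (year, ability_score, grade_points) on a simple points scale: A=10, B=8, C=6
records = [
    (1996, 60, 6), (1996, 60, 8), (1996, 60, 6),
    (2006, 60, 8), (2006, 60, 10), (2006, 60, 8),
]

grades_by_year = defaultdict(list)
for year, ability, grade_points in records:
    if ability == 60:                      # same ability level in both years
        grades_by_year[year].append(grade_points)

for year in sorted(grades_by_year):
    print(year, round(mean(grades_by_year[year]), 1))
# 1996 -> 6.7 (between C and B); 2006 -> 8.7 (between B and A)
```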
If that is true, is it important? It is in this respect: there is an impact on the perceived value of the qualification among the public, the students themselves, further education establishments, employers and previous students. The question arises: what is the value of an A-level? If 25% of students achieve an A, then how do institutions reward those who can demonstrate higher achievement? With an A*? Will there be a continuous devaluation process – an A**? If the A-level of today is not the same as that of 1980, then why is it still called an A-level, since the name implies that it has retained certain characteristics over time? What is it even for, as a qualification? Maybe it is the vestige of something which no longer serves its originally intended purpose.
Refs:
Coe, Robert (2007) Changes in standards at GCSE and A-Level: Evidence from ALIS and YELLIS. Report for the ONS. CEM Centre, Durham University.
Newton, Paul (2007) Techniques for monitoring the comparability of examination standards. QCA.