May 22, 2012

I need a little help from my friends!

Morning Campers!

I'm being very ambitious and have gone for a hefty promotion at school. As part of my interview process I need to do a presentation on how to get our sixth form from good to outstanding. I'd greatly appreciate either being guided to some good reading on this, or being told if you have any views on the matter (especially those of you with an FE focus).

It can be related to curriculum, teaching, learning, structure, anything really.

So I ask: What do you think makes an outstanding sixth form?

Thanks in advance for all your help and keep your fingers crossed for me!


January 08, 2012

Being assessed summatively

The following is merely a collection of thoughts from my recent fight with 15 pages of quotations, summations, assumptions and conclusions, which I managed to wrestle into essay form and hand in today. I do not imply any criticism, but when studying for a masters in assessment, one must assume one's students will assess the assessments they are given!

While Cathie (I think) joked about the fact that we are all being assessed for our summative module in a highly summative way, I have been considering, while completing my assignment, what we are actually being assessed on. My essay was based around my test design, so I spent the last few weeks picking an assessment apart, analysing its reliability, validity and purpose; consequently, I started to do the same for my essay. I did not allow myself to delve into this too much, otherwise criticising the document I was creating would have been far less than productive! So, now that I have submitted it, and its power lies in another's hands, I have started to properly consider its value as an assessment tool.

The main conclusion I came to was that I was unsure as to the purpose of the essay. While it was clearly meant to be a demonstration of my understanding of the topic, having high content validity, it was the construct it was assessing that puzzled me most. For many of you this will not necessarily be so. You were being assessed on your ability to understand, combine, relate, criticise and present your own and others' views and findings on summative assessment, but here is where I fall short. I am dyslexic.

To be honest it never bothers me in my daily life, other than relying on students to correct my spelling or add in words I've skipped over on occasion. But when it comes to a large document, like a 5,000-word essay, I most definitely stumble. The thing which frustrates dyslexics is that we could all so easily prove our understanding and knowledge in other ways, yet the higher up in education we go, the more we are subjected to gargantuan written tasks. In fact, the way in which dyslexic brains work often means we are highly skilled at making connections between ideas and seeing patterns and correlations, so we should be very good at getting high marks in essays in higher education. But the fact of the matter is that by making us put pen to paper (or fingers to keyboard), the pressure of high stakes assessment falls on HOW we write, and not WHAT we write.

It's a common annoyance in school reports and essay feedback: "She clearly understands the topic in class, but needs to learn how to write it for the exam", or "She offers detailed and articulate ideas in seminars, but fails to include these within her writing". Is it wrong that there is still a frustrated seven year old inside screaming "I KNOW I KNOW IT, YOU KNOW I KNOW IT, WHY DO I HAVE TO WRITE IT DOWN???"

Anyway. Rant over. Needless to say I see the value in the way in which we are assessed in higher education, if only for the scope given to students to follow their own interests and strengths. But I know the mark I'm waiting for on my essay will be no reflection (for me) of the quality of my understanding of the module. I know I know it, you know I know it; it's just a case of how well I wrote it down!


November 28, 2011

First Stats teaching

So, today I had my first experience of teaching teachers. I have organised a series of 20-minute sessions, at varied levels of statistics knowledge, for my co-workers. I emailed the entire staff, and out of over 150 I had 2 replies. Great!

So, no one had signed up for today's session on the basics (mean, median, mode, range, frequency tables and bar charts). I had a cunning plan. I had already created the prompt sheet with all the numbers on it, so I printed off 5 copies and took them with me to the staff room when I went for lunch. I left them 'subtly' on the table, and walked away to organise my lunch. When I came back three people had picked up the sheet and were working out the questions, either verbally in pairs, or on their own, pen in hand, tongue poked out in concentration. Result!

Feedback was positive. Some (mostly those working through verbally) said they knew most of it already, but that it was good to have the recap. One saw how useful frequency tables were for analysing her GCSE data, as she could look at the spread and easily work out the mode.

This session was successful in terms of easily introducing some key terms and how they are used. Most teachers had this knowledge already but appreciated the recap.
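For anyone wanting to check the same basics away from a prompt sheet, here is a minimal sketch in Python (the marks below are invented purely for illustration):

```python
from collections import Counter
from statistics import mean, median, mode

scores = [3, 7, 7, 2, 9, 7, 4, 2]  # eight made-up marks

print("mean:", mean(scores))                 # 41 / 8 = 5.125
print("median:", median(scores))             # sorted: 2,2,3,4,7,7,7,9 -> (4+7)/2 = 5.5
print("mode:", mode(scores))                 # 7 occurs three times
print("range:", max(scores) - min(scores))   # 9 - 2 = 7

# A frequency table is just a tally of how often each value occurs
frequency = Counter(scores)
for value in sorted(frequency):
    print(value, frequency[value])
```

The frequency table is what makes the spread and the mode visible at a glance, which is exactly the use my colleague spotted for her GCSE data.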


November 21, 2011

Statistics Session 19th Nov 2011

When we completed the self-assessment of our stats knowledge on 29th October I was very disappointed in myself. I've always prided myself on being good at maths. I like knowing that I have something I don't really need to worry about. With all the talk of reliability of assessments, what I did like about maths at school was that it was either right, wrong, or it would have been right if you hadn't put that decimal point there!

I, for one, really enjoyed Saturday's stats class. Attending a lecture on Ibsen or Aristotle or Stalin or the role of g/God in modern society may have been more appealing to some (maths has no glamour), but I really liked playing with numbers. So many times have I heard people make excuses about numbers. In a lesson the other day I heard a teacher say "I'm no good at maths, I don't know what 2+2 is". Really?! Are you sure?! Because if that's the case shall I call up the TDA and say you have to take your skills tests again?

As teachers we have to be able to do maths. We all pay our bills, budget the shopping, buy Christmas presents for everyone, work out what percentage our students got, wrestle with averaging grades and levels, and count the children in our classes! It's such a shame that we pass on this 'acceptable' disregard for our mathematical knowledge to our students, and thus the cycle perpetuates.

Since Saturday's session I have designed my test, printed off my copies and dished them out to my fellow teachers. Enthused by the maths I can do with it, I've created a simple test, with 6 question types, on periods. Not the most glamorous of subjects either, but it fits in with my scheme of work, and is actually a really useful teaching tool for me and my fellow year 7 form tutors. I'm hoping to get 150 girls to take it (I am at an all girls' school). I'm so enthused I've even designed my spreadsheet already, with all of the calculations in place (even standard deviation) so that I can just input my 1s and 0s when the test results come piling in!
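As a sanity check on spreadsheet formulas like these, the same sums can be sketched in a few lines of Python. The 0/1 responses below are invented, and "facility" (the proportion of pupils answering each question correctly) is my label for the per-question column average, not a term from the post:

```python
from statistics import mean, pstdev

# One row per pupil, one column per question: 1 = correct, 0 = incorrect.
responses = [
    [1, 1, 0, 1, 0, 1],
    [1, 0, 0, 1, 1, 1],
    [0, 1, 1, 1, 0, 0],
    [1, 1, 1, 1, 1, 0],
]

totals = [sum(row) for row in responses]           # each pupil's score out of 6
facility = [mean(col) for col in zip(*responses)]  # proportion correct per question

print("scores:", totals)
print("mean score:", mean(totals))
print("standard deviation:", round(pstdev(totals), 2))
print("facility per question:", facility)
```

Here `pstdev` treats the cohort as the whole population; a spreadsheet's STDEVP function does the same, while STDEV divides by n-1 instead, so it pays to know which one the spreadsheet is using before comparing results.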

Now, trying to find people to teach my newly found stats knowledge to will be another matter.........


October 23, 2011

Comparing exams – Reliability and Validity

An interesting point came to me when reading chapter 4 of Black's Testing: Friend or Foe. Well, more than one point. When looking at the comparability of exams (specifically the inter-decade A level argument), and the ways in which people have tried to explore it (modern students taking older exams), the subject of context is a vital one.

To reduce bias, test designers will attempt to eradicate questions which favour particular genders, cultures and economic backgrounds. Part of this bias is based on the idea that "a question may only be intelligible within certain cultural and social assumptions," making bias partly a matter of fairness in performance and partly fairness in accessibility. A test's language, construct and context cannot be transferred to another era and maintain any reliability, because of the extent of its cultural and social bias.

This was demonstrated to me in my teaching recently. While still a young teacher (only 15 years older than my youngest students), I am continuously amazed at the difference between my students' understanding and knowledge and my own. I am not implying either is greater, just different. Whether it be their shock at my not having a mobile phone until I was 13, or mine at realising that 9/11 happened when some of them were 1, having very little impact on their lives.

Most recently I have been working on a devised project (Drama) with my year 10s based on the Aberfan disaster. Not only did they not comprehend many of the different aspects of life back then, regarding heating, television, media and communication, they had no concept of coal. Ask a 1950s child a scientific, practical or even abstract question about coal and I am sure they could write reams. My 2011 students had no clue. I resorted to taking some into school to show them what it was!

Not only are exams, education and assessment methods and purposes changing, so is the world. How can we compare exams from then and now, when then and now are two very, very different places?


October 16, 2011

All but not only summative

"examinations are clearly summative, but may also contribute to teachers' formative work"

Black, 1998, Testing: Friend or Foe, p30


In our afternoon session (8/10/11) many people didn't understand the point I made regarding my opinion of summative assessment - that no assessment can ever be wholly or only summative, as it will, ultimately, always be put to a secondary purpose.

For every student, the receipt of any summative grade triggers a gut response. This could be one of cultural application (relating to a sense of worth perceived by others, related to the grade), personal achievement (relating to an understanding of the result of personal application, and knowledge of one's ability to apply oneself, gain knowledge or tick the right boxes), personal analysis (relating to a student's reflection on what has/hasn't been learnt/achieved and what does/doesn't need to be practised/improved upon) or personal development (relating to knowledge of which paths the grade opens onto).

This list is not exhaustive, but shows what the person receiving the grade could do with this knowledge. It doesn't even include what a teacher could do with the grade - to improve teaching, revise a curriculum/syllabus, or celebrate personal worth and achievement - nor what a school, exam board or country will do with it. My point being that while a summative judgement HAS been made, that judgement MUST be used for a purpose by at least one party, or the summative judgement is devoid of purpose, and there was no point carrying out the assessment in the first place!

It is in this light that I made my second point - that every assessment is a summative one. Looking again at Black's Testing: Friend or Foe (whilst acknowledging that it is not the most up to date of his works), he states, on the subject of summative assessment, that "The term implies an overview of previous learning." While much of what many consider true summative assessment reflects on a lengthy period of learning in a high stakes environment, by Black's thinking, my asking "Did you understand my explanation of active and passive voice?" can be deemed summative, as it asks a student for an overview of their previous learning, or their perception of that learning.

By taking the view that all summative assessment judgements can be put to any purpose, we can also assume that all assessment judgements can be deemed summative, as they can equally be applied to a non-summative purpose.


Comparability of exams

The following are reflections written while sat on the train back from uni, directly after reading 'Techniques for monitoring the comparability of examination standards', Paul Newton, Aug 2007. They are therefore not overly ordered thoughts, and should be taken as such.

Ratification Method

Newton looks at the processes in place for maintaining standards and comparability between boards within the same subject, this form of cross-board standardisation being useful if only on a year by year basis. While the process gains some validity through its methodology, with a number of senior examiners making judgements against exemplar papers from other boards, it still remains that this judgement is made against 'the standard in their own head'.

While Newton recognises that the examiners are 'those who are actually empowered to set standards in their respective boards', holding that standard 'in their own head' highlights the human and subjective element of assessment. I am drawn back to the philosophy of the morning (8/10/11) with Cathie (an area in which I profess no proficiency), where we discussed the idea of each person being unable to truly know the world, as each experience is individual to that person - implying no ability to make a standardised judgement, if the perception of that judgement is individual. A cynic could also suggest a Chinese whispers effect in the dissemination of this standard down to examiners at ground level.

In addition, Newton recognises the challenge, possibly an impossible one, of applying one known standard to 'an unfamiliar syllabus, paper and mark scheme.' This highlights the incomparability (or challenge to comparability) between papers at the same level, within the same subject, in the same year. Applying any similar method to compare across subject, year, board and tier is highly unrealistic.

Paired comparison method

When I first read Newton's description of this method I instantly related it to how I approach marking coursework and key summative assessments, as I am sure many teachers do. I mark work focusing on successes and improvements before rank ordering students. Often this is initially based on a 'gut reaction' to the quality of the student's performance, before approaching the assessment criteria and boundaries and allocating grades and marks. Here, however, Newton is referring to the comparison of two different examinations, and a judgement of which script is better. Is this a comparison of each exam board's harshness or leniency in regard to summative marking, or an evaluation of which board asks the questions which get the best from pupils? Is it only me who wonders this?

Common test method

This method I find intriguing, although it clearly brings up a variety of issues in terms of initial assessment, variables and reliability. This is another example of assessment at cross purposes - with a test specifically designed to assess one thing, how can we make any judgement on correlation to another assessment topic, format or style? For some reason it made me think of training I had two years ago on Jesson data, used for creating aspirational targets for our KS4 students: the idea that a 'student like you' achieved X, or, more specifically, a spread of possible outcomes, where 5% of 'students like you' achieved V, 15% achieved W, 50% achieved X, 19% achieved Y, and 10% achieved Z. This suggests the median and mode of students' achievement was X, but that they had a roughly 50% chance of actually achieving a different grade altogether!

I realise I digress here, but it highlights the incomparability student to student in terms of predicted grade, let alone other factors outlined for the comparison of results.
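That spread of outcomes can be checked in a few lines. The percentages below are the ones from my recollection of the training, and the grade labels V to Z are placeholders, not real Jesson categories:

```python
# Hypothetical spread of outcomes for "students like you"
spread = {"V": 5, "W": 15, "X": 50, "Y": 19, "Z": 10}

# The modal grade is simply the most common outcome
modal = max(spread, key=spread.get)

# The chance of landing on any grade OTHER than the modal one
other = sum(pct for grade, pct in spread.items() if grade != modal)

print("modal grade:", modal)
print("chance of a different grade:", other, "%")
```

Even with X as both mode and median, roughly half of "students like you" ended up with something else, which is exactly why quoting a single predicted grade hides so much.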

Conclusion

I realise I am reading Newton (2007) AFTER having read the Goldacre (2010) article. In hindsight, however, it seems to highlight the complexities of comparability in terms of the methodologies that existed at the time. These are limited to those widely used then, and do not include specific, discrete research into comparability on a wider scale. It also does not set out with any agenda, or answer, relating to the ideas and questions posed in the Goldacre article.

