April 23, 2010

Evaluating the “impact” of non-trivial educational innovations – the reality

Here's a text I've just written for various purposes. It's just a first-draft knowledge dump, to become something more substantial when I get a chance. And be warned, there is some shouting...

In the past year I’ve had many conversations with innovation-minded teachers, educational developers, administrators and policy makers ending in something along the lines of:

“we have to objectively demonstrate quantifiable impact on the student experience”.

This has become the mantra amongst decision makers and budget guardians in HE. Good. About time. If only they had any idea of how we can actually achieve this worthy ambition! If only they could accept that it’s not a simple task. If it were that easy, why hasn’t it been done before? Perhaps it’s because innovators don’t like to be tested with reality? No, I don’t think so. Painful experience has taught us that doing innovation in HE is hard; evaluating the true impact of innovations is even harder.

Let’s give the options some serious consideration.

Firstly, the easiest route to objective quantifiable evidence is, as they say in the design world, to “get it in the workflow” – that is to say, make the innovation unavoidable. Furthermore, we need to make the actions that provide data for our evaluations unavoidable too. In the world of HE, that usually means making the activity mandatory and designing formal assessment mechanisms that evaluate its impact on performance. That might not give us information about the softer and more subtle aspects of impact (or the genuinely longitudinal), but it’s the best chance we’ve got.

Great idea. Let’s do it.

Not. It turns out that opportunities to introduce mandatory non-trivial innovations (i.e. ones that aren’t just small enhancements or fillers of obvious gaps) are scarce. Opportunities for modifying formal assessment and feedback processes are rarer still. There are many reasons for this: an overcrowded curriculum, pressure on time and resources, the long cycle of the academic year (sometimes there will be only one opportunity a year, or less), aversion to risk (especially amongst students). And don’t forget the ethical issue. Students usually only get one chance at anything in a degree programme. You can’t make changes that risk screwing up a student’s chances. I recently heard a report from an Australian university at which a course had been re-engineered. Large changes were made simultaneously (it often seems to make more sense to change many things at once). Beta testing was not undertaken (it’s hard to realistically beta test such things). The results were a disaster for a whole year’s intake of students, along with the careers of several teachers.

Management edicts concerning the need to demonstrate impact will not make this any easier. Most often, we are all impotent in these matters.

That leads to the commonly adopted second-best solution: getting voluntary involvement and voluntary feedback. In some cases this might take the form of testing the performance of a sample of students who have voluntarily undertaken the activity. Payments might be necessary. More usually, a prize is on offer. On a few rare occasions, extra academic incentives are available (I once worked on a project that offered students the chance to gain an additional 5% on their mark). Great in theory. In an ideal world, all students would engage in development activities that promised to improve the quality of the student experience. In reality, even when additional marks are on offer, participation rates are low. This often means that evaluations are based upon a self-selected group with a tendency to favour the innovation from the outset. Worse still, if we respond to this by offering the innovation more widely (say to target 1% of all students from across a whole university), variations in context introduce extraneous variables that are difficult to identify and account for (ramping up cost and duration).
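To see why the self-selection problem bites, here is a crude back-of-envelope simulation (in keeping with the first-draft spirit of this post: the population size, the “enthusiasm” trait, the volunteering rule and the benefit numbers are all invented for illustration, not drawn from any real evaluation). If the students who volunteer are also the students most disposed to benefit, then the volunteers’ average makes the innovation look considerably better than it would across the whole cohort:

# Toy simulation (all numbers invented): how a self-selected sample
# can inflate the apparent impact of an innovation.
import random
import statistics

random.seed(1)

N = 10_000  # hypothetical student population

benefits = []            # measured benefit for every student
volunteer_benefits = []  # measured benefit for those who volunteered

for _ in range(N):
    enthusiasm = random.random()  # 0 = sceptical, 1 = keen
    # Crude assumption: the benefit a student gets rises with enthusiasm.
    benefit = 2.0 + 6.0 * enthusiasm + random.gauss(0, 1.5)
    benefits.append(benefit)
    # Crude assumption: keener students are far more likely to volunteer.
    if random.random() < 0.05 + 0.4 * enthusiasm:
        volunteer_benefits.append(benefit)

print(f"Mean benefit, whole cohort:    {statistics.mean(benefits):.2f}")
print(f"Mean benefit, volunteers only: {statistics.mean(volunteer_benefits):.2f}")
print(f"Participation rate:            {len(volunteer_benefits) / N:.1%}")

The particular numbers don’t matter; the point is that the gap between the two averages is produced entirely by who chose to take part, not by anything the innovation did.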

Next we move on to more sophisticated hybrid methodologies. These approaches are more promising, but force us to abandon simplistic notions of quantitative representation of action and impact. In my experience, university managers find it difficult to let go of such simplicities. Worse still, the involvement of any kind of interpretative element in research leads to suspicion of manipulation (however, I argue, to evaluate impact within reasonable limits of time and cost THEY JUST HAVE TO START TRUSTING).

The fundamental concepts here are calibration and correlation. Any research process, action research included, is really built out of a set of different viewpoints. We might be able to get richly descriptive and personal information from a smaller number of participants. We might be able to act as participant observers (with inescapably limited and prejudiced horizons). We might be able to introduce interventions at certain times and locations. And alongside this focussed information, we can gather wider data for quantitative analysis. Now connect these together into an iterative process of investigation. For example: talk to key individuals to suggest models and conjectures; survey the wider population; look for correlations and connections; modify the models and return to the closer focus to test them with individuals. All the time we need to understand how each element in the investigation, each form of evidence, is calibrated along the scale of micro and macro, individual and population – being aware of the limitations that implies for our claims, but also of what it tells us about the changing dynamics of generic characteristics at the level of the population and divergent deviations at the level of the individual: what the sociologist Bruno Latour calls “actor-network theory”.
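To make the “look for correlations” step a little more concrete, here is a minimal sketch of one turn of that loop (everything in it is hypothetical: the survey variables, the scores and the 0.5 threshold are placeholders invented for illustration). It takes conjectures suggested by interviews, tests them against wider survey data, and flags which ones are worth taking back to individuals:

from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation, no external libraries needed."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical survey responses, one value per student (1-5 scales).
survey = {
    "weekly_use":       [1, 4, 2, 5, 3, 4, 1, 5, 2, 3],
    "reported_benefit": [2, 4, 2, 5, 3, 5, 1, 4, 3, 3],
    "hours_of_study":   [3, 2, 4, 3, 5, 2, 4, 3, 2, 5],
}

# Conjectures suggested by the interviews, phrased as pairs of variables
# we expect to move together.
conjectures = [
    ("weekly_use", "reported_benefit"),
    ("weekly_use", "hours_of_study"),
]

for a, b in conjectures:
    r = pearson(survey[a], survey[b])
    verdict = ("take back to individuals for closer scrutiny"
               if abs(r) > 0.5 else "weak: revise the model")
    print(f"{a} vs {b}: r = {r:+.2f}  ({verdict})")

Anything that survives the population-level check goes back into the next round of conversations with individuals; anything that doesn’t sends us back to revise the model. The calibration is in knowing that a correlation across a pile of survey responses and a story told by one student are evidence of very different kinds.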

That gets us a long way, despite requiring rather more skill and intelligence than most managers will admit. But we still have the problem of engagement. How can we get a sufficiently large number and range of people involved in this action-research?

Let’s turn to the experts: people whose job it is to dream up, test and implement new solutions to some very difficult problems. For example, innovation in health care processes. The IDEO design company are amongst the best, and are certainly the most documented. To start with, they address the problem of engagement, of getting people to take risks with new ideas, by drawing end users into the design process from the outset. By the time a product reaches the stage of being a live prototype, its users have already taken on ownership and responsibility.

They call it design thinking. That’s often misinterpreted as a technique used by professional designers. The real trick is to get everyone design thinking (managers, users) – sharing responsibility from the outset. For the same reason, Latour encourages everyone to do ANT. Solutions are therefore, to a great extent, created by the people who will benefit from them (indeed, a compromise between all stakeholders). They will have been part of dreaming them up (along with other options) and making them fit. IDEO call this generative activity “inspiration”. It is complemented by a second and clearly distinguished activity, “ideation” – collectively creating and testing (breaking) prototypes under realistic conditions. Only after much inspiration and ideation will there be a move to implementation – the point at which we are sure about impact.

Combine “design thinking” with a more traditional “action research” and we are on to a method for innovating with meaningful and achievable evaluation of impact.


Decisions that aren't designs aren't decisions

...that could be O'Toole's first law of education management!

If a so-called 'management decision' makes no explicit attempt to explain how it will be achieved, what the consequences are and how they will be apprehended, it's an empty gesture or, worse, an edict. Decisions design action. Well-designed decisions design effective action.