April 23, 2010

Evaluating the “impact” of non-trivial educational innovations – the reality

Here's a text I've just written for various purposes. It's just a first-draft knowledge dump, to become something more substantial when I get a chance. And be warned, there is some shouting...

In the past year I’ve had many conversations with innovation-minded teachers, educational developers, administrators and policy makers ending in something along the lines of:

“we have to objectively demonstrate quantifiable impact on the student experience”.

This has become the mantra amongst decision makers and budget guardians in HE. Good. About time. If only they had any idea about how we can actually achieve this worthy ambition! If only they could accept that it’s not a simple task. If it were that easy, why hasn’t it been done before? Perhaps it’s because innovators don’t like to be tested with reality? No, I don’t think so. Painful experience has taught us that doing innovation in HE is hard; evaluating the true impact of innovations is even harder.

Let’s consider the options seriously.

Firstly, the easiest route to objective quantifiable evidence is, as they say in the design world, to “get it in the workflow” – that is to say, make the innovation unavoidable. Furthermore, we need to make the actions that provide data for our evaluations unavoidable. In the world of HE, that usually means making the activity mandatory and designing formal assessment mechanisms that evaluate its impact on performance. That might not give us information about the softer and more subtle aspects of impact (or genuinely longitudinal effects), but it’s the best chance we’ve got.

Great idea. Let’s do it.

Not. It turns out that opportunities to introduce mandatory non-trivial innovations (i.e. ones that aren’t just small enhancements or fillers of obvious gaps) are scarce. Opportunities for modifying formal assessment and feedback processes are even rarer. There are many reasons for this: overcrowded curricula, pressure on time and resources, the long cycle of the academic year (sometimes there will be only one opportunity a year or less), aversion to risk (especially amongst students). And don’t forget the ethical issue. Students usually get only one chance at anything in a degree programme. You can’t make changes that risk screwing up a student’s chances. I recently heard a report from an Australian university at which a course had been re-engineered. Large changes were made simultaneously (often it seems to make more sense to change many things at once). Beta testing was not undertaken (it’s hard to realistically beta test such things). The results were a disaster for a whole year’s intake of students, and for the careers of several teachers.

Management edicts concerning the need to demonstrate impact will not make this any easier. More often than not, all of us are powerless in these matters.

That leads to the commonly adopted second-best solution: getting voluntary involvement and voluntary feedback. In some cases this might take the form of testing the performance of a sample of students who have voluntarily undertaken the activity. Payments might be necessary. More usually, a prize is on offer. On a few rare occasions, extra academic incentives are available (I once worked on a project that offered students the opportunity to earn an additional 5% on their mark). Great in theory. In an ideal world, all students would engage in development activities that promised to improve the quality of the student experience. In reality, even when additional marks are on offer, participation rates are low. This often means that evaluations are based upon a self-selected group with a tendency to favour the innovation from the outset. Worse still, if we respond to this by offering the innovation more widely (say to target 1% of all students from across a whole university), variations of context introduce extraneous variables that are difficult to identify and account for (ramping up cost and duration).
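To make the self-selection problem concrete, here is a minimal simulation sketch in Python, with entirely made-up numbers: students who already favour the innovation are more likely to volunteer, so an evaluation based on volunteers alone over-states the average impact on the full cohort.

```python
# Hypothetical illustration of self-selection bias: disposition towards the
# innovation drives both the benefit gained from it and the decision to volunteer.
import numpy as np

rng = np.random.default_rng(0)
n_students = 1000

disposition = rng.uniform(-1, 1, n_students)            # attitude to the innovation
true_impact = 2.0 + 3.0 * disposition + rng.normal(0, 1, n_students)  # made-up effect on marks

p_volunteer = 1 / (1 + np.exp(-3 * disposition))        # keener students volunteer more often
volunteers = rng.random(n_students) < p_volunteer

print("Whole-cohort mean impact:  ", round(true_impact.mean(), 2))
print("Volunteer-only mean impact:", round(true_impact[volunteers].mean(), 2))
```

By construction, the volunteer-only estimate comes out higher than the cohort-wide average – which is exactly the bias that a self-selected evaluation group introduces.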

Next we move on to more sophisticated hybrid methodologies. These approaches are more promising, but force us to abandon simplistic notions of quantitative representation of action and impact. In my experience, university managers find it difficult to let go of such simplicities. Worse still, the involvement of any kind of interpretative element in research leads to suspicion of manipulation (however, I argue, to evaluate impact within reasonable limits of time and cost THEY JUST HAVE TO START TRUSTING).

The fundamental concepts here are calibration and correlation. Any research process, action-research included, is really built out of a set of different viewpoints. We might be able to get richly descriptive and personal information from a smaller number of participants. We might be able to act as participant observers (with inescapably limited and prejudiced horizons). We might be able to introduce interventions at certain times and locations. And alongside this focussed information, we can gather wider data for quantitative analysis. Now connect these together into an iterative process of investigation. For example: talk to key individuals to suggest models and conjectures; survey the wider population; look for correlations and connections; modify the models and return to a closer focus to test them with individuals. All the time we should understand how each element in the investigation, each form of evidence, is calibrated along the scale of micro and macro, individual and population – being aware of the limitations that implies for our claims, but also of what it tells us about the changing dynamics of generic characteristics at the level of the population and divergent deviations at the level of the individual: the territory of what the sociologist Bruno Latour calls “actor-network theory”.
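As a rough illustration of the “survey the wider population, look for correlations” step, here is a minimal sketch in Python. The file name and column names are hypothetical, not from any real study; the point is only that this quantitative pass suggests conjectures to carry back into the closer-focus work with individuals.

```python
# Sketch: correlate hypothetical survey measures with a hypothetical outcome.
import pandas as pd
from scipy.stats import pearsonr

# One row per student: time spent with the innovation, a satisfaction rating
# from the wider survey, and the end-of-module mark (all columns hypothetical).
df = pd.read_csv("survey_and_marks.csv")

for predictor in ["hours_using_innovation", "satisfaction_rating"]:
    r, p = pearsonr(df[predictor], df["module_mark"])
    print(f"{predictor}: r = {r:.2f}, p = {p:.3f}")
```

Correlations like these are not impact claims in themselves; they are calibrated at the level of the population, and need to be tested against the individual-level evidence in the next iteration.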

That gets us a long way, despite requiring rather more skill and intelligence than most managers will admit. But we still have the problem of engagement. How can we get a sufficiently large number and range of people involved in this action-research?

Let’s turn to the experts: people whose job it is to dream up, test and implement new solutions to some very difficult problems. For example, innovation in health care processes. The IDEO design company are amongst the best, and are certainly the best documented. To start with, they address the problem of engagement, of getting people to take risks with new ideas, by drawing end users into the design process from the outset. By the time a product reaches the stage of being a live prototype, its users have already got ownership and responsibility.

They call it design thinking. That's often misinterpreted as a technique used only by professional designers. The real trick is to get everyone design thinking (managers, users) – sharing responsibility from the outset. For the same reason, Latour encourages everyone to do ANT. Solutions are therefore, to a great extent, created by the people who will benefit from them (indeed, a compromise between all stakeholders). They will have been part of dreaming them up (along with other options) and making them fit. IDEO call this generative activity “inspiration”. It is complemented by a second and clearly distinguished activity, “ideation” – collectively creating and testing (breaking) prototypes under realistic conditions. Only after much inspiration and ideation will there be a move to implementation – the point at which we are sure about impact.

Combine “design thinking” with more traditional “action research” and we have a method for innovating with meaningful and achievable evaluation of impact.


Comments

  1. John

    Bit confused by your remark “If it were that easy, why hasn’t it been done before?”. My answer would be that it has been done before, many many times.

    As an example, John Hattie’s latest book “Visible Learning” has a very nice meta-meta-analysis of the comparative effectiveness of various educational innovations.

    23 Apr 2010, 23:33

  2. Robert O'Toole

    That was rhetorical.

    This is the important point: “we move on to more sophisticated hybrid methodologies” – “calibration and correlation” – that’s drawing together diverse evidence, either as part of a single multi-dimensional research activity, or as an analysis of a wider programme of educational research. The problem is that, in HE, the evidence that we can draw upon, and the opportunities to generate worthwhile new evidence, are rare. As an HE researcher, I’m frequently disappointed to discover that a promising paper is backed up by a tiny and unrepresentative sample (although given that HE is about diversity and specialisation, being representative is a difficult issue in itself). Trying to build conclusions based upon scraps of evidence collected in diverse situations is rare (with each contributing study, the number of unknown contextual factors increases). Research in schools is much more common, and in many cases relevant to HE. However, getting at an objective understanding of the impact of specific HE innovations is difficult. In the case of the Australian university that I mentioned, the assumption that the changes had failed was based solely upon student satisfaction feedback. We have no idea whether that was simply the result of student conservatism (or other factors). The changes seemed to have been undermined by some members of faculty. How could we be sure? Perhaps under different conditions it would have been a success? Perhaps the changes in fact represent a significantly better way of teaching history?

    What is obvious about the Australian example is that the innovations (and accompanying action-research activity) should have been undertaken in a very different manner. “The real trick is to get everyone design thinking (managers, users) – sharing responsibility from the outset.” To reiterate, my argument is this:

    1. If we want to take evidence of impact seriously, we have to use more sophisticated research methodologies (with much more research activity).
    2. In HE that is particularly difficult and costly.
    3. To increase engagement in action-research, to increase the number of opportunities for evidence, we should use the “design thinking” approach.

    25 Apr 2010, 07:21

  3. Robert O'Toole

    One further point. We need to differentiate between two distinct aims.

    1. Identifying simple facts about the consequences of simple actions. For example, is there evidence that for tasks like “learning to use a new software application” working in pairs is more effective than working solo? Even then, when we try to apply that to real-world situations, there is the potential for variation. Perhaps some software applications are easier to learn solo, whereas others are more suited to collaboration? And perhaps there are important variations in the age and background of the learners? Start thinking about the variables and combinatorics – there’s a rough sketch of this after the list – and things become less clear (at which point you start theorising about cognitive tendencies and constraints).
    2. Understanding the impact of a specific and substantial change in a specific context (the kind of evidence of impact that has now become crucial). For example, introducing a radically redesigned first-year programme for history undergraduates at an Australian university.
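
    On the combinatorics point in (1), here is a rough sketch (the variables and their levels are entirely hypothetical) of how quickly the number of distinct conditions to be tested grows:

    ```python
    # Count the experimental conditions implied by a few plausible variables.
    from itertools import product
    from math import prod

    variables = {
        "pairing": ["solo", "pairs"],
        "software_type": ["office suite", "statistics package", "CAD tool"],
        "learner_age": ["18-21", "22-30", "30+"],
        "prior_experience": ["none", "some", "expert"],
    }

    conditions = list(product(*variables.values()))
    print(len(conditions), "distinct conditions")   # 2 * 3 * 3 * 3 = 54
    assert len(conditions) == prod(len(v) for v in variables.values())
    ```

    Four modest variables already give 54 conditions – far more than any single study of “pairs versus solo” could realistically cover.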

    25 Apr 2010, 07:32


