June 28, 2010

Impact in the Context of the REF

Writing about web page http://www.kcl.ac.uk/iss/support/ref/june2010

On Friday last week I attended an event at King's College London, all about REF measurement of research impact.

General plans

David Sweeney spoke about the need to justify public expenditure, arguing that this is worth doing because we believe in research, and in "new knowledge deployed for the benefit of society". Selective funding will maximise the contribution of research, so we need to consider how we should select...

David recommended that anyone interested in what the REF will be measuring should start with the document released by HEFCE just before the election... I'm not sure exactly which document that was! In any case, HEFCE are still waiting for the results of their impact pilot, and in particular the feedback from the peer review panels, which won't be ready until some time in July.

David's "Next steps" slide listed:

  • Completing the impact pilot exercise
  • Setting up the expert panels
  • Equalities and diversity steering group
  • Development of the submissions system - to be overseen by an expert steering group
  • Establishing formal institutional REF contacts.

Pilot process

Graeme Rosenberg spoke next, about the progress of the impact pilot exercise. Graeme defined "impact" in the REF context as including "economic, environmental, social, health, cultural, international, quality of life & others". They are definitely NOT looking for academic impact.

This impact must arise specifically from research work, rather than from the broader work that academics engage in, and it should be about actual impact that has already occurred rather than projected or potential impacts. RCUK might be concerned with future impacts, through the grant application process, but the REF is concerned with historical impact.

On the day, there was some confusion as to whether, when writing up case studies, it is best to concentrate on the impact of an individual researcher or on a portfolio of research which might actually belong to a group. Graeme's point was simply that those who submit to the REF ought to consider carefully which is appropriate for their context. Graeme also made the point that the REF is about assessment rather than measurement of research impact.

The pilot looked at 5 different disciplines. Each institution which took part was asked to provide an "impact statement" covering a broad range of activities, plus case studies with a word limit of 1,500 words each: 1 case study per 10 FTE staff, with a minimum of 2 case studies for the smallest departments.

Research included in the case studies had to have taken place at the submitting institution. This has implications because researchers move on: their work would count towards their previous institution's submission, if that institution had kept a record of their research and its impact, or, failing that, if it was at least successful in reaching former staff to ask for their contribution to a case study. (NB could retired staff's previous work count in this context?) Narrative evidence had to be supported by indicators and references to external sources, eg links to websites or names of people (external to the institution) to contact for a reference.

The pilot's panels were made up of both researchers and "users", and each case study was looked at by at least 4 members of its panel, preferably two of each type.

Theory of research measurement/assessment

The next speaker was Claire Donovan, who spoke about the Australian experience of backing away from measurement/assessment (Research Quality Framework) with a change of government. Claire introduced some interesting concepts, speaking about a "relevance gap" between the research that society most needs and that which academics produce, and about "Triple bottom line accounting" which takes into account social and environmental consequences, as well as the economic ones. Claire also spoke about "Technometrics" and "Sociometrics" and she amused the audience by saying that the latter had been said to make alchemy look good!

4 definitions which were proposed for the Australian RQF (which never happened) remain in the current ERA, and they look very useful (to me) for anyone considering what the impact of their research might be or might have been, and looking to gather evidence of it:

1) Social Benefit: Improving quality of life, stimulating new approaches to social issues; changes in community attitudes and influence upon developments or questions in society at large; informed public debate and improved policy-making; enhancing the knowledge and understanding of the nation; improved equity; and improvements in health, safety and security.

2) Economic Benefit: Improved productivity; adding to economic growth and wealth creation; enhancing the skills base; increased employment; reduced costs; increased innovation capability and global competitiveness; improvements in service delivery; and unquantified economic returns resulting from social and public policy adjustments.

3) Environmental Benefit: Improvements in environment and lifestyle; reduced waste and pollution; improved management of natural resources; reduced consumption of fossil fuels; uptake of recycling techniques; a reduced environmental risk; preservation initiatives; conservation of biodiversity; enhancement of ecosystem services; improved plant and animal varieties; and adaptation to climate change.

4) Cultural Benefit: Supporting greater understanding of where we have come from, and who and what we are as a nation and society; understanding how we relate to other societies and cultures; stimulating creativity within the community; contributing to cultural preservation and enrichment; and bringing new ideas and new modes of experience to the nation.

Claire's main point was a plea for complexity in whatever measurement/assessment of research takes place.

Putting together a University's return

The next three speakers were from Universities which had taken part in the pilot, and they spoke about their approaches to submitting their impact statements and case studies, and the issues this identified. The afternoon breakout sessions covered similar ground, so they are also described here: I attended the ones on Clinical Medicine and on Social Work and Social Policy.

Approaches:

  • HoD identifies who to approach and ask to submit a case study: ask for twice as many as needed and then choose the best.
  • Academics write the materials themselves, but these are then re-drafted by the Uni's "research support" team.
  • Involve the press office in the writing of case studies.
  • Set up a writing team to create the case studies, steered by a group of senior academics who provided stories and contacts for the writing team to interview, and supported by a reading group of external people.
  • Training of academic staff who write the materials.
  • "Impact champions" from amongst the academic community to encourage others in the right direction.
  • “Ebullient re-writes in marketing speak don’t go down well with academics”!

Issues:

  • Too much focus on the problem that the research was intended to address, so that the actual benefit/impact is not properly described.
  • Repeated iterations were needed for almost all case studies, in both speakers' experiences.
  • Impact statements were harder to write than case studies and there was much overlap in the way the documents were written up.
  • The process creates a tendency to focus on impacts that can be measured, rather than on those which matter most.
  • Attribution and provenance of impact are time-consuming to demonstrate (the gathering of evidence!).
  • Confidentiality of materials submitted to HEFCE cannot be guaranteed (because of FOI), so impacts in the commercial sector, where industrial sensitivities were involved, could not be included.
  • Belief that Universities don't create impact, they contribute to it.
  • Difference between activity and impact: evidence tends to be about activity, rather than about impact. Is this "interim impact" ie not prospective but not yet historical either?
  • Patchy provision of evidence by the academics: eg the dates and times when they sat on government research committees, or the contribution their research made to a white paper or green paper, are not properly referenced.
  • Negative findings can also have impact: how/whether to write these up? (NB none admitted to it on the day!).
  • Sometimes the research might have had impact (or have been expected to), but political (or other) reasons got in the way.
  • If you're only writing 1000 words then you have to pitch it in relatively simple terms, and some of the panel are "users" rather than academics, so the academic discipline's writing style is not always appropriate... but this is a balance, because some panel members are academics as well, and even the "users" are not an entirely lay audience.

Some found that subject knowledge was vital in supporting a department in writing case studies, whilst others felt that a lay person, or someone from a different discipline, added a valuable perspective. Most case study writing seems to employ social scientists' skills. The subjective nature of the selection and panel process makes scientists nervous.

Panel review process

The panel process was then described by the chairs of 2 panels: Clinical Medicine (Alex Markham) and English Language & Literature (Judy Simons).

Panels were made up of distinguished individuals, because it is important that the community should value the panel and therefore the process. It is good for the academic sector to create these kinds of case studies: Alex Markham was formerly director of Cancer Research UK, and when they told the public what they were doing, their income doubled!

General tips, some of which relate nicely to Claire's definitions above:

  • What did the institution do to generate the impacts cited?
  • “Reach and significance”: balance the two. Hundreds of cases of athlete’s foot, or just a handful of people who wouldn’t have survived otherwise? Both have benefit and impact, but which is most important?
  • How far down the pathway of having an impact is the case study? (ie the point Graeme made about projected vs historical impact).
  • Panel chose to err on the side of positivity.
  • A specific objective which was achieved: this is a much more impressive impact than an unintentional one. Intended impact gets credit! It didn’t just happen without being planned. (NB this needs to be balanced with the point above about dwelling too much on what the research was intended to address.)
  • Avoid the “look at what I’ve done in my distinguished career” style of case study, as this is captured in other elements of the REF.
  • Does impact equal engagement? Or benefit? How to measure? Hard or soft indicators? Can’t tell if audience go away thinking about the performance or not…
  • Research might not be ground-breaking, but it might be “transportive”.
  • Articulating the contribution: use language to reflect the character of the discipline!
  • Give hard supportive evidence: name names and provide dates.
  • Impact criteria for the humanities based on the BBC framework for public value.
  • Feeding the creative economy – consider the publishing industry.
  • Preserve national cultural heritage through partnerships…
  • Extend the global/national knowledge base (beyond academia).
  • Contribute to policy development.
  • Enrich/promote regional communities and their cultural industries.
  • Alex Markham later said in a breakout session that it was obvious when contributions had not been read: he recommended a "reading group" to go through the material before it is submitted.

RCUK Research Outcomes Project

Before the day ended, Sue Smart spoke about this project to create “a single harmonised process allowing us to demonstrate the impact of cross-council activities”. RCUK also have to demonstrate to UK government what the UK is getting from its investment in research. I've written about this elsewhere previously, so I'm not going into detail about it here. David Sweeney contributed to the closing comments to say that RCUK are looking to gather outcomes from all research, whilst REF is picking “the best” of research.

What's next?

HEFCE will give institutions pointers in autumn 2010 about what works in the case studies and what doesn't. There will be a series of workshops, because more work needs to be done with the arts and humanities and the social sciences in particular.

A broad framework will be devised, with scope for panels to tailor, in consultation with their communities.

