All 5 entries tagged Ref


October 04, 2011

The UK REF 2014 – my notes on what citation measurement we might expect

Writing about web page http://www.hefce.ac.uk/research/ref/pubs/2011/02_11/02_11.pdf

These are my notes upon reading the REF 2014 document: “Assessment framework and guidance on submissions” - http://www.hefce.ac.uk/research/ref/pubs/2011/02_11/02_11.pdf page 22 onwards.


Each member of staff who is eligible for inclusion in the REF submission will have up to four outputs submitted.


WHAT CAN BE SUBMITTED:


In summary, each output must be:

- “The product of research, briefly defined as a process of investigation leading to new insights, effectively shared.” There is a full definition of research in the documentation.

- “First brought into the public domain during the publication period, 1 January 2008 to 31 December 2013 or, if a confidential report, lodged with the body to whom it is confidential during this same period”

NB if you get an online pre-print available within these dates but the official publication date is after 31 December 2013, you can still submit the item so long as you have evidence to prove the public domain availability of the item. One form of evidence relating to web content which is acceptable is: “a date-stamped scanned or physical printout or evidence derived from web-site archiving services.” (See Paragraph 111) Likewise though, if your publication was actually in the public domain prior to 1 January 2008 then it won’t be eligible.

- “Produced or authored solely, or co-produced or co-authored, by the member of staff against whom the output is listed, regardless of where the member of staff was employed at the time they produced that output.”

The official documentation expands on this further (paragraph 105 onwards).


Examples of output types given are:

“new materials, devices, images, artefacts, products and buildings; confidential or technical reports; intellectual property, whether in patents or other forms; performances, exhibits or events; work published in non-print media”, and other types can be included. The documentation goes on to say: “Reviews, textbooks or edited works (including editions of texts and translations) may be included if they embody research…”


“A confidential report may only be submitted if the HEI has prior permission from the sponsoring organisation that the output may be made available for assessment.” (para 115)


WHAT IS NOT ELIGIBLE:

Editorships or other activities are not outputs, and so should not be included in the submission. Theses and “items submitted for a research degree” won’t count, although it does seem that published items and other eligible outputs based on your research degree can be listed.

Panels might choose to assess two outputs which are based on the same research and so have "significant material in common" as a single output, or to assess just the content which is distinct in each. Panels' own judgements will also be used where a publication is a version of one published prior to 1 January 2008, to decide whether the publication is eligible and how it is to be assessed: "Submissions should explain where necessary how far any work published earlier was revised to incorporate new material" (Paragraph 113)

“HEIs may not list as the output of a staff member any output produced by a research assistant or research student whom they supervised, unless the staff member co-authored or co-produced the output.” (para 110)

DATA ABOUT THE OUTPUTS

Output types are to be categorized in the institution’s submission, into:

i. Books (or parts of books).

ii. Journal articles and conference contributions.

iii. Physical artefacts.

iv. Exhibitions and performances.

v. Other documents.

vi. Digital artefacts (including web content).

vii. Other.

(Para 118). There will be different data requirements for each of these categories.

Paragraph 119 says:

“Each of the following is required where applicable to the output:

a. Co-authors: the number of additional co-authors.

b. Interdisciplinary research: a flag to indicate to the sub-panel if the output embodies interdisciplinary research.

c. The research group to which the research output is assigned, if applicable. This is not a mandatory field, and neither the presence nor absence of research group is assumed.

d. Request for cross-referral: a request to the sub-panel to consider cross-referring the output to another sub-panel for advice (see paragraph 75d).

e. Request to ’double weight’ the output: for outputs of extended scale and scope, the submitting institution may request that the sub-panel weights the output as two (see paragraphs 123-126).

f. Additional information: Only where required in the relevant panel criteria, a brief statement of additional information to inform the assessment (see paragraph 127).

g. A brief abstract, for outputs in languages other than English (see paragraph 128-130).”


The document gives much more detail about how information on each of these features is to be provided.


CITATION DATA:

“Some sub-panels will consider the number of times that an output has been cited, as additional information about the academic significance of submitted outputs.” (Paragraph 131) They won’t be interested in the impact factors of the journals as such, but the number of citations accrued by the outputs themselves. The citation data is to be provided to the panels by the REF team, and submissions may not include details of citations in additional information for outputs.


“In using such data panels will recognise the limited value of citation data for recently published outputs, the variable citation patterns for different fields of research, the possibility of ‘negative citations’, and the limitations of such data for outputs in languages other than English.” (para 132)


I’ve not read criteria from each panel, but David Young’s blog entry at http://research.blogs.lincoln.ac.uk/2011/08/01/hefce-publishes-draft-ref-panel-criteria-and-working-methods/ neatly summarises the expected use of citation data across the different panels (although note that consultation is still underway at present):

* “Different panels and UoAs will use citation data to differing degrees, and they will also differ in what kinds of outputs are acceptable
* All sub-panels of Panel A (life sciences and allied health disciplines) will use citation data “where it is available, as an indicator of the academic impact of the outputs, to inform its assessment of output quality”
* Just under half of the sub-panels in Panel B (physical sciences) will use citation data: Earth systems & Environment, Chemistry, Physics, and Computer Science.
* Two sub-panels in Panel C (social sciences) will make use of citation data: Geography, Environmental Studies and Archaeology (although not all of this UoA will use citations) and Economics and Econometrics.
* None of Panel D (arts and humanities) will use citation data.
* Physical sciences will be able to submit a larger range of outputs, including patents, computer algorithms and software. In contrast life scientists will be more restricted, and can only include some kinds of outputs, such as databases or textbooks “exceptionally”.”

Where sub-panels are using citation data, it is being made available to them, matched to outputs by the REF team, using DOIs and other bibliographic data. Institutions will be able to verify these matches and to view the citation counts provided to the panels. Citations made after the submission deadline will continue to be counted and provided to the panels. (See para 133)
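The REF team's citation source is a commercial database rather than anything openly available, but as an illustration of what DOI-based matching of citation counts to outputs can look like, here is a minimal sketch using the public Crossref REST API. Crossref is an assumed stand-in for illustration only, not the REF's actual data source, and the example DOI is a placeholder.

```python
# Minimal sketch: look up a citation count for an output by its DOI.
# Uses the public Crossref REST API as an illustrative stand-in -- the REF
# team's own citation data comes from a commercial provider, not Crossref.
import json
import urllib.request

def citation_count(doi: str) -> int:
    """Return Crossref's 'is-referenced-by-count' for the given DOI."""
    url = f"https://api.crossref.org/works/{doi}"
    with urllib.request.urlopen(url) as response:
        record = json.load(response)
    return record["message"].get("is-referenced-by-count", 0)

if __name__ == "__main__":
    # Hypothetical placeholder -- substitute a real output's DOI.
    print(citation_count("10.1000/example-doi"))
```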

ACCESS TO SUBMITTED OUTPUTS

Journal articles and conference papers will be accessed by the REF team via the publishers, so will require DOIs in the submission. Other output types can be provided in an electronic format, or a physical copy, or as appropriate evidence, seemingly in that order of preference. We await the submission system software in autumn 2012.


June 28, 2010

Impact in the Context of the REF

Writing about web page http://www.kcl.ac.uk/iss/support/ref/june2010

On Friday last week I attended an event at King's College London, all about REF measurement of research impact.

General plans

David Sweeney spoke about the need to justify public expenditure and that this is worth doing because we believe in research, and in "new knowledge deployed for the benefit of society". Selective funding will maximise the contribution of research, and we need to consider how we should select...

David recommended that anyone interested in what the REF will be measuring should start with the document released by HEFCE just before the election... I'm not sure exactly which document that was! In any case, they are still waiting for results of their impact pilot, and in particular the feedback from the peer review panels which will take until some time in July.

David's "Next steps" slide listed:

  • Completing the impact pilot exercise
  • Setting up the expert panels
  • Equalities and diversity steering group
  • Development of the submissions system - to be overseen by an expert steering group
  • Establishing formal institutional REF contacts.

Pilot process

Graeme Rosenberg spoke next, about the progress of the impact pilot exercise. Graeme gave a definition of "impact" in the REF context as including: "economic, environmental, social, health, cultural, international, quality of life & others". They are definitely NOT looking for academic impact.

This impact must arise from research work specifically rather than from broader work that academics engage in, and it should be about actual impact that has already occurred rather than projected or potential impacts. The RCUK might be concerned with future impacts, through the grant application process, but the REF is concerned with historical impact.

On the day, there was some confusion as to whether it is best to concentrate on the impact of an individual researcher, or on a portfolio of research which might actually belong to a group, when writing up case studies. Graeme's point was simply that those who submit to the REF ought to consider carefully which is appropriate for their context. Graeme also made the point that the REF is about assessment rather than measurement of research impact.

The pilot looked at 5 different disciplines, and each institution which took part was asked to provide an "impact statement" about a broad range of activities and case studies which had a word limit of 1500 words each. Each institution was asked to provide 1 case study per 10 FTE staff, with a minimum of 2 case studies for the smallest departments.

Research included in the case studies had to have taken place at the submitting institution: this has implications because researchers do move on, and their work would count towards their previous institution's submission, if that institution had kept a record of their research and its impact! Or failing that, if they were at least successful in attempting to reach former staff to ask for their contribution to a case study. (NB retired staff's previous work could count in this context?) Narrative evidence had to be supported by indicators and references to external sources, eg links to websites or names of people (external to the institution) to contact for a reference.

The Pilot's panels were made up of both researchers and "users", and each case study was looked at by at least 4 people from each panel, preferably by two of each type.

Theory of research measurement/assessment

The next speaker was Claire Donovan, who spoke about the Australian experience of backing away from measurement/assessment (Research Quality Framework) with a change of government. Claire introduced some interesting concepts, speaking about a "relevance gap" between the research that society most needs and that which academics produce, and about "Triple bottom line accounting" which takes into account social and environmental consequences, as well as the economic ones. Claire also spoke about "Technometrics" and "Sociometrics" and she amused the audience by saying that the latter had been said to make alchemy look good!

Four definitions which had been proposed for the Australian RQF (which never happened) remain in the current ERA, and they look very useful (to me) for anyone considering what the impact of their research might be/have been and looking to gather evidence of such:

1) Social Benefit: Improving quality of life, stimulating new approaches to social issues; changes in community attitudes and influence upon developments or questions in society at large; informed public debate and improved policy-making; enhancing the knowledge and understanding of the nation; improved equity; and improvements in health, safety and security.

2) Economic Benefit: Improved productivity; adding to economic growth and wealth creation; enhancing the skills base; increased employment; reduced costs; increased innovation capability and global competitiveness; improvements in service delivery; and unquantified economic returns resulting from social and public policy adjustments.

3) Environmental Benefit: Improvements in environment and lifestyle; reduced waste and pollution; improved management of natural resources; reduced consumption of fossil fuels; uptake of recycling techniques; a reduced environmental risk; preservation initiatives; conservation of biodiversity; enhancement of ecosystem services; improved plant and animal varieties; and adaptation to climate change.

4) Cultural Benefit: Supporting greater understanding of where we have come from, and who and what we are as a nation and society; understanding how we relate to other societies and cultures; stimulating creativity within the community; contributing to cultural preservation and enrichment; and bringing new ideas and new modes of experience to the nation.

Claire's main argument was in favour of complexity in whatever measurement/assessment of research takes place.

Putting together a University's return

The next three speakers were from universities which had taken part in the pilot, and they spoke about their approaches to submitting their impact statements and case studies, and the issues they identified along the way. Afternoon breakout sessions included similar information, so they are also described here. I attended the ones on Clinical Medicine and Social work and social policy.

Approaches:

  • HoD identifies who to approach and ask to submit a case study: ask for twice as many as needed and then choose the best.
  • Academics write the materials themselves but then they are re-drafted by the Uni's "research support" team.
  • Involve the press office in the writing of case studies.
  • Set up a writing team to create the case studies, steered by a group of senior academics who provided stories and contacts for the writing team to interview, and supported by a reading group of external people.
  • Training of academic staff who write the materials.
  • "Impact champions" from amongst the academic community to encourage others in the right direction.
  • “Ebullient re-writes in marketing speak don’t go down well with academics”!

Issues:

  • Too much focus on the problem that the research was intended to address and the actual benefit/impact is not properly described.
  • Repeated iterations were needed for almost all case studies, in both speakers' experiences.
  • Impact statements were harder to write than case studies and there was much overlap in the way the documents were written up.
  • The process creates a tendency to focus on impacts that can be measured, rather than on those which matter most.
  • Attribution and provenance of impact is time consuming to demonstrate.  (gathering of evidence!)
  • Cannot guarantee confidentiality of HEFCE materials (because of FOI) so could not include impacts in commercial sector where industrial sensitivities were involved.
  • Belief that Universities don't create impact, they contribute to it.
  • Difference between activity and impact: evidence tends to be about activity, rather than about impact. Is this "interim impact" ie not prospective but not yet historical either?
  • Patchy provision of evidence by the academics: eg dates and times they sat on government research committees, or the contribution their research made to a white paper or green paper is not properly referenced.
  • Negative findings can also have impact: how/whether to write these up? (NB none admitted to it on the day!).
  • Sometimes the research might have had impact (or have been expected to) but political (or other reasons) got in the way.
  • If you're only writing 1000 words then you have to pitch it in relatively simple terms: and some of the panel are "users" rather than academics, so the academic discipline's writing style is not always appropriate... but this is a balance because some panel members are academics as well and even the "users" are not entirely lay audience.

Some found that subject knowledge was vital in supporting a department in writing case studies, whilst others felt that the lay person or different discipline person added a valuable perspective. Most case study writing seems to employ social scientists' skills. The subjective nature of the selection and then panel process makes scientists nervous.

Panel review process

The panel process was then described by the chairs of two pilot panels: Clinical Medicine (Alex Markham) and English Language & Literature (Judy Simons).

Panels were made up of distinguished individuals because it is important that the community should value the panel & therefore the process. It is good for the academic sector to create these kinds of case studies: Alex Markham was formerly director of Cancer Research UK, and said that when they told the public what they were doing, their income doubled!

General tips, some of which relate nicely to Claire's definitions above:

  • What did the institution do to generate the impacts cited?
  • “Reach and significance”: balance the two. Hundreds of cases of athlete's foot, or just a handful of people who wouldn’t have survived otherwise? Both have benefit and impact, but which is most important?
  • Question of how far down the pathway of having an impact is the case study? (ie the point Graeme made about projected or historical impact).
  • Panel chose to err on the side of positivity.
  • Question of a specific objective which was achieved: this is a much more impressive impact than an unintentional one. Intended impact gets credit! It didn’t just happen without being planned. (NB this needs to be balanced with the point about dwelling too much on what the research was intended to address.)
  • Avoid “look at what I’ve done in my distinguished career” style of a case study as this is captured in other elements of the REF. 
  • Does impact equal engagement? Or benefit? How to measure? Hard or soft indicators? Can’t tell if audience go away thinking about the performance or not…
  • Research might not be ground-breaking, but it might be “transportive”.
  • Articulating the contribution: use language to reflect the character of the discipline!
  • Give hard supportive evidence: name names and provide dates.
  • Impact criteria for the humanities based on the BBC framework for public value.
  • Feeding the creative economy – consider the publishing industry.
  • Preserves national cultural heritage through partnerships…
  • Extend global/national knowledge base (beyond academia).
  • Contribute to policy development
  • Enrich/promote regional communities and their cultural industries.
  • Alex Markham later said in a breakout session that it was obvious when contributions had not been read: he recommended a "reading group" to go through the material before it is submitted.

RCUK Research Outcomes Project

Before the day ended, Sue Smart spoke about this project to create “a single harmonised process allowing us to demonstrate the impact of cross-council activities”. RCUK also have to demonstrate to UK government what the UK is getting from its investment in research. I've written about this elsewhere previously, so I'm not going into detail about it here. David Sweeney contributed to the closing comments to say that RCUK are looking to gather outcomes from all research, whilst REF is picking “the best” of research.

What's next?

HEFCE will give institutions pointers about what works in the case studies and what doesn’t, in autumn 2010. There will be a series of workshops because more work needs to be done with the arts and humanities and social sciences in particular.

A broad framework will be devised, with scope for panels to tailor, in consultation with their communities.


June 22, 2010

Measuring impact

Writing about web page http://www.aslib.com/membership/resources/RAND_research.pdf

I wasn't able to attend the recent ASLIB event on Research Support, but it looks like it was a very good one. I particularly like this presentation and the slides in it about international practice in capturing research impact. I'm gearing up for a forthcoming event on the REF at Kings College London on Friday, so I guess that impact is on my mind!


April 29, 2010

All you ever wanted to know about bibliometrics and a lot more that you didn't.

Writing about web page http://www.ics.heacademy.ac.uk/events/displayevent.php?id=238

I attended this event, run by the HE Academy for Information & Computer Science yesterday. Charles Oppenheim presented a briefer version of his slides that I already blogged about, from his visit to Warwick, and he hosted the day. Presentations were each 30 minutes long and this made for a very fast-paced event as speakers crammed considerable knowledge into their short talks.

I was very pleased to meet Nancy Bayers, the new bibliometrician at the University of Leicester. Nancy has come all the way over from Philadelphia in the US and has a background of working for Thomson Reuters. Nancy's presentation was particularly useful in that she looked at the broader aspects of metrics in introducing her talk on citation analysis. My commentary here doesn't do her presentation justice because I'm no bibliometrician. Numbers are not my strongest point and I confess to getting lost in the talk about ratios and percentile distributions! I gleaned a lot, all the same.

Some tools for assessing research (other than bibliometrics) mentioned by Nancy were:

  • Peer review involvement
  • Editorships
  • Research Income/awards

To which I would add:

  • Research grant applications (useful for an institution to measure who is unsuccessful but trying, especially if you can create links with someone who has been successful with the same funding bodies!)
  • PhD supervision load

Although Nancy may also have mentioned those: I was very busy scribbling!

Nancy went on to look at a citation measurement and what it means in context, and how we can go about using that measurement. But before I get into citation measurement stuff, another perspective that I gained from the day was just how many kinds of metrics there are, relating to research assessment. There are those listed above, but on the day people also mentioned "webometrics" and "scientometrics" and various other metrics, and indeed there are other kinds of bibliometric than just numbers of citations. So, some other things that might be worth measuring in terms of research assessment, to add to our list are:

  • Numbers of publications (interesting to note what counts as a publication to be measured: RCUK and REF output types are likely to define these, in my opinion)
  • Impact factors of the journals in which those articles are published.
  • Webometrics - sometimes simply numbers of visitors to a webpage/website, but it could be a whole lot more complex if you want it to be! (eg geographic coverage of visitors, visitors from academic networks, people linking to your site, etc) Could include online version(s) of a journal article, in which case it would be article-level metrics - especially if consolidated from different sources, i.e. different versions of the paper online.
  • Citations - of course! From various sources: Thomson Reuters' Web of Science, Elsevier's Scopus or Google Scholar.

I expect that the list of what could be measured could grow and grow, especially in the digital age... but essentially these things are only worth measuring if we have a use for them.

Back to Nancy's explanation of the context of citation measurements. If we know that our article has had, eg 42 citations, what does that figure actually mean? Nancy spoke about the "expected citation rate" which is the average number of citations achieved by articles in the journal in which our article has been published. Thus we can obtain a ratio of the actual number of citations to the expected number for that article. We could also provide context by looking at the average number of citations obtained by an article in our field or discipline.
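To make that concrete, here is a minimal worked sketch of the ratio (the numbers are my own invented illustration, not figures from Nancy's slides):

```python
# Minimal sketch of the 'actual vs expected' citation ratio described above.
# All numbers are invented for illustration.
actual_citations = 42         # citations our article has received
expected_citations = 14.0     # average citations per article in the same journal

ratio = actual_citations / expected_citations
print(f"Citation ratio: {ratio:.1f}x the journal average")        # -> 3.0x

# The same idea works with a field baseline instead of a journal baseline:
field_average = 21.0          # average citations per article in our discipline
print(f"Field-relative ratio: {actual_citations / field_average:.1f}x")  # -> 2.0x
```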

Using Bibliometrics... (within context, though!)

Comparisons across departments might help University managers to see which departments to invest in. Comparisons across Universities might help with benchmarking exercises.

Bibliometrics can be helpful when choosing which papers to put forward for the REF: we would be looking for papers which are highly cited and which would have a wide geographic outreach.

Authors might want to look at bibliometrics when choosing journals to submit their article(s) to.

Researchers might include bibliometrics relating to previous work in grant applications, as part of their documentation - or indeed to identify potential collaborators. Especially by looking at who is citing their own work!

Nancy also mentioned that bibliometrics could be used "to analyse the current state of research" and I'm not too sure what that meant when reading my notes back, but I guess a topic which is attracting lots of citations would seem to be an important one to research.

Nancy's caveats about bibliometrics are:

  • beware of field/disciplinary differences
  • use multiple metrics
  • use alongside peer review
  • they're probably not suitable for humanities subjects and in some social sciences... and those disciplines that communicate primarily through conferences/other methods, rather than journals.
  • always ask yourself whether the measures that you find seem reasonable!

NB Nancy also spoke about how TR's Web of Science does cover some conference proceedings, but it seemed that the citation measurements for these might not be as reliable as that for journal articles, at present.

All that (and more) in only 30 minutes!

The next speaker was Eric Meyer from the University of Oxford. Eric mentioned the "network effect": the more authors who contribute to a paper, the more citations it seems to attract. In any case, science teams are getting bigger and collaborative research is becoming more and more important... and possible, in the electronic world. This is at least in part a response to European Commission funding priorities! Publishing workflows are also changing, since lab procedures might be published these days, even before there are publications on the results of research. Eric mentioned JoVE, the Journal of Visualized Experiments.

Eric stated that open access publishing works as a way to raise citations, for an article that is a good one. At the moment OA authors stand to gain because not everyone is publishing their work on OA - the implication being that there is a competitive advantage to be had.

One pit-fall of collaborative work Eric described is that, since the different disciplines have different publishing norms, those who are collaborating across disciplines sometimes find that they need to negotiate what will be published where. One researcher might want the work to be published in conference proceedings "or it won't do me any good", whilst another might want the work to appear in a journal article in order to raise awareness of his/her work.

Interestingly, Eric also mentioned that authors might want to use the Sherpa-Romeo website to choose where to publish their journal articles, on the grounds that they might choose one which supports/allows open access publication of their article in some way. I've contributed to Warwick's web pages for researchers on "disseminating your research" and suggested that authors might like to look for an OA journal on the DOAJ and indeed check their funders' policies on Sherpa Juliet, but I've never thought that they might use Sherpa-Romeo as a selection tool! I might add it in...

Eric had a slide looking at "Impact Measures" which referenced the TIDSR toolkit. Something for me to look up because I've not used it myself.... at first glance it looks very interesting indeed. Most of the measures I've listed so far are of the quantitative type, but of course there are various qualitative measures for research assessment, which Eric's slide listed, including stakeholder interviews, user feedback and surveys and questionnaires. The stakeholders mentioned included:

  • Project or own institution's staff
  • User communities
  • Subject specialists
  • Funding bodies

Another concept that Eric spoke about was that of "guild publishing", i.e. departments or groups which launch their own series of working papers or other such publications that have not included a peer review process but which ought to represent a degree of quality owing to the select membership of that department or "guild". This is similar to the aims of Warwick's institutional repository (WRAP) in that we wanted to present Warwick's research in one collection because it is known to be of high quality.

Next up was David Tarrant from the University of Southampton. I knew I'd seen David before and later remembered where - he was at OR08 and is very much a part of the repository developers' scene. However, David is doing a PhD on whether we can predict the order of papers' eventual citation scores/impact by looking at which other papers are co-cited with yours. Or something similarly clever...

David mentioned that our researchers might be interested in understanding/maximising their bibliometrics from the point of view of their job safety and promotion prospects. In this context, I believe that it would be worth an author's while checking which papers are actually attributed to them on the likes of Web of Science. This is because we don't have any universal author identifiers and variations of names and addresses can lead to some authors losing out on measurements which are actually theirs to claim!

Also, David pointed out that funding cycles might influence where researchers would choose to disseminate information about their research: if the funding cycle only covered one publishing cycle of a key journal or one key conference, then this would be the dissemination route of choice for the author and there would not be time for further publications to be produced. Funder expectations could also dictate what dissemination route researchers might take... as indeed our library web pages advise researchers to check funders' policies on open access. What this means is that researchers are not necessarily free to decide for themselves what to write about their research and where to publish it - even if we were not to think of the pressures of the REF and other bibliometric measurements.

David suggested that the purpose of publishing and disseminating information about your research is to build networks; to enhance science and to increase citations (and thereby status). Later discussion amongst the academics in attendance did illustrate the concern that their research should be led by the desire to seek new knowledge and to contribute to the field rather than the desire to achieve publication in certain journals or to achieve certain metric scores... which they believe that the REF is in danger of driving them to.

Those who wish to target high impact journals might be interested to know how the impact factor is calculated. It's basically the number of citations received in a given year by the articles a journal published in the previous two years, divided by the number of citable articles it published in those two years, based on TR's WoS data.
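A minimal worked example of that two-year calculation, with invented figures rather than real journal data:

```python
# Worked example of the two-year journal impact factor calculation
# (illustrative numbers only, not real journal data).
citations_in_2009_to_2007_2008_items = 1200   # citations counted in the JCR year
citable_items_2007 = 180                      # articles + reviews published in 2007
citable_items_2008 = 220                      # articles + reviews published in 2008

impact_factor = citations_in_2009_to_2007_2008_items / (
    citable_items_2007 + citable_items_2008
)
print(f"2009 impact factor: {impact_factor:.2f}")  # -> 3.00
```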

It may or may not be significant that the number of citable articles does not include things like editors' comments and letters, whilst the number of cited articles will include these (if they have been cited). I noticed some time ago that letters could be cited and I wondered if they would count towards an author's own H-Index on WoS. So far as I can tell, they do... although whether it's a good tactic to attempt to get lots of letters published on the grounds that you might achieve more citations more easily is debatable. Few letters are likely to be worth citing, I would have thought. Still, getting a letter published might raise interest in your work more generally... and therefore citations for your other published work.
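Since the H-Index comes up here, a minimal sketch of how it is computed from a list of per-paper citation counts (my own illustration of the definition, with invented counts):

```python
def h_index(citation_counts):
    """Largest h such that the author has h papers with at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Illustrative counts: a cited letter (3 citations) counts like any other item here.
print(h_index([25, 18, 12, 7, 3, 1, 0]))  # -> 4
```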

David spoke about the "long tail" effect: that the majority of citations are for the minority of articles, and quoted some work published in the BMJ in 1997 which I've seen quoted before. I've also heard that we can tell that an article which has not achieved citations within the first two years of its published life is unlikely to achieve citations at all. (See my WRAP blog posting about an event I attended at Cranfield a couple of years ago.) And Nancy Bayers had earlier mentioned the concept of a "hot" article being one which achieved a disproportionate number of citations in its early life.

It was mentioned at yesterday's event, and also at the UKSG conference that I attended recently, that there are more and more journal articles being published. There is pressure on our researchers to publish and so more people are writing more articles (sometimes leading to higher journal subscription costs for libraries!). Presumably therefore there are more articles being cited, because there are more citations being published, but are they all actually citing the same content?

David also quoted some research done on or by the physics subject repository known as arXiv, which indicated that one third of citations are copied verbatim. This was identified by looking at mistakes in citations, apparently. Charles Oppenheim had already told me a lovely anecdote about an article attributed to him in the Library and Information Science Abstracts database which is entirely fictitious (no such article exists) and yet has achieved a citation! It also reminded me of the many inter-library loan forms I processed when I worked at Northampton, and how often I would find that the reference details were erroneous because they had been copied from some web page or other turned up by Google. I picked up on mistakes because I would look at the library catalogue to check if we had the source journal, and would spot things like the fact that our sequence started in 1997 with volume 58, so the reference for volume 3 in 1993 must surely be wrong! I would then look for the correct details by doing a Google search for the article title and author surname, and would often find the source of the erroneous details, as well as the right ones.

I also later remembered the statistic quoted in the versions toolkit which I often recommend to researchers, which more or less says that the majority of economists surveyed would reference the final version of a journal article, even if they had only read a pre-print version in a repository. I daresay that there are disciplinary differences as to whether an author would cite an article that they had either never read (ie they had just copied the citation from someone else's work or a database) or had only read in an early draft. But it is interesting that a citation is not necessarily an indication that a work has been read. Similarly, we all believe that just because people are reading stuff does not mean that they are citing it... although there might be some correlations between webometrics and citations to be discovered.

David also explained that the way the REF Pilot metrics worked was that the number of citations achieved by our article was divided by the average number of citations to similar articles published in the same year. The implication of this is that it is not in your interests to cite lots of other people, as an author! Because your citations of others are adding to the average number of citations to similar articles, and a citation to your own article would therefore appear more significant... well, I hope no-one is trying to game the system in this way! I doubt that journal peer reviewers would actually let you get away with not referencing someone's work that was absolutely key to your subject. Thank goodness for peer review!
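A minimal sketch of that field-normalised calculation as David described it; the figures, and the particular set of "similar" articles, are illustrative assumptions on my part:

```python
# Minimal sketch of the field-normalised citation score used in the REF pilot:
# an article's citations divided by the average for 'similar' articles
# (same subject area, publication year and document type). Numbers are invented.
def normalised_citation_score(article_citations, similar_article_citations):
    baseline = sum(similar_article_citations) / len(similar_article_citations)
    return article_citations / baseline

similar = [4, 9, 2, 11, 6, 4]                   # citations to comparable 2008 articles
print(normalised_citation_score(12, similar))   # 12 / 6.0 -> 2.0
```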

It's probably worth re-iterating at this point, that although there probably are many ways to "game the system", Charles' talk at the beginning of the day and his earlier one at Warwick did point out that research indicates that none of these tactics make any difference to the validity of the correlations between metrics and the RAE results.

David spoke about PageRank, which I haven't listed above in my set of measures for research assessment. I didn't list it because it is something calculated by Google, who don't just give away their data. However, if you can create your own algorithms for this then you could calculate PageRanks. Or there may be other sources which I'm not aware of. The basic concept of PageRank is that if a page linking to yours links to few other pages, and is itself linked to by lots of other pages in turn, then your page scores a higher ranking. The Eigenfactor is a similarly calculated metric, based on a complex weighted algorithm, which David also explained but I missed in all my note-taking. It's well explained on Wikipedia in any case!
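For anyone curious about the mechanics, here is a minimal PageRank sketch over a toy link graph. This is my own illustration of the general idea, not anything David presented or that Google actually computes:

```python
# Minimal PageRank sketch (power iteration) over a toy link graph.
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:             # dangling page: share its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

toy_graph = {"A": ["B"], "B": ["A", "C"], "C": ["A"]}
print(pagerank(toy_graph))   # A collects the most rank in this toy graph
```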

A study from Citebase.org apparently indicates that numbers of hits and PageRank don't correlate well, but PageRank and number of citations do. David had a wealth of information on studies and research in this area and to be honest, you need to look at his slides and hear his explanations. I particularly liked a slide of his which referred to work done at Los Alamos, mapping different types of metrics on a scale that David described as "popularity to prestige", although that is apparently a very approximate description. Popularity would be stuff like webpage hits, whilst prestige would be stuff like citations. I say "stuff" because at this point it gets far more technical and complicated!

The final presentation of the day was by David Mawdsley of HEFCE. By this time I was quite tired so I took fewer notes! I thought that this David added context to David Tarrant's explanation of the REF pilot, in defining what is meant by "similar articles". These are articles in the same subject area, of the same publication year and same document type. Apparently HEFCE would also be glad of a better way of subject-classifying journals. I suggested that they would be better off finding a classification of all journal articles rather than journals, and that repositories might be a source of such data for them... and that repositories might also be a better source of citation data too, if the data could be got at from the world-wide collection of repositories and processed by someone. I put him onto UKCoRR! (NB on this point: WRAP does record all citations in a separate field. Not all repositories do, but I know that those clever developer types can extract citation data from the full text files - I saw a rough and ready tool a couple of years ago so I expect it will only get better!)

David Mawdsley also spoke on the reasons why REF can't take data directly from Scopus or WoS and why institutions must submit it to them. Pretty obvious really, because they don't index all our outputs and in any case, an institution doesn't want to submit all its work but a selection of the best.

Basically what is likely to happen with the REF is that a recommendation will be made as to how bibliometrics might be used by panels... ie which ones they can use and how. The panels will then decide whether bibliometrics are appropriate for their discipline or not, and use them or not, according to what suits the discipline. There won't be variation as to what kinds of metrics are used by which discipline. HEFCE are currently deciding which source of data might be used.

One final comment from amongst the delegates on the day. If scientists are so used to objective, mathematical measures, then they must surely understand the logic of bibliometrics... they are useful and they do help in research assessment. We just need to use them in contexts which make sense, and it is also entirely logical that the arts & humanities disciplines which are used to subjective methods of research are going to remain wedded to the subjectiveness of peer review as an appropriate assessment of their research...


April 07, 2010

Charles Oppenheim's lecture on the REF, Bibliometrics and suchlike!

Writing about web page http://www.hefce.ac.uk/Research/ref/

I'm catching up on myself after a busy couple of weeks: I attended a lecture by Charles Oppenheim, here at the University of Warwick way back in March. Here follows a summary of the sense I gained from Charles' lecture...

Charles' talk covered his involvement with bibliometrics and how HEFCE came to look at bibliometrics owing to Gordon Brown's announcement that the REF would include them, back when he was Chancellor of the Exchequer. Ever since that announcement, it seems that the REF has become less and less about bibliometrics, and that is perhaps as a result of the findings of their pilot. Although it's fair to say that we still don't know exactly what the REF will measure and how... or whether there will be one at all, depending on which party is in government in 2011.

The REF framework is described on the page I've linked to, and it includes three elements: the research environment, outputs and some kind of measure of impact. Charles suggested that one way to use bibliometrics for the REF, should you wish to, would be to use them to add evidence of impact, regardless of however the REF itself might use bibliometrics. I guess that will depend on what is meant by impact.

There are many ways to calculate bibliometrics. The REF pilot was only about Science, Technology and Medicine subjects and even that identified that there was no single method which correlated as consistently across all the disciplines. It is for this reason that there can be no cheap and reliable method to evaluate research by numbers! Even if there were a reliable method, it would be a backwards-looking measure, and future research strategy is important to a University. So bibliometrics can only ever be a part of the answer.

Charles admitted that he has published papers which argued that the RAE should be replaced by bibliometrics, but his papers also said that analyses of such measures would need to be done by subject experts. Correlations between the bibliometrics for research outputs and RAE scores can be found when looking at the bibliometrics on all a researcher's outputs, or on only the outputs put forward for the RAE. Only one study of UK RAE ratings and bibliometrics has not found a correlation of any kind, and that was one based at Cranfield which looked at impact factors of the journals in which outputs were published, rather than at the metrics for the individual articles, apparently.

Charles went on to bust some myths about bibliometrics, one of which was that there is no evidence to suggest that writing an article that is deliberately flawed will earn you more citations as people cite you to publish corrections and refutations of your work! Apparently, people are far more likely to ignore your work if you publish work full of mistakes... which makes sense to me. Another myth is that self-citations and citation syndicates would affect bibliometric measurements, but apparently the effects of these are minimal as they are statistically insignificant. In any case, whatever measurement is used for research measurement, people will alter their behaviour to game the system, and no measurement is entirely without flaws. At this point I wanted to ask Charles what his top tip for gaining most citations would be... and I suspect that the answer would be to publish the best quality of research on the most important topics, but I did want to put my tongue in my cheek and ask!

There are various sources of data. Charles referred to Thomson Reuters' data in Web of Knowledge and to Elsevier's SCOPUS data. At this point I wanted to ask Charles whether he saw repositories as a source of potential citations data. I did ask my question at the end of his talk, and he was very keen on repositories as a source of data for research outputs, but there wasn't really time to discuss more about them as a source of references and therefore citation measurements. Someone else asked about Google Scholar as a potential source of data, also at the end of Charles' talk, and Charles more or less said that the measures on Google Scholar are of little use. The same article might be listed more than once in GScholar's results, but from different sources, and the citation counts for each source might need to be added together; in any case, GScholar will pick up on a citation in someone's slides or a student essay. Although Charles did say that even with such weaknesses there may still be correlations to be found with the RAE ratings, except that there is no study to prove such. (NB my own interpretation of why GScholar doesn't index all of Warwick's repository's articles is that they do conflate records for the same article in at least some instances, and present the publisher record only: see my WRAP blog for more info on that!)
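As an aside on that duplicate-listing problem, here is a minimal sketch of what consolidating split records for the same article might look like. The records, field names and naive title-matching rule are all my own invented illustration, not how Google Scholar actually works internally:

```python
# Minimal sketch: merge duplicate records for the same article (matched naively
# on a normalised title) and sum their citation counts. Data invented.
from collections import defaultdict

records = [
    {"title": "Measuring Research Impact", "source": "publisher", "cited_by": 30},
    {"title": "Measuring research impact.", "source": "repository", "cited_by": 12},
    {"title": "Another Paper Entirely", "source": "publisher", "cited_by": 5},
]

def key(title):
    return "".join(ch for ch in title.lower() if ch.isalnum())

combined = defaultdict(int)
for rec in records:
    combined[key(rec["title"])] += rec["cited_by"]

print(dict(combined))   # the two versions of the first article total 42 citations
```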

However citations are measured, they can be used in different ways to inform the REF, and no-one is yet certain how they might be used. They might be used to inform judgements about particular papers. They might be used to ensure the consistency of panel review ratings, so that departments with similar citation rates will not be scored with vastly different star ratings, unless with good reason. And a new government might scrap the REF altogether and do something entirely different.

Some bibliometric measurement techniques that Charles has used (a minimal sketch of the correlation step follows the list):

1) Look at the total number of citations received by a department in the same period as covered by the former RAE and then correlate that with the RAE ratings.

2) Look at the average number of citations per member of staff within a department, and then to correlate this with the RAE outcomes.

3) Look at the citations of just the outputs reported for the RAE or the total outputs of the staff members.
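Here is that minimal sketch of technique (1), correlating departments' total citation counts with their RAE ratings. The figures are invented, and the choice of a rank correlation is my own assumption about a plausible statistic, not a claim about what Charles's published studies used:

```python
# Minimal sketch: correlate departments' total citation counts over an RAE
# period with their RAE ratings. Requires SciPy; data invented for illustration.
from scipy.stats import spearmanr

total_citations = [1450, 980, 2210, 640, 1720]   # per department, RAE period
rae_ratings     = [5,    4,   5,    3,   5]      # ratings mapped to numbers

rho, p_value = spearmanr(total_citations, rae_ratings)
print(f"Spearman correlation: {rho:.2f} (p = {p_value:.3f})")
```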

At any rate, since Charles' talk there has been yet another article in the THES explaining further back-tracking on the part of HEFCE about the importance of bibliometrics and the timing of the REF and we are now all awaiting the outcome of the general election and any impact that may have...

