April 29, 2010
All you ever wanted to know about bibliometrics and a lot more that you didn't.
Writing about web page http://www.ics.heacademy.ac.uk/events/displayevent.php?id=238
I attended this event, run by the HE Academy for Information & Computer Science, yesterday. Charles Oppenheim hosted the day and presented a briefer version of the slides from his visit to Warwick, which I have already blogged about. Presentations were each 30 minutes long, which made for a very fast-paced event as speakers crammed considerable knowledge into their short talks.
I was very pleased to meet Nancy Bayers, the new bibliometrician at the University of Leicester. Nancy has come all the way over from Philadelphia in the US and has a background of working for Thomson Reuters. Nancy's presentation was particularly useful in that she looked at the broader aspects of metrics in introducing her talk on citation analysis. My commentary here doesn't do her presentation justice because I'm no bibliometrician. Numbers are not my strongest point and I confess to getting lost in the talk about ratios and percentile distributions! I gleaned a lot, all the same.
Some tools for assessing research (other than bibliometrics) mentioned by Nancy were:
- Peer review involvement
- Editorships
- Research Income/awards
To which I would add:
- Research grant applications (useful for an institution to measure who is unsuccessful but trying, especially if you can create links with someone who has been successful with the same funding bodies!)
- PhD supervision load
Nancy may also have mentioned those, though; I was very busy scribbling!
Nancy went on to look at a citation measurement and what it means in context, and how we can go about using that measurement. But before I get into citation measurement stuff, another perspective that I gained from the day was just how many kinds of metrics there are, relating to research assessment. There are those listed above, but on the day people also mentioned "webometrics" and "scientometrics" and various other metrics, and indeed there are other kinds of bibliometric than just numbers of citations. So, some other things that might be worth measuring in terms of research assessment, to add to our list are:
- Numbers of publications (interesting to note what counts as a publication to be measured: RCUK and REF output types are likely to define these, in my opinion)
- Impact factors of the journals in which those articles are published.
- Webometrics - sometimes simply numbers of visitors to a webpage/website, but it could be a whole lot more complex if you want it to be! (eg geographic coverage of visitors, visitors from academic networks, people linking to your site, etc) Could include online version(s) of a journal article, in which case it would be article-level metrics - especially if consolidated from different sources, i.e. different versions of the paper online.
- Citations - of course! From various sources: Thomson Reuters' Web of Science, Elsevier's Scopus or Google Scholar.
I expect that the list of what could be measured could grow and grow, especially in the digital age... but essentially these things are only worth measuring if we have a use for them.
Back to Nancy's explanation of the context of citation measurements. If we know that our article has had, eg 42 citations, what does that figure actually mean? Nancy spoke about the "expected citation rate" which is the average number of citations achieved by articles in the journal in which our article has been published. Thus we can obtain a ratio of the actual number of citations to the expected number for that article. We could also provide context by looking at the average number of citations obtained by an article in our field or discipline.
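To make that concrete, here's a minimal sketch of the kind of ratio Nancy described. The numbers are invented purely for illustration, and I'm assuming a simple arithmetic mean for the baselines:

```python
# A minimal sketch of the "actual vs expected" citation ratio.
# All figures below are made up purely for illustration.

actual_citations = 42          # citations our article has received
journal_average = 14.0         # average citations per article in the same journal
field_average = 10.5           # average citations per article in our field

ratio_vs_journal = actual_citations / journal_average   # 3.0: three times the journal norm
ratio_vs_field = actual_citations / field_average       # 4.0: four times the field norm

print(f"Journal-relative impact: {ratio_vs_journal:.1f}")
print(f"Field-relative impact:   {ratio_vs_field:.1f}")
```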
Using Bibliometrics... (within context, though!)
Comparisons across departments might help University managers to see which departments to invest in. Comparisons across Universities might help with benchmarking exercises.
Bibliometrics can be helpful when choosing which papers to put forward for the REF: we would be looking for papers which are highly cited and which would have a wide geographic outreach.
Authors might want to look at bibliometrics when choosing journals to submit their article(s) to.
Researchers might include bibliometrics relating to previous work in grant applications, as part of their documentation - or indeed to identify potential collaborators. Especially by looking at who is citing their own work!
Nancy also mentioned that bibliometrics could be used "to analyse the current state of research"; reading my notes back, I'm not too sure what that meant, but a topic that is attracting lots of citations would seem to be an important one to research, I guess.
Nancy's caveats about bibliometrics are:
- beware of field/disciplinary differences
- use multiple metrics
- use alongside peer review
- they're probably not suitable for humanities subjects and some social sciences... or for those disciplines that communicate primarily through conferences/other methods, rather than journals.
- always ask yourself whether the measures that you find seem reasonable!
NB Nancy also spoke about how TR's Web of Science does cover some conference proceedings, but it seemed that the citation measurements for these might not be as reliable as that for journal articles, at present.
All that (and more) in only 30 minutes!
The next speaker was Eric Meyer from the University of Oxford. Eric mentioned the "network effect": the more authors who contribute to a paper, the more citations it seems to attract. In any case, science teams are getting bigger and collaborative research is becoming more and more important... and possible, in the electronic world. This is at least in part a response to European Commission funding priorities! Publishing workflows are also changing, since lab procedures might be published these days, even before there are publications on the results of the research. Eric mentioned JoVE, the Journal of Visualized Experiments.
Eric stated that open access publishing works as a way to raise citations for a good article. At the moment OA authors stand to gain because not everyone is publishing their work open access - the implication being that there is a competitive advantage to be had.
One pitfall of collaborative work that Eric described is that, since different disciplines have different publishing norms, those who collaborate across disciplines sometimes find that they need to negotiate what will be published where. One researcher might want the work to be published in conference proceedings "or it won't do me any good", whilst another might want the work to appear in a journal article in order to raise awareness of his/her work.
Interestingly, Eric also mentioned that authors might want to use the Sherpa-Romeo website to choose where to publish their journal articles, on the grounds that they might choose one which supports/allows open access publication of their article in some way. I've contributed to Warwick's web pages for researchers on "disseminating your research" and suggested that authors might like to look for an OA journal on the DOAJ and indeed check their funders' policies on Sherpa Juliet, but I've never thought that they might use Sherpa-Romeo as a selection tool! I might add it in...
Eric had a slide looking at "Impact Measures" which referenced the TIDSR toolkit. Something for me to look up, because I've not used it myself... at first glance it looks very interesting indeed. Most of the measures I've listed so far are of the quantitative type, but of course there are various qualitative measures for research assessment, which Eric's slide listed, including stakeholder interviews, user feedback, and surveys and questionnaires. The stakeholders mentioned included:
- Project or own institution's staff
- User communities
- Subject specialists
- Funding bodies
Another concept that Eric spoke about was that of "guild publishing", i.e. departments or groups which launch their own series of working papers or other such publications that have not included a peer review process but which ought to represent a degree of quality owing to the select membership of that department or "guild". This is similar to the aims of Warwick's institutional repository (WRAP) in that we wanted to present Warwick's research in one collection because it is known to be of high quality.
Next up was David Tarrant from the University of Southampton. I knew I'd seen David before and later remembered where - he was at OR08 and is very much a part of the repository developers' scene. However, David is doing a PhD on whether we can predict the rank order of papers' eventual citation scores/impact by looking at which other papers are co-cited with yours. Or something similarly clever...
David mentioned that our researchers might be interested in understanding/maximising their bibliometrics from the point of view of their job safety and promotion prospects. In this context, I believe that it would be worth an author's while checking which papers are actually attributed to them on the likes of Web of Science. This is because we don't have any universal author identifiers and variations of names and addresses can lead to some authors losing out on measurements which are actually theirs to claim!
Also, David pointed out that funding cycles might influence where researchers choose to disseminate information about their research: if the funding cycle only covered one publishing cycle of a key journal or one key conference, then this would be the dissemination route of choice for the author, and there would not be time for further publications to be produced. Funder expectations could also dictate what dissemination route researchers might take... as indeed our library web pages advise researchers to check funders' policies on open access. What this means is that researchers are not necessarily free to decide for themselves what to write about their research and where to publish it - even before we think of the pressures of the REF and other bibliometric measurements.
David suggested that the purpose of publishing and disseminating information about your research is to build networks, to enhance science and to increase citations (and thereby status). Later discussion amongst the academics in attendance did illustrate the concern that their research should be led by the desire to seek new knowledge and to contribute to the field, rather than the desire to achieve publication in certain journals or to achieve certain metric scores... which they believe the REF is in danger of driving them to.
Those who wish to target high impact journals might be interested to know how the impact factor is calculated. It's basically the number of citations received in a given year to the items a journal published in the previous two years, divided by the number of citable articles it published in those two years, as counted on TR's WoS.
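To make that concrete, here's a rough sketch of the two-year calculation with invented numbers, purely for illustration:

```python
# A rough sketch of the (two-year) journal impact factor calculation.
# The figures are invented for illustration only.

citations_in_2010_to_2008_items = 300
citations_in_2010_to_2009_items = 250
citable_items_2008 = 120   # articles and reviews; letters and editorials excluded
citable_items_2009 = 130

impact_factor_2010 = (
    (citations_in_2010_to_2008_items + citations_in_2010_to_2009_items)
    / (citable_items_2008 + citable_items_2009)
)
print(f"2010 impact factor: {impact_factor_2010:.2f}")   # 2.20
```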
It may or may not be significant that the count of citable articles (the denominator) does not include things like editors' comments and letters, whilst the citation count (the numerator) will include citations to these, if they have been cited. I noticed some time ago that letters could be cited and I wondered if they would count towards an author's own H-Index on WoS. So far as I can tell, they do... although whether it's a good tactic to attempt to get lots of letters published, on the grounds that you might achieve more citations more easily, is debatable. Few letters are likely to be worth citing, I would have thought. Still, getting a letter published might raise interest in your work more generally... and therefore citations for your other published work.
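For anyone unfamiliar with the H-Index itself: an author has index h if h of their papers have each been cited at least h times. A quick sketch, with invented citation counts (whether letters count will depend on the data source):

```python
# A quick sketch of how the h-index is computed: the largest h such that
# h papers have at least h citations each. Citation counts are invented.

def h_index(citation_counts):
    """Return the largest h such that h papers have >= h citations."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

papers = [25, 18, 12, 9, 7, 7, 4, 2, 1, 0]   # citations per paper, made up
print(h_index(papers))   # 6
```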
David spoke about the "long tail" effect, that the majority of citations are for a minority of articles, and quoted some work published in the BMJ in 1997 which I've seen quoted before. I've also heard that an article which has not achieved citations within the first two years of its published life is unlikely to achieve citations at all. (See my WRAP blog posting about an event I attended at Cranfield a couple of years ago.) And Nancy Bayers had earlier mentioned the concept of a "hot" article being one which achieves a disproportionate number of citations in its early life.
It was mentioned at yesterday's event, and also at the UKSG conference that I attended recently, that more and more journal articles are being published. There is pressure on our researchers to publish and so more people are writing more articles (sometimes leading to higher journal subscription costs for libraries!). Presumably, therefore, there are more citations being published, but are more articles actually being cited, or are they all citing the same content?
David also quoted some research done on or by the Physics subject repository known as arXiv, which indicated that one third of citations are copied verbatim. This was identified by looking at mistakes in citations, apparently. Charles Oppenheim had already told me a lovely anecdote about an article attributed to him in the Library and Information Science Abstracts database which is entirely fictitious, in that no such article exists, yet which has achieved a citation! It also reminded me of the many inter-library loan forms I processed when I worked at Northampton, and how often I would find that the reference details were erroneous because they had been copied from some web page or other turned up by Google. I picked up on mistakes because I would look at the library catalogue to check if we had the source journal, and would spot things like the fact that our sequence started in 1997 with volume 58, so the reference for volume 3 in 1993 must surely be wrong! I would then look for the correct details by doing a Google search for the article title and author surname, and would often find the source of the erroneous details, as well as the right ones.
I also later remembered the statistic quoted in the versions toolkit which I often recommend to researchers, which more or less says that the majority of economists surveyed would reference the final version of a journal article, even if they had only read a pre-print version in a repository. I daresay that there are disciplinary differences as to whether an author would cite an article that they had either never read (ie they had just copied the citation from someone else's work or a database) or had only read in an early draft. But it is interesting that a citation is not necessarily an indication that a work has been read. Similarly, we all believe that just because people are reading stuff does not mean that they are citing it... although there might be some correlations between webometrics and citations to be discovered.
David also explained that the way the REF pilot metrics worked was that the number of citations achieved by our article was divided by the average number of citations to similar articles published in the same year. The implication of this is that it is not in your interests, as an author, to cite lots of other people! Your citations of others add to the average number of citations to similar articles, so a citation to your own article would appear more significant... well, I hope no-one is trying to game the system in this way! I doubt that journal peer reviewers would actually let you get away with not referencing someone's work that was absolutely key to your subject. Thank goodness for peer review!
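As I understood it, the normalisation works something like this (a minimal sketch with made-up numbers; the real REF pilot baseline also matched on subject area and document type, as David Mawdsley explained later):

```python
# A minimal sketch of a field-normalised citation score: citations to our
# article divided by the average citations to "similar" articles.
# All numbers are invented for illustration.

our_citations = 42
similar_article_citations = [10, 5, 30, 0, 12, 8, 22, 3]   # made-up comparison set

baseline = sum(similar_article_citations) / len(similar_article_citations)  # 11.25
normalised_score = our_citations / baseline                                 # ~3.7

print(f"Baseline: {baseline:.2f}, normalised score: {normalised_score:.2f}")
```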
It's probably worth re-iterating at this point, that although there probably are many ways to "game the system", Charles' talk at the beginning of the day and his earlier one at Warwick did point out that research indicates that none of these tactics make any difference to the validity of the correlations between metrics and the RAE results.
David spoke about PageRank, which I haven't listed above in my set of measures for research assessment. I didn't list it because it is something calculated by Google, who don't just give away their data. However, if you can create your own algorithms for this then you could calculate PageRanks. Or there may be other sources which I'm not aware of. The basic concept of PageRank is that if a page linking to yours does not link to many other pages, and is itself linked to by lots of other pages in turn, then your page scores a higher ranking. The Eigenfactor is a similarly calculated metric, based on a complex weighted algorithm, which David also explained but I missed in all my note-taking. It's well explained on Wikipedia in any case!
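For the curious, here is a toy illustration of the idea (a made-up four-page link graph and a simple power iteration; nothing to do with Google's actual implementation):

```python
# A toy power-iteration PageRank over a tiny invented link graph, just to
# illustrate the intuition: a link counts for more when it comes from a page
# that is itself well linked-to and links to few other pages.

damping = 0.85
links = {                      # page -> pages it links to (invented graph)
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):            # iterate until the scores settle
    new_rank = {}
    for p in pages:
        incoming = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
        new_rank[p] = (1 - damping) / len(pages) + damping * incoming
    rank = new_rank

print(sorted(rank.items(), key=lambda kv: -kv[1]))   # "C" ends up on top
```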
A study from Citebase.org apparently indicates that numbers of hits and PageRank don't correlate well, but PageRank and number of citations do. David had a wealth of information on studies and research in this area and to be honest, you need to look at his slides and hear his explanations. I particularly liked a slide of his which referred to work done at Los Alamos, mapping different types of metrics on a scale that David described as "popularity to prestige", although that is apparently a very approximate description. Popularity would be stuff like webpage hits, whilst prestige would be stuff like citations. I say "stuff" because at this point it gets far more technical and complicated!
The final presentation of the day was by David Mawdsley of HEFCE. By this time I was quite tired so I took fewer notes! I thought that this David added context to David Tarrant's explanation of the REF pilot, in defining what is meant by "similar articles". These are articles in the same subject area, of the same publication year and same document type. Apparently HEFCE would also be glad of a better way of subject-classifying journals, too. I suggested that they would be better off finding a classification of all journal articles rather than journals, and that repositories might be a source of such data for them... and that repositories might also be a better source of citation data too, if the data could be got at from the world-wide collection of repositories and processed by someone. I put him onto UKCoRR! (NB on this point: WRAP does record all citations in a separate field. Not all repositories do but I know that those clever developer types can extract citation data from the full text files - I saw a rough and ready tool a couple of years ago so I expect it will only get better!)
David Mawdsley also spoke on the reasons why REF can't take data directly from Scopus or WoS and why institutions must submit it to them. Pretty obvious really, because they don't index all our outputs and in any case, an institution doesn't want to submit all its work but a selection of the best.
Basically what is likely to happen with the REF is that a recommendation will be made as to how bibliometrics might be used by panels... ie which ones they can use and how. The panels will then decide whether bibliometrics are appropriate for their discipline or not, and use them or not, according to what suits the discipline. There won't be variation as to what kinds of metrics are used by which discipline. HEFCE are currently deciding which source of data might be used.
One final comment from amongst the delegates on the day: if scientists are so used to objective, mathematical measures, then they must surely understand the logic of bibliometrics... they are useful and they do help in research assessment. We just need to use them in contexts which make sense. And it is also entirely logical that the arts & humanities disciplines, which are used to subjective methods of research, are going to remain wedded to the subjectivity of peer review as an appropriate assessment of their research...