October 08, 2011

No more posts here: This blog has moved!

I have today moved this blog to WordPress: the new URL is http://statgeek.wordpress.com .

There will not be any further posts made here, they will all be on the WordPress blog pages linked above.


June 25, 2011

R and citations

We’re hosting the international useR! conference at Warwick this summer, and I thought it might be interesting to try to get some data on how the use of R is growing. I decided to look at scholarly citations to R, mainly because I know where to find the relevant information.

I have access to the ISI Web of Knowledge, as well as to Google Scholar. The data below comes from the ISI Web of Knowledge database, which counts (mainly?) citations found in academic journals.

Background: How R is cited
Since version 0.90.0 of R, which was released in November 1999, the distributed software has included a FAQ document containing (among many other things) information on how to cite R. Initially (in 1999) the instruction given in the FAQ was to cite

  Ihaka, R. and Gentleman, R. (1996). R: A language for data analysis and graphics. Journal of Computational and Graphical Statistics, 5, 299–314.

When R version 1.8.1 was released in November 2003 the advice on citing R changed: people using R in published work were asked to cite

  R Development Core Team (2003). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria.

The “2003” part of the citation advice has changed with each passing year; for example when R 1.9.1 was released (in June 2004) it was updated to “2004”.
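As an aside, the recommended citation for any installed version of R can be displayed from within R itself. A minimal sketch (using standard functions from the utils package):

    ## Print the citation recommended for the running version of R; the year
    ## in the citation tracks that version's release year.
    citation()

    ## The same citation in BibTeX form, for pasting into a bibliography
    toBibtex(citation())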

ISI Web of Knowledge: Getting the data
Finding the citation counts by searching the ISI database directly does not work, because:
  1. the ISI database does not index Journal of Computational and Graphical Statistics as far back as 1996; and
  2. the “R Development Core Team” citations are (rightly) not counted as citations to journal articles, so they also are not directly indexed.

So here is what I did: I looked up published papers in the ISI index which I knew would cite R correctly. [This was easy; for example my friend Achim Zeileis has published many papers of this kind, so a lot of the results were delivered through a search for his name as an author.] For each such paper, the citation of interest would appear in its references. I then asked the Web of Knowledge search engine for all other papers which cited the same source, with the resulting counts tabulated by year of publication.

It seems that the ISI database aims to associate a unique identifier with each cited item, including items that are not themselves indexed as journal articles in the database. This is what made the approach described above possible.

There’s a hitch, though! It seems that, for some cited items, more than one identifier gets used. Thus it is hard to be sure that the counts below include all of the citations to R: indeed, as I mention further below, I am pretty sure that my search will have missed some citations to R, where the identifier assigned by ISI was not their “normal” one. (This probably seems a bit cryptic, but should become clearer from the table below.)
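Once the citing records for a given identifier have been exported, tabulating them by year is routine. Here is a minimal sketch in R — not the procedure I actually used, which was done interactively in the Web of Knowledge interface — assuming a hypothetical CSV export isi_citations.csv with one row per citing paper and columns cited_id and pub_year:

    ## Hypothetical export: one row per citing paper, with the ISI identifier
    ## of the cited item (cited_id) and the citing paper's year (pub_year).
    cites <- read.csv("isi_citations.csv", stringsAsFactors = FALSE)

    ## Citation counts cross-classified by cited identifier and year
    counts <- with(cites, table(cited_id, pub_year))

    ## Yearly totals over all identifiers, and the grand total
    colSums(counts)
    sum(counts)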

Citation counts
As extracted from the ISI Web of Knowledge on 25 June 2011:

ISI identifier                                 1998 1999 2000 2001 2002 2003 2004 2005 2006 2007 2008 2009 2010 Total
IHAKA R, J COMPUTATIONAL GRAP 5 : 299 1996        5   15   18   43  131  290  472  528  435  419  449  378  396  3579
*R DEV COR TEAM, R LANG ENV STAT COMP : 2003                                   39  123   91   57   39   25   14   388
*R DEV COR TEAM, R LANG ENV STAT COMP : 2004                                   16  235  421  327  289  187  126  1601
*R DEV COR TEAM, R LANG ENV STAT COMP : 2005                                        42  397  531  511  445  366  2292
*R DEV COR TEAM, LANG ENV STAT COMP : 2005                                            5   39   75   41   25   10   195
*R DEV COR TEAM, R LANG ENV STAT COMP : 2006                                             55  438  849  656  461  2459
*R DEV COR TEAM, R LANG ENV STAT COMP : 2007                                                  92  714  962  733  2501
*R DEV COR TEAM, R LANG ENV STAT COMP : 2008                                                       208 1402 1906  3516
*R DEV COR TEAM, LANG ENV STAT COMP : 2008                                                            7   21   44    72
*R DEV COR TEAM, R LANG ENV STAT COMP : 2009                                                            172 1363  1535
*R DEV COR TEAM, R LANG ENV STAT COMP : 2010                                                                 205   205
*R DEV COR TEAM, R LANG ENV STAT COMP :                                         1   12   14   25   36   81   93   262
Total                                             5   15   18   43  131  290  528  945 1452 1964 3143 4354 5717 18605

For the “R Development Core Team (year)” citations, the peak appears about 2 years after the year concerned. This presumably reflects journal review and backlog times.

There are almost certainly some ISI identifiers missing from the above table (and, as a result, almost certainly some citations not yet counted by me). For example, the number of citations found above to R Development Core Team (2009) is lower than might be expected given the general rate of growth that is evident in the table: there is probably at least one other identifier by which such citations are labelled in the ISI database (I just haven’t found it/them yet!). If anyone reading this can help with finding the “missing” identifiers and associated citation counts, I would be grateful.

The graph below shows the citations found within each year since 1998. Citations to Ihaka and Gentleman (1996) and to R Development Core Team (any year) are distinguished in the graph, and the total count of the two kinds of citation is also shown. [Click on the graph to view it at a larger size.]
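For anyone who wants to reproduce a graph of this kind, here is a minimal sketch in R (not the code used for the graph shown here); the yearly counts are the column totals from the table above, with all of the R Development Core Team identifiers combined:

    ## Yearly citation counts taken from the table above
    years <- 1998:2010
    ihaka <- c(5, 15, 18, 43, 131, 290, 472, 528, 435, 419, 449, 378, 396)
    rdct  <- c(0, 0, 0, 0, 0, 0, 56, 417, 1017, 1545, 2694, 3976, 5321)
    total <- ihaka + rdct

    ## Plot the two kinds of citation, and their total, against year
    matplot(years, cbind(ihaka, rdct, total), type = "b", pch = 1:3, lty = 1,
            xlab = "Year of publication", ylab = "Citations found in ISI",
            main = "Citations to R, 1998-2010")
    legend("topleft", c("Ihaka & Gentleman (1996)",
                        "R Development Core Team (any year)", "Total"),
           pch = 1:3, lty = 1, col = 1:3)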

© David Firth, June 2011

To cite this entry:
Firth, D (2011). R and citations. Weblog entry, University of Warwick, UK; at URL http://blogs.warwick.ac.uk/davidfirth/entry/r_and_citations/.



February 03, 2011

Have rail fares gone up this year?

Why does that big number end in 8?
I have to go to London tomorrow, so I thought I’d check how much the price of my normal rail ticket has increased in the new year. I didn’t ask for the first class fare, but they told me it anyway. Having picked myself up off the floor, I’m a bit curious about that last digit. (Click on the image to see it more clearly.)


February 07, 2010

RAE 2008: How much weight did research outputs actually get?

In the 2008 UK Research Assessment Exercise each subject-area assessment panel specified and published in advance the weight to be given to each of the three parts of the assessment, namely “research outputs”, “research environment” and “esteem”. The quality “sub-profiles” for those three parts were then combined into an overall quality profile for each department assessed, by using the published weights. The overall quality profiles have since been used in HEFCE funding allocations, and in various league tables published by newspapers and others.

For example, RAE Panel F (Pure and Applied Maths, Statistics, Operational Research, Computer Science and Informatics) specified the following weights:
  • Research outputs: 70%
  • Research environment: 20%
  • Esteem: 10%

The weight specified for research outputs varied from 50% (for RAE Panel G, engineering disciplines) to 80% (for RAE Panel N, humanities disciplines).

When the RAE sub-profiles were published in spring 2009, it became clear that the assessments for the three parts were often quite different from one another. For example, some of the assessment panels awarded many more 4* (“world leading”) grades for research environment and esteem than for research outputs. These seemingly systematic differences naturally prompt the question: to what extent are the agreed and published weights for the three parts reflected in funding allocations, league tables, etc.?

Let’s leave the consideration of league tables for another time. Here we’ll calculate the actual relative weights of the three parts in terms of their effect on funding outcomes, and compare those with the weights that were published and used by RAE assessment panels.

The formula used by HEFCE in 2009 awarded quality-related research funding to departments in proportion to

7 p_{4d} + 3 p_{3d} + p_{2d}

where the p’s come from the department’s overall RAE profile (being the percentages at quality levels 4*, 3* and 2*). Now, from the published sub-profile for research outputs, it can also be calculated how much of any department’s allocated funding came from the research outputs component, in the obvious way. The actual weight accorded to research outputs in the 2009 funding outcomes by a given RAE Sub-panel is then

  {\sum_d(\textrm{funding from RAE research outputs profile for department } d) \over\sum_d(\textrm{funding from overall RAE profile for department } d)}

where the summations are over all of the departments d assessed by the Sub-panel. (In the calculation here I have used the un-rounded overall profiles, not the crudely rounded ones used by HEFCE in their 2009 funding allocation. I’ll write more about that in a later post. Rounded or un-rounded doesn’t really affect the main point here, though.)
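To make the calculation concrete, here is a rough sketch in R with invented inputs (purely illustrative; the real calculation uses the published profiles and submitted FTE numbers for each department under a sub-panel):

    ## Funding rates for the five RAE quality levels (4*, 3*, 2*, 1*, 0*):
    ## 7:3:1:0:0 for 2009; use c(9, 3, 1, 0, 0) for 2010.
    rates <- c(7, 3, 1, 0, 0)

    ## Funding measure implied by a profile matrix (rows = departments)
    score <- function(profile) drop(profile %*% rates)

    ## Actual weight of research outputs for a sub-panel: funding attributable
    ## to the outputs sub-profiles, divided by funding implied by the overall
    ## profiles, summed over the sub-panel's departments.
    actual_outputs_weight <- function(overall, outputs, fte, w_outputs) {
      sum(w_outputs * score(outputs) * fte) / sum(score(overall) * fte)
    }

    ## Illustration with two invented departments (profiles are percentages)
    overall <- rbind(c(20, 40, 30, 10, 0), c(10, 35, 40, 15, 0))
    outputs <- rbind(c(15, 40, 35, 10, 0), c( 5, 35, 45, 15, 0))
    actual_outputs_weight(overall, outputs, fte = c(30, 20), w_outputs = 0.7)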

For 2010 it seems that the HEFCE funding rates will be in the ratio 9:3:1 rather than 7:3:1, i.e., proportionately more funds will be given to departments with a high percentage of work assessed at the 4* quality level. The table below lists the discrepancies between the actual and intended weight given to Outputs, by RAE Sub-panel, using the 2009 and 2010 HEFCE funding rates. For example, the RAE Sub-panel J41 (Sociology) decided that 75% of the weight should go to Outputs, but the reality in 2009 was that only 56.6% of the HEFCE “QR” funding to Sociology departments came via their Outputs sub-profiles; the corresponding figure that appears in the table below is 56.6 - 75 = -18.4. An alternative view of the same numbers is that the Sociology Sub-panel intended to give combined weight 25% to “research environment” and “esteem”, but those two parts of the assessment actually accounted for a very much larger 43.4% of the 2009 funding allocation to Sociology departments (and with the new funding rates for 2010 that will increase to 45.4%).

RAE Panel   RAE Sub-panel name   2009   2010   (discrepancy in percentage points: actual minus intended weight for Outputs)
A Cardiovascular Medicine -2.9 -3.4
A Cancer Studies -3.8 -4.3
A Infection and Immunology -7.7 -9.0
A Other Hospital Based Clinical Subjects -13.4 -16.1
A Other Laboratory Based Clinical Subjects -4.5 -4.9
B Epidemiology and Public Health -10.7 -12.5
B Health Services Research -9.2 -10.6
B Primary Care and Other Community Based Clinical Subjects -5.0 -5.4
B Psychiatry, Neuroscience and Clinical Psychology -6.4 -7.2
C Dentistry 0.2 -0.5
C Nursing and Midwifery -2.3 -3.6
C Allied Health Professions and Studies -1.9 -2.5
C Pharmacy -4.1 -5.3
D Biological Sciences -5.5 -6.0
D Pre-clinical and Human Biological Sciences -4.9 -5.6
D Agriculture, Veterinary and Food Science -7.0 -8.2
E Earth Systems and Environmental Sciences -4.9 -5.6
E Chemistry -3.5 -3.9
E Physics -3.2 -4.1
F Pure Mathematics -3.8 -4.2
F Applied Mathematics -6.0 -7.2
F Statistics and Operational Research -3.2 -3.5
F Computer Science and Informatics 1.8 1.6
G Electrical and Electronic Engineering 2.0 1.8
G General Engineering and Mineral & Mining Engineering 3.2 2.7
G Chemical Engineering 7.1 7.8
G Civil Engineering 1.2 0.6
G Mechanical, Aeronautical and Manufacturing Engineering 3.7 3.7
G Metallurgy and Materials 2.6 2.2
H Architecture and the Built Environment -2.8 -3.7
H Town and Country Planning -3.2 -3.3
H Geography and Environmental Studies -3.8 -4.5
H Archaeology -10.9 -12.6
I Economics and Econometrics -1.2 -1.1
I Accounting and Finance -4.0 -4.7
I Business and Management Studies -5.3 -6.2
I Library and Information Management -8.9 -10.0
J Law -13.4 -14.7
J Politics and International Studies -9.6 -10.0
J Social Work and Social Policy & Administration -8.4 -9.7
J Sociology -18.4 -20.4
J Anthropology -18.6 -21.2
J Development Studies -11.5 -12.9
K Psychology -3.6 -4.6
K Education -5.9 -7.5
K Sports-Related Studies -4.5 -5.5
L American Studies and Anglophone Area Studies -13.3 -14.7
L Middle Eastern and African Studies -14.3 -16.0
L Asian Studies -14.1 -15.5
L European Studies -11.1 -13.1
M Russian, Slavonic and East European Languages -11.1 -12.8
M French -6.9 -7.6
M German, Dutch and Scandinavian Languages -4.5 -5.3
M Italian -8.6 -9.7
M Iberian and Latin American Languages -12.2 -13.8
M Celtic Studies -9.8 -11.3
M English Language and Literature -10.9 -12.7
M Linguistics -7.9 -9.1
N Classics, Ancient History, Byzantine and Modern Greek Studies -8.8 -10.1
N Philosophy -11.9 -14.1
N Theology, Divinity and Religious Studies -9.6 -11.4
N History -8.4 -9.5
O Art and Design -13.9 -14.8
O History of Art, Architecture and Design -1.0 -1.0
O Drama, Dance and Performing Arts -2.8 -2.9
O Communication, Cultural and Media Studies 0.7 0.8
O Music -4.5 -4.6


[Graph: RAE 2008 — relation between 2009 and 2010 funding rates]
Most of the discrepancies are negative: the actual weight given to research outputs, in terms of funding, is less than was apparently intended by most of the assessment panels. Some of the discrepancies are very large indeed — more than 20 percentage points in the cases of Sociology and Anthropology, under the HEFCE funding rates that will be applied in 2010.

Click on the image for a graphical view of the relationship between the discrepancies for 2009 (funding rates 7:3:1:0:0 for the five RAE quality levels) and 2010 (funding rates 9:3:1:0:0).

\begin{opinion}
In RAE 2008 the agreed and published weights were the result of much discussion and public consultation, most of which centred on the perceived relative importance of the three components (research outputs, research environment, esteem) in different research disciplines. The discrepancies that are evident here arise from the weighted averaging of three separate profiles without (it seems) careful consideration of the differences of distribution between them. In the case of funding, it’s (mainly) differences in the usage of the 4* quality level that matter: if 4* is a relatively rare assessment for research outputs but is much more common for research environment, for example, the upshot is that the quality of research outputs actually determines less of the funding than the published weights would imply.

It is to be hoped that measures will be put in place to rectify this in the forthcoming replacement for the RAE, the Research Excellence Framework. In particular, the much-debated weight of 25% for the proposed new “impact” part of the REF assessment might actually turn out to be appreciably more if we’re not careful (the example of Sociology, see above, should be enough to emphasise this point).
\end{opinion}

Acknowledgement
The calculation done here was suggested to me by my friend Bernard Silverman, and indeed he did the same calculation independently himself (for the 2009 funding formula) and got the same results. The opinion expressed above is mine, not necessarily shared by Bernard.

© David Firth, February 2010

To cite this entry:
Firth, D (2010). RAE 2008: How much weight did research outputs actually get? Weblog entry, University of Warwick, UK; at URL http://blogs.warwick.ac.uk/davidfirth/entry/rae_2008_how/.


November 25, 2009

RAE 2008: Assessed quality of research in different disciplines

[Graph: RAE 2008 aggregate quality assessments, by discipline (click to view at 1295 x 788 pixels)]
This graph was drawn with the help of my daughter Kathryn on her “take your daughter to work” day in Year 10 at school. Her skill with spreadsheet programs was invaluable!

The graph shows how different disciplines — that is, different RAE “sub-panels” or “units of assessment” — emerged in terms of their average research quality as assessed in RAE 2008. The main data used to make the graph are the overall (rounded) quality profiles and submission-size data that were published in December 2008. Those published quality profiles were the basis (in March 2009) of ‘QR’ funding allocations made for 2009-10 by HEFCE to universities and other institutions.

Each bar in the graph represents one academic discipline (as defined by the remit of an RAE sub-panel). The blue and pink colouring shows how the sub-panels were organised into 15 RAE “main panels”. A key task of the main panels was to try to ensure comparability between the assessments made for different disciplines. Disciplines within the first seven main panels are the so-called “STEM” (Science, Technology, Engineering and Mathematics) subjects.

The height of each bar is calculated as the average, over all “full-time equivalent” (FTE) researchers whose work was submitted to the RAE, of a “quality score” calculated directly from the published RAE profile of each researcher’s department. The quality score for members of department d is calculated as a weighted sum

w_4 p_{4d} + w_3 p_{3d} + w_2 p_{2d} + w_1 p_{1d} + w_0 p_{0d},

where the p’s represent the department’s RAE profile and the w’s are suitably defined weights (with w_4 ≥ w_3 ≥ \cdots ≥ w_0). The particular weights used in constructing such a quality score are rather arbitrary; here I have used 7:3:1:0:0, the same weights that were used in HEFCE’s 2009 funding allocation, but it would not make very much difference, for the purpose of drawing this graph to compare whole disciplines, to use something else such as 4:3:2:1:0.


Example: for a department whose RAE profile is

            4*   3*   2*   1*   0*
          0.10 0.25 0.30 0.30 0.05

the quality score assigned to each submitted researcher is

(7 \times 0.10) + (3\times 0.25) + (1 \times 0.30) = 1.75.
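The same arithmetic, as a one-line check in R; the bar heights in the graph are then FTE-weighted averages of such departmental scores over the departments in each discipline (e.g. via weighted.mean(scores, fte)):

    ## Quality score for the example profile, using the 7:3:1:0:0 weights
    weights <- c(7, 3, 1, 0, 0)
    profile <- c(0.10, 0.25, 0.30, 0.30, 0.05)
    sum(weights * profile)   # 1.75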


The average quality score over the whole RAE was about 2.6 (the green line in the graph).

The graph shows a fair amount of variation between disciplines, both within and between main panels of the RAE. The differences may of course reflect, at least to some extent, genuine differences in the perceived quality of research in different disciplines; the top-level RAE assessment criteria were the same for all disciplines, so in principle this kind of comparison between disciplines might be made (although in practice the verbally described criteria would surely be interpreted differently by assessors in different disciplines). However, it does appear that some main panels were appreciably tougher in their assessments than others. Even within main panels it looks as though the assessments of different sub-panels might not really be comparable. (On this last point, main panel F even went so far as to make an explicit comment in the minutes of its final meeting in October 2008 (available in this zip file from the RAE website): noting the discrepancy between assessments for Computer Science and Informatics and for the other three disciplines under its remit, Panel F minuted that ”...this discrepancy should not be taken to be an indication of the relative strengths of the subfields in the institutions where comparisons are possible.” I have not checked the minutes of other panels for similar statements.)

Note that, although the HEFCE funding weights were used in constructing the scores that are summarized here, the relative funding rates for different disciplines cannot be read straightforwardly from the above graph. This is because HEFCE took special measures to protect the research funding to disciplines in the “STEM” group. Within the non-STEM group of disciplines, relative heights in the above graph equate to relative funding rates; the same applies also within each main panel among the STEM disciplines. (On this last point, and in relation to the discrepancy minuted by Panel F as mentioned above: Panel F also formally minuted its hope that the discrepancy would not adversely affect the QR funding allocated to Pure Maths, Applied Maths, and Statistics & Operational Research. But, perhaps unsurprisingly, that expression of hope from Panel F was ignored in the actual funding formula!)

\begin{opinion}
HEFCE relies heavily on the notion that assessment panels are able to regulate each other’s behaviour, so as to arrive at assessments which allow disciplines to be compared (for funding purposes at least, and perhaps for other purposes as well). This strikes me as wishful thinking, at best! By allowing the relative funding of different disciplines to follow quality scores so directly, HEFCE has created a simple game in which the clear incentive for the assessors in any given discipline is to make their own scores as high as they can get away with. The most successful assessment panel, certainly in the eyes of their own colleagues, is not the one that makes the best job of assessing quality faithfully, but the one with the highest scores at the end! This seems an absurd way to set up such an expensive, and potentially valuable, research assessment exercise. Unfortunately in the current plans for the REF (Research Excellence Framework, the RAE’s successor) there is little or no evidence that HEFCE has a solution to this problem. The REF pilot study seems to have concluded, as perhaps expected, that routinely generated “bibliometric” measures cannot be used at all reliably for such inter-discipline comparisons.

Since I don’t have an alternative solution to offer, I strongly favour the de-coupling of allocation of funds between disciplines from research quality assessment. If the Government or HEFCE wishes or needs to increase its funding for research in some disciplines at the expense of others, it ought to be for a good and clear reason; research assessment panels will inevitably vary in their interpretation of the assessment criteria and in their scrupulousness, and such variation should not be any part of the reason.
\end{opinion}

© David Firth, November 2009

To cite this entry:
Firth, D (2009). RAE 2008: Assessed quality of research in different disciplines. Weblog entry, University of Warwick, UK; at URL http://blogs.warwick.ac.uk/davidfirth/entry/rae_2008_assessed/.


Let's look at the figures! My first stab at doing a blog…

This blog’s title Let’s Look at the Figures is inspired by the book of the same name written by Bartholomew and Bassett (Penguin, 1971) — highly recommended if you can find a copy.

I had the pleasure of being a member of one of the assessment sub-panels for RAE 2008, the “Research Assessment Exercise” for UK universities. I’m going to start things off here by posting a few bits and pieces of simple data analysis related to that. (It should be stressed that all of the data used here are in the public domain; none of what appears here is privileged information arising from my RAE sub-panel membership.)

After that, who knows what else?


Disclaimer


Everything that I write or show here will be correct to the best of my knowledge. But I do sometimes make mistakes, and any reader needs to be aware of this. I give no guarantee that the data and/or analyses presented here are correct, nor any guarantee of fitness for any purpose whatsoever. This means that if you want to use what’s here for a purpose that really matters to you, you should check the figures for yourself first! If you do not accept this disclaimer you absolutely must not use anything that is written in this blog.

Any opinions expressed here are my own, not necessarily shared by others at the University of Warwick.
