November 08, 2010
WRAP Book Club: Entry 1
Writing about web page http://liber.library.uu.nl/
This entry is the first in a (hopefully long) series of entries on interesting articles and books in and around the institutional repository sector. I'm starting with an area that I am still struggling with as a Repository Manager: benchmarking. In a recent interview with our Knowledge Centre I was asked where WRAP sits in terms of the 'repository scene', and I couldn't really give an answer that I was happy with. Added to the fact that benchmarking is something I have looked at again and again in the past few months, this made the article I found in the recent issue of Liber Quarterly particularly timely:
- Cassella, Maria (2010). Institutional repositories: an internal and external perspective on the value of IRs for researchers' communities. Liber Quarterly, 20(2), pp. 210-225.
A perennial problem for repository managers is that all repositories are slightly different, in what they collect, how they are administered and what they do with their content once they have it. This makes it very difficult to hold one repository up against another and work out exactly how to rank one above the other in terms of performance. So far the best that can be done is to locate a single repository on a sliding scale of similarity and to benchmark it against itself. Maria Cassella's article introduces a set of fourteen internal indicators and a further three external indicators to try to formalise the process of repository benchmarking. These indicators are based on and rooted in Kaplan and Norton's 'Balanced Scorecard' (BSC) methodology, which is already used in various measures of library performance.
The measures proposed by Cassella fall into four broad categories:
- The Customer Perspective;
- The Internal Perspective;
- The Financial Perspective and
- The Innovation and Learning Perspective.
These performance indicators are intended to allow repository managers to "align their repository strategies with the institutional mission and goals and to identify priorities in performance measurement" (Cassella, p. 214). Many of the PIs suggested are things that, I'm sure, many repository managers are already recording: who is depositing into the repository, at what rate, and the levels of external use those items are getting. Some of the suggested PIs are very interesting. For example, she suggests that the number of 'value-added' services offered should be used as a measure, which fits in well with the prevailing trends in the repository world, where the 'value-added' is increasingly seen as at least as important as, if not more important than, the core functions of visibility and preservation. I found two of the 'financial' indicators very interesting: recently the 'cost per deposit' has become a focus of much discussion, but Cassella goes one step further and suggests that another measure worth keeping is the 'cost per download'. This, she states, will allow repository managers to "evaluate the scholarly efficiency of repository collections" (Cassella, p. 219), though she does allow that recording accurate sets of statistics has long been a problem.
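Just to make the two financial indicators concrete, here is a minimal sketch of how they might be calculated. All of the figures and names below are invented for the sake of illustration; they are not drawn from the article or from WRAP's actual costs:

```python
# Sketch of Cassella's two 'financial' indicators.
# All figures below are invented examples, not real repository data.

def cost_per_deposit(annual_running_cost, deposits_per_year):
    """Annual running cost divided by the number of new deposits."""
    return annual_running_cost / deposits_per_year

def cost_per_download(annual_running_cost, downloads_per_year):
    """Annual running cost divided by recorded full-text downloads."""
    return annual_running_cost / downloads_per_year

# Hypothetical figures for a mid-sized repository:
cost = 50000.0      # annual running cost (invented)
deposits = 1250     # new full-text deposits in the year (invented)
downloads = 200000  # full-text downloads in the year (invented)

print(cost_per_deposit(cost, deposits))    # cost per deposit
print(cost_per_download(cost, downloads))  # cost per download
```

The arithmetic is trivial, of course; the hard part, as Cassella notes, is getting the download figures to be accurate in the first place.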
Some of the measures suggested, disappointingly, assume you are using a certain type of software. For example, a measure of the 'number of active collections' in a repository is going to be impossible to record at a level lower than department for many people using the standard EPrints setup, as we are doing with WRAP. Also, some of the external measures are not quite as developed as I would have liked. For example, Cassella suggests that one of the external measures should be "Interoperability", which I agree is an important measure, but she never quite articulates exactly what she means by interoperability, or whether it should be an active or a passive measure.
Overall I found the article very thought-provoking on the issue of benchmarking, and I plan to add some of the measures into my regular statistics collection, but there is definitely more work waiting to be done in this area.