February 10, 2013

Is my book the most highly cited in its field?

To answer this, you need data on how many citations there are to your book and to others in your field. There are two sources of citation data for books that I know of:

  1. Thomson Reuters' Book Citation Index. Not everyone will have access to this, of course, as it's a subscription product.
  2. Google Scholar: this is available to everyone and is the source I've investigated.

A simple search for your book on Google Scholar will tell you how many citations there are. Note that Google Scholar does try to collate records for all versions of your book, but for books available in many editions and reprints it might not be entirely successful at this!
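
If you wanted to script this lookup for a handful of titles, the third-party Python package "scholarly" can query Google Scholar. Here is a minimal sketch; the interface shown (search_pubs, num_citations) is my assumption about that package, so check its own documentation, and note that automated querying may be against Google Scholar's terms of service.

```python
# A minimal sketch using the third-party "scholarly" package (unaffiliated
# with Google). search_pubs and num_citations are assumptions about its
# interface: check the package's own documentation before relying on this.
from scholarly import scholarly

results = scholarly.search_pubs("An Example Book Title")  # hypothetical title
top_hit = next(results)

print(top_hit["bib"].get("title"))
print("Cited by:", top_hit.get("num_citations"))
```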

Next, how do you know if your book is the MOST highly cited in your field? It's impossible to tell for certain, but a good clue is to investigate the "related articles" link in the results of the search that brought you data about your book. This will find items that are similar to yours, and therefore likely to be in your field.

Within that list, there will be journal articles and citations as well as books. You can look through the results and spot the books quite easily: look at how many times they have been cited. If any are more highly cited than yours, then you know that your book can't be the most highly cited in your field, at least as far as Google Scholar is concerned. Whether or not you choose to trust its citation data is a separate matter!

If none of those books comes anywhere near your citation count, then there is a good chance that your book is one of the most highly cited in your field. You probably know some of the competitor books to yours: try searching for them on Google Scholar too, to check.

If you don't already know the competitor books in your field, then I recommend looking at the record for your book on the COPAC union catalogue, and clicking on the subject heading links within that record to find books in the same subject category.

Best of luck!


February 04, 2013

Measures of journal article quality?

Writing about web page http://techcrunch.com/2013/02/03/the-future-of-the-scientific-journal-industry/

The TechCrunch blog post linked to above is by the founder of Academia.edu, and it discusses the possible contribution that journal article metrics could make to academic publishing.

In order to interpret readership metrics provided by sites and services like the three mentioned in that post (Academia.edu, ResearchGate and Mendeley), researchers should ask: "what is the level and quality of activity on these sites?" My experience is that there are a lot of students amongst those advertised "researcher" numbers. Students can be readers too, of course, but we need to be clear about what the metrics are actually telling us. Activity and membership vary from one site to another and from one discipline to another, so researchers would need to investigate for themselves. And if you have to investigate and interpret such metrics for yourself, then you're not going to be entirely comfortable with others using them to make judgements about the quality of your work!

My previous blog post was about publishers who display reader metrics. I wish I had time to investigate them some more!

Mendeley's metrics used to be available for others to use through an API, as ImpactStory (formerly Total-Impact) was doing. That seems to me the most useful model for researchers: they can then follow readership metrics for their papers across all locations. In my opinion, collated stats are great for researchers to track which activity affects their readership numbers most: their paper featuring on a mate's Twitter feed, or on professor X's blog, or being delivered at a conference.
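
To illustrate that collation model, here is a minimal sketch of pulling reader counts for one paper from several sources and totalling them. Every endpoint and JSON field below is hypothetical: each real service has its own API, authentication and terms.

```python
# A minimal sketch of collating readership counts for one paper from several
# sources. All endpoints and the "readers" JSON field are hypothetical.
import requests

DOI = "10.1234/example.5678"  # hypothetical DOI

SOURCES = {
    "reference manager": f"https://api.refmanager.example.com/catalog?doi={DOI}",
    "repository": f"https://repo.example.ac.uk/stats?doi={DOI}",
    "publisher": f"https://journal.example.org/metrics?doi={DOI}",
}

counts = {}
for name, url in SOURCES.items():
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    counts[name] = response.json().get("readers", 0)

for name, readers in counts.items():
    print(f"{name}: {readers}")
print("Total readership across sources:", sum(counts.values()))
```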

But are reader numbers going to lead to a new way of assessing a journal article's quality? They would need to be available from all sources where the article is displayed: publishers, repositories and networking sites would all need to count reader accesses in the same way, and share their data publicly, so that they can be collated and displayed in a reliable and consistent way. They would need to become trusted and used by the researchers themselves. That is going to take a lot of time and effort, I believe, if all the discussion about citation metrics and altmetrics that I've seen is anything to go by.

January 25, 2013

Sharing metrics relating to articles

Writing about web page http://altmetrics.org/manifesto/

This time last year, PLoS started to display even more article-level metrics. I was intrigued by the openness about article downloads on PLoS, and it led me to wonder whether other publishers were sharing such information so publicly.

Repository tools are available to display accesses for articles in the repository, as at the Bath University repository in this record of a journal article. Some repositories have chosen not to make such statistics publicly visible, however.

I don't see many publishers publicly displaying article-level metrics like this, but publishers do sometimes showcase their most downloaded content publicly on their websites. For example, Springer journal home pages display their most downloaded articles: see the journal Artificial Intelligence.

Other publishers share download statistics with their authors, without publicly displaying them on article records. For example, Nature describe this as a benefit to their authors.

I'd be glad to hear of other examples of publishers displaying these download statistics. I think that authors should be able to monitor activity around their papers for themselves, and I wonder if there is a role for such statistics in helping our readers to ascertain the highest quality papers.

However, I am slightly cautious about download statistics being publicly visible, because there are often many versions of a journal article available: on the author's website, on the publisher's website and in a repository, for example. I think that all of these versions should be available, because this provides insurance and archival possibilities for authors, as well as additional discovery and access options for readers; but a focus on download statistics could lead authors to become wary of sharing their articles in so many places.

The Internet allows us to create lots of metrics about a researcher or a work, as evidenced by all the altmetrics activity (see my link, above). But just because a measure exists, should it be publicly visible? How should we use these new measures? My own answer is "with caution", and indeed I think that the author is best placed to make use of download statistics, because the author will best understand what they mean.


January 08, 2013

Some Twitter tips for the New Year

I've heard it said that:

You get the Twitter feed that you deserve!

The key to using Twitter effectively is to know who you want to listen to and be in discussions with. There is nothing inherently frivolous about Twitter itself; it's just that you need to be brief, and that can lead to spontaneity and frivolity. Equally, you can spend a long time crafting the perfect 140-character tweet to express your idea as briefly as possible.

Twitter is a great way to get a summary or overview of what's going on in your field, if you follow people who do craft their tweets carefully. Twitter is not only a great way to listen to those people but also to interact with them: you can publicly tweet at people who you want to reach, and you can send direct messages to people who follow you, for a private conversation.

If you can't find the right people then you could always start tweeting on your topic yourself, and others will find you. It's worth investigating the profiles of people who follow you on Twitter, to see if you might want to follow them back.

And if you find you're not following the right people after all, you can always clear out your Twitter feed and unfollow people here or there. It's up to you to create and curate your own experience of Twitter!


December 12, 2012

Where is my most up to date profile?

Writing about web page http://scholar.google.co.uk/citations

I have blogged about author profile sites a number of times. I've investigated quite a few of them, but I've not properly invested in my own profile on any of them. So, as a totally unscientific measure of the ease and usability of the many sites where I could have put my profile, I thought I would look at my own profile on each of these sites to see which one is the most complete, in terms of listing the publications I have authored!

And the winner is: Google Scholar! http://scholar.google.co.uk/citations

The truly surprising thing here is that this is the profile site I have invested least in. I haven't even blogged about it properly!


December 05, 2012

What do publishers do for authors?

Is there an advantage to setting up your own journal, or publishing your work online yourself? What do journal publishers actually do for authors? Since RCUK-funded authors are soon to be paying large sums of money for OA publication of their articles, where is the value for that spend? This piece explores a little of what publishers do.

The Finch report has highlighted the need for publishers to be able to continue to invest in publishing innovations. On page 51, it states that

Access on its own does not necessarily make for effective communication.

and on p95 it says that

Quality assurance through peer review coupled with the wide range of discovery, navigation, linking and related services provided by publishers... are of critical importance to both authors and users of research publications.

Back in 1997, Fytton Rowland described four functions of a scholarly journal:

  1. dissemination - publishing and marketing activity.
  2. quality - this is where editorial, peer review and quality assurance come in.
  3. canonical version - a work that others can refer to. Involves archiving, issuing DOIs and ISSNs, etc.
  4. recognition & credit for the authors.

In my view, the recognition authors want is quite often tied to the dissemination and quality activity. If your peers don't know about your article (the dissemination hasn't been good enough), then the recognition and credit can't follow. If the journal you are published in is not one of the high quality ones, then it follows that the audience and recognition you might get for being published there might be less, although if your work itself is of high quality then it might help to raise the perceived quality of the publication.

Authors have told me that they want the following things from a publisher:

  1. Edit and improve their work.
  2. Bestow prestige on their work.
  3. Publicise their work & bring them an audience. The audience they want might be scholars or a broader reach, leading to "impact".
  4. Protect their work against plagiarism.
  5. Provide a perpetual record of their work.
  6. Money: probably more applicable to book deals, but for journals, at least the author won't want it to cost them a huge amount to publish.
  7. Timeliness: some authors want their work published as soon as possible.

I daresay that the list could grow a lot longer for some and be shorter for others, but essentially authors often have to balance their needs when choosing where to publish.

Earlier this year (2012), Jason Priem described a "de-coupled journal", and how the journal system could be reformed to provide the essential functions of:

  • archiving: relates to "canonical version" in Rowland's list above.
  • registration: relates to "recognition", above.
  • dissemination: also mentioned above.
  • certification: relates to the quality function, above.

The concept of a de-coupled journal is one where there is more variety in how each of the different functions is provided, so that they might not all come from the publisher. Eg archiving might be shared with repositories, which store a preservation copy, and dissemination activity can be carried out by authors themselves. The online environment brings a variety of channels and services that authors can use, beyond the traditional publishing system.

I wanted to explore more of what publishers do:

Filter for quality: co-ordinating the peer review process

Editors provide one layer of a quality filter, and then the peer reviewers provide the next level. Editors and peer reviewers refine and polish articles for publication, so they also enhance articles in terms of their quality.

Managing a journal and co-ordinating the quality process is no small task, even when the peer reviewers and editors work for free. Authors need instructions; editors benefit from tracking tools to monitor where peer reviewers are in the process, and to chase them. Copyediting and proofreading tasks need to be carried out. Digital media or associated data might also need corrections and modifications to the way they display.

There are lots of experiments with the peer review process:

Is there a role for more post-publication peer review? F1000, for example, offers this. Openly accessible science might need more peer review than science that is shared only within the academic sphere, where researchers are able to assess quality for themselves owing to their expertise, whilst members of the public and amateur experts might be less able to assess the quality of the articles they find.

Many journals publish articles with a comments field at the bottom, rather like on blogs, but relatively few articles attract worthwhile comments. Journals (eg PLoS ONE) sometimes publish information on downloads, "tweets" and "likes" for their articles, so that readers can use those measures as post-publication quality markers, too.

Alternatively, peer review could take place even before an author submits an article: American Journal Experts offer a pre-submission peer review service, for a fee. If you have the money to spend, and the process is indeed rigorous and helpful, it could save you time: they promise turnaround times of days.

Dealing with ethical concerns

Before publication, ethical concerns could be said to be part of the quality filtering process. At this stage, publishers:

  • issue instructions to authors,
  • use editors and peer review to screen articles,
  • require authors to sign agreements.

Editors need to be experienced and knowledgeable in their field to identify ethical concerns. Scientific "misconduct" is not defined in exact terms, and practices might vary. Ethical considerations might include ensuring that:

  • the work of others is properly acknowledged, credited and referenced.
  • data is accurate, preserved and accessible, as appropriate.
  • the article is complete and its publication well timed (eg results are not shared prematurely).
  • co-authorship is properly attributed.
  • confidentiality is respected and maintained.

Publishers are not the only filter for ethical considerations, of course: such issues are included in grant proposals to research funders and the process by which they are reviewed. Institutions might have ethical review panels to approve grant proposals even before they are submitted to the research funders.

After publication, publishers might use retractions or corrections to deal with ethical concerns. This is perhaps more of a service to readers than to authors, but it does help to maintain a journal's prestige if ethical matters are dealt with professionally.

ALPSP's Learned Publishing journal from April 2011 features an article about ethical considerations. Advice from the Committee on Publication Ethics (COPE) is particularly useful and well presented, with flowcharts.

Dissemination & discoverability

An earlier guest post on this blog, by Yvonne Budden, describes the importance of metadata to resource discovery. By providing good quality metadata, publishers are bringing readers to the article you have written, and helping you to find articles that you should be reading.

Search Engine Optimisation seems to me a "dark art", but it is important for scholarly articles to be discoverable through Google and Google Scholar: that's where a lot of researchers will be looking for stuff.
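
One concrete piece of that discoverability work is the Highwire-style meta tags (citation_title, citation_author, citation_publication_date) that Google Scholar's inclusion guidelines ask publishers to include on article pages. Here is a minimal sketch, against a hypothetical article URL, of checking whether a page exposes them:

```python
# A minimal sketch: check an article landing page for the Highwire-style meta
# tags that Google Scholar's inclusion guidelines describe. URL is hypothetical.
from html.parser import HTMLParser
import urllib.request

WANTED = {"citation_title", "citation_author", "citation_publication_date"}

class MetaTagCollector(HTMLParser):
    """Collects the <meta name="..." content="..."> tags we care about."""
    def __init__(self):
        super().__init__()
        self.found = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            if attrs.get("name") in WANTED:
                self.found[attrs["name"]] = attrs.get("content", "")

url = "https://journal.example.org/article/123"  # hypothetical article page
html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")

collector = MetaTagCollector()
collector.feed(html)

for tag in sorted(WANTED):
    print(tag, "->", collector.found.get(tag, "MISSING"))
```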

Some publishers are huge and they build and market their own discovery platforms for scholarly articles. Other publishers ensure that their content is indexed in others' discovery environments. Most publishers offer table of contents alerts.

Publishers have staff dedicated to marketing and sales, helping to ensure that their work reaches key target audiences. Perhaps in an Author-pays OA world, sales staff will be selling the services on offer to authors rather than the services offered to subscribers and readers. Marketing staff will be building the prestige of the publisher and journal brands.

Journal publishers should monitor the audiences for their publications and ensure that their material is discoverable in the places where people are looking for it, in the way(s) that they like to search.

International copyright protection?

In my view, authors are concerned that others should not copy their work without attribution, but this is more a question of plagiarism: I don't think they mind about the actual copying, so long as they are credited. Under the RCUK policy on Open Access, the articles that they pay Gold OA fees for should be made available for others to copy for any purpose, as long as the work is properly attributed, using the so-called CC-BY licence. With such a licence, the copyright is not something to be protected.

I'm also not sure to what extent publishers pursue copyright internationally when they own it and don't license copying, and I expect practice varies between publishers and from one nation to the next. After all, copyright law itself varies on an international scale. So I'm leaving my big question mark in the heading of this section!

Award schemes that they run or sponsor

See my earlier blog post on Journal awards for examples of the kinds of award schemes that publishers might offer... or indeed put their journals forward for.

Awards act as a route to recognition for authors, but also as a way of building a journal's prestige, when they are made at title level and come from an external and prestigious source.

Open Access repository deposit

Research which has been funded by the Wellcome Trust must have its outputs deposited into PubMed Central. Authors who pay a fee for the Gold Open Access route, which the Wellcome Trust will cover, can have publishers make this deposit on their behalf.

Publishers sometimes also allow authors to make deposits themselves. The Sherpa RoMEO tool makes it easy to look up publishers' policies on repository deposit by authors, although authors really ought to keep copies of the agreements they sign with publishers, as these will be the legally binding expectations, rather than the publisher's latest policy.

Summary

In summary then, it seems to me that publishers should be doing the following things for authors:

  • co-ordinate the editorial and peer review process to filter for quality and also polish works.
  • provide instructions and support to authors, peer reviewers and editors.
  • build the reputation and prestige of their titles through professional handling of ethical concerns.
  • provide quality metadata to the right search tools.
  • ensure that their content is easily discoverable on the web via search engines.
  • measure downloads and activity around articles: this could be used to enhance their dissemination activity but could also be used as a further mark of quality if displayed to readers.
  • adapt to the OA and copyright needs of researchers as authors and readers.
  • provide authors with clear agreements and keep Sherpa RoMEO's records up to date.
  • offer awards and put their journals forward for awards, by way of offering recognition for authors and building prestige for their journals.
  • invest in publishing innovations... which could be around any of the themes above.

It's quite daunting to think of setting up a journal and doing all this yourself. Do leave a comment and let me know all the things I've missed out!


November 08, 2012

"Just About" New YouTube channel of our tips for researchers

We are creating short video clips of the best tips we give to researchers in our information skills workshops, on literature searching and disseminating your research. The series is called "Just About" as the clips are about 3 minutes long and each one is about one particular tip.


October 09, 2012

Which index measures your research?

I recently compiled a little list of measures like the h-index, intended to measure the performance of individual authors. Have I missed out your favourite(s)? Which one(s) do you like and why? How should they be used? (Or not used!)

Personally, I like the h-index best because it is well established and relatively simple to understand. However, it needs to be expressed along with the date and the data source used. Any of these measures ought to be presented along with some explanation and/or examples of well known researchers' scores, to give it context. Sometimes, researchers are asked for their h-index, but if their g-index or m-index score is more impressive, then why not give that too?

h-index – an author with an index of h has published h papers each of which has been cited in other papers at least h times: there’s a great Wikipedia article about it. http://en.wikipedia.org/wiki/H-index

g-index – while the top h papers can have many more citations than the h-index would suggest, the g-index is the highest number g of papers that together received g squared or more citations. This means that the g-index score will be at least as high as, and usually higher than, the h-index. (I found this explanation at: http://www.researchtrends.com/issue1-september-2007/from-h-to-g/ )

m-index (aka m-quotient) – h/n, where n is the number of years since the scientist's first published paper. (Supposedly handy for differentiating between authors of different vintage in the same discipline.)
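
To make these first three definitions concrete, here is a minimal sketch that computes them from a list of per-paper citation counts. It is purely illustrative: the real services each apply their own data sources and edge-case rules.

```python
# A minimal sketch of the h-index, g-index and m-quotient, computed from a
# list of per-paper citation counts. Purely illustrative.

def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

def g_index(citations):
    """Largest g such that the top g papers together have >= g*g citations."""
    ranked = sorted(citations, reverse=True)
    running, g = 0, 0
    for rank, c in enumerate(ranked, start=1):
        running += c
        if running >= rank * rank:
            g = rank
    return g

def m_quotient(citations, years_since_first_paper):
    """h-index divided by the number of years since the first paper."""
    return h_index(citations) / years_since_first_paper

papers = [25, 19, 12, 8, 6, 3, 1, 0]  # hypothetical citation counts
print(h_index(papers))         # 5: five papers have at least 5 citations each
print(g_index(papers))         # 8: all 74 citations together >= 8 * 8 = 64
print(m_quotient(papers, 10))  # 0.5, for a 10-year career
```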

contemporary h-index –where younger papers accrue higher weightings for each citation, as calculated (and documented) on Publish or Perish (http://www.harzing.com/pophelp/metrics.htm#hcindex).

hI-index – This takes account of co-authorship; also documented on Publish or Perish.

hI, norm index – see Publish or Perish

hm-index – see Publish or Perish

AWCR - see Publish or Perish

AWCRpA - see Publish or Perish

AW-index - see Publish or Perish

i10-index – Google Scholar “My Citations” gives me this score: “i10-index is the number of publications with at least 10 citations. The second column has the "recent" version of this metric which is the number of publications that have received at least 10 new citations in the last 5 years.”
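
That quoted definition translates directly into code; continuing the earlier sketch:

```python
# i10-index, following Google Scholar's stated definition quoted above:
# the number of publications with at least 10 citations.
def i10_index(citations):
    return sum(1 for c in citations if c >= 10)

print(i10_index([25, 19, 12, 8, 6, 3, 1, 0]))  # 3 papers have >= 10 citations
```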

n-index – a researcher's h-index divided by the highest h-index of the journals of his/her major field of study (n is the first letter of Namazi). Proposed in an article at: http://dx.doi.org/10.4103/0378-6323.62960

A-index – the average number of citations in the article set that makes up the Hirsch core. (The Hirsch core is the set of articles whose citation scores count towards the h-index score.) The muddled explanation is mine, but you can read about it properly at: http://eprints.rclis.org/bitstream/10760/13282/1/hIndexReviewAlonsoCabrerizoHerrera-Viedma.pdf (an excellent article reviewing these measures, published in 2009).

R-index – the square root of the sum of citations in the Hirsch core (described in the same article linked above!)

m-index – (yes, this looks like another type of m-index entirely, which probably explains why the measure listed above is also known as the m-quotient) the median number of citations received by papers in the Hirsch core. See the article linked from my A-index explanation.
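
Continuing the earlier sketch, the A-, R- and (median) m-indices are then simple statistics over the Hirsch core, ie the top h papers:

```python
# A minimal sketch of the A-index, R-index and median m-index as statistics
# over the Hirsch core (the h most-cited papers). Purely illustrative.
import math
import statistics

def hirsch_core(citations):
    """The h most-cited papers, where h is the h-index."""
    ranked = sorted(citations, reverse=True)
    h = sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)
    return ranked[:h]

def a_index(citations):
    """Mean number of citations per paper in the Hirsch core."""
    core = hirsch_core(citations)
    return sum(core) / len(core)

def r_index(citations):
    """Square root of the total citations in the Hirsch core."""
    return math.sqrt(sum(hirsch_core(citations)))

def median_m_index(citations):
    """Median number of citations of papers in the Hirsch core."""
    return statistics.median(hirsch_core(citations))

papers = [25, 19, 12, 8, 6, 3, 1, 0]  # hypothetical counts; core is the top 5
print(a_index(papers))         # (25 + 19 + 12 + 8 + 6) / 5 = 14.0
print(r_index(papers))         # sqrt(70) ≈ 8.37
print(median_m_index(papers))  # 12
```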

NB there are many other measures explained in that article, but by this point I gave up trying to understand them! I quote from the conclusion of that paper instead:

…that many h-index variations, although being designed to overcome some of its supposed limitations, do indeed correlate quite heavily. This fact has made some researchers think that there is probably no need to introduce more h-index variations if it is not possible to prove that they are not redundant in real examples.

The article also concludes: “h-index is quite dependant on the database that it is used and that, in general, it is much more difficult to compute those indices using Google Scholar than ISI Web of Science or Scopus.”

Google Scholar Metrics uses different metrics again, which sound like the h-index, but these are really aimed at the publication level rather than the author: see http://scholar.google.com/intl/en/scholar/metrics.html#metrics. And indeed SCImago will give you an h-index for a publication, and others have calculated h-indices and variations of h-indices for departments and groups of authors.

So, which index has your vote?


September 12, 2012

UK research sector, publishing trends and facts from the Finch report

Writing about web page http://www.researchinfonet.org/publish/finch/

I have been tweeting all the "Finch report facts" that I found in this recent report on accessibility to research publications. This blog entry presents some of those "facts" in a discussion of what the Finch report seems to be saying about the UK research sector and publishing trends, looking at those "facts" plainly and bringing in other contexts and ideas.

I'm not commenting here on the recommendations of the Finch report, nor the debate about routes to open access, although I did pull together a Storify collection of reactions to the Finch report, in case you want to read more about those topics.

UK Researchers' productivity

The UK research sector has some particular characteristics. I tweeted:

Finch report fact p37: There are 250,000 researchers in the UK & p38 'their rate of productivity is more than 50% above world average'

This rather depends on how you measure productivity!

I also tweeted:

Finch report fact p37: UK is successful at research publications but 'relatively weak in producing other kinds of outputs such as patents'

So perhaps the productivity referred to is really all about publication activity, and I went back to the report to check where the productivity fact came from: it's in a paragraph all about the number of articles written by researchers, so it's likely, although not entirely clear, that the productivity referred to is about numbers of articles. A footnote against this particular fact also states that:

"It should be be noted that it is sometimes argued that high rates of research productivity in the leading research countries are achieved in part by establishing dependency cultures in other countries."

Have UK researchers achieved high publication rates due to multiple author collaborations? Possibly.

Why are UK researchers achieving high publication rates? Is it driven by RAE and REF processes?

The UK's measures of research performance have centred around research outputs, which might encourage UK researchers' productivity against this measure. Looking at the RAE 2008 data (Merit project), we can see that of the 222,177 outputs measured, 167,831 were journal articles. I'm rubbish at maths, but even I can tell that's about 75%. I expect that for the sciences, the percentage of journal articles among the outputs submitted for measurement is even higher.

Another couple of tweets, then:

Finch report fact p71: 120,000 articles by UK authors are published each year. According to p62, this is 6% of articles published worldwide

Finch report fact p62 'researchers in the UK comprise just over 4% of the global research community'...

So, UK researchers are publishing plenty of articles and contributing to scholarly knowledge worldwide on a larger scale than their numbers represent.

REF 2014 will be looking at impact as well as outputs, which brings a different dimension into the measurement compared with RAE 2008, and that might also affect UK researchers' activity in the future.

The potential effect of performance measurement mechanisms on actual performance is addressed in a RIN report on Communicating Knowledge from 2009, describing a bibliometric analysis of the outputs produced in 2003 and 2008 by a sample of authors who were included in those two RAEs. Amongst many other interesting findings, they reported a slight increase in the number of publications per author in 2008 compared to 2003, but a significant increase in the number of multiple-author works, which are multi-institutional and international. They did not find an apparent difference in citation behaviours between the two time periods. All very interesting!

In REF2014 the assessment panels for the science, technology and medicine subjects will have citation data provided to them. On UK researchers' citation scores, I tweeted:

Finch report fact p38: citations to UK articles increased between 2006 and 2010 by 7.2% a year, faster than the world average of 6.3%

and:

Finch report fact p38: UK’s “share of the top 1% of most-highly-cited papers was second only to the US, at 13.8% in 2010.”

Not only are our researchers producing lots of articles, they are also producing highly cited articles. There have been numerous studies and debates about the value of citations as a measure of the quality and influence of research papers (my own main reservation is the difference in disciplinary practices around citation), but at any rate there is plenty of citation activity and evident attention for UK authored articles, according to citation measures.

In agreement with the findings of that 2009 RIN report and the footnote on the earlier fact about UK researchers' productivity in terms of numbers of research articles, I also found in the Finch report:

Finch report fact p71 Nearly half (46%) of the peer reviewed articles with a UK author published in 2010 also listed an author from overseas

I believe that multiple authorship and involvement of overseas authors could be significant in achieving those high citation rates. The more collaborations and network contacts or reach that a researcher has, the more people will be aware of that author's work in terms of its findings but also its quality, and so the more likely the work is to be cited by those contacts or indeed their contacts in turn.

An international scale

UK researchers are operating on a world stage, of course. There are other facts in the Finch report that give some context to the UK researchers' performance. I didn't tweet this quote from page 38 because it was too long(!), but I find it very significant:

...part of the explanation for the UK’s success is that it attracts internationally-mobile researchers. UK researchers are also more likely than those in almost any other major research nation to collaborate with colleagues overseas...

Even though the UK researchers are publishing a lot, researchers from other countries are also publishing a lot:

Finch report fact p37 Rise in the no. of UK-authored articles has not been as fast as in very high growth countries such as India and Brazil

So I think that those collaborations and multi-authored articles are very significant, and that the international scale of research favours the UK because it is already known for its high quality research. I really think that this is key to UK "success" in the context of citations, because those collaborations and networks occur partly through the migration of internationally mobile researchers to the UK. It seems to me that international reach is a very important element of impact that UK research assessors should be interested in.

Meanwhile, according to the Finch report, the UK doesn't spend a great deal on research. Apparently, the UK ranked 16th for "research intensity" amongst OECD countries in an Elsevier report that is cited on page 38, in a footnote. In actual figures:

Finch report fact, p37: 28% of UK R&D is in HE Sector. UK is 'strongly dependent' on gov.t, charity & overseas funds ow.ly/c50pU...

Finch report fact p38: 09-10 UK total expenditure on R&D: £25.9bn of which £10.4bn from gov, £5.5bn of which from Research Councils & HEFCs

Perhaps the relatively high reliance on government and the HE sector to pay for our research is also part of the reason why the UK has been more successful at getting articles published than at producing patents and other kinds of research outputs.

Perhaps another reason why UK researchers are so much involved in publishing activity is that the UK is also a key player in the worldwide publishing industry:

Finch report fact, p15: UK publishers are responsible for 5000+ journal titles & 1/5 of articles published each year

The UK also seems to be playing an important role in the development of the online open access repositories landscape:

Finch report fact: US, Germany, & UK account for over 1/3 of repositories worldwide. There are 200+ UK repositories: 150 are institutional

And the UK publishes about 7% of open access journals:


Finch report fact, p32: Currently 7600+ open access journals listed in the DOAJ, from 117 countries: 533 in UK ow.ly/c4ZLa #oa


UK researchers do seem to have good access to published articles:

Finch report fact p47 93% of UK researchers had “easy or fairly easy access" to papers. Those without most often find a different item.

Finch report fact p48: Researchers are more likely to have problems accessing conference proceedings and monographs, than journal articles.

Although library expenditure in the UK is falling:

Finch report fact, p23: library expenditure in UK Unis fell from 3.3% to 2.7% as a proportion of total expenditure ow.ly/c4ZfC #oa

The Finch report also says on page 51 that "Access on its own does not necessarily make for effective communication" and although I know that the report is really referring to the role that publishers play in enhancing discoverability through their search platforms and other work, I also interpret it to mean that all those networks and collaborations of our authors are helping to ensure that they are building on the best research that is out there.


Publishing trends

Open access is one of the changes to publishing that has taken place in recent years, as the worldwide web has enabled online access to scholarly content. It's the main focus of the Finch report, so there are lots of facts relating to it! There are at least two routes to making content available on open access: the gold route, where authors pay a fee or "article processing charge" (APC) for the publisher to make the final version available to readers for free, and the green route, where authors' own copies are deposited into open access repositories, where readers can find them.

My first publishing trend "fact" is:

Finch report fact p39 in '09 OA journals accounted for 14% of articles published worldwide in medicine & biosciences, and 5% of engineering.

The report goes on to say that only 6-7% of articles published in 2009 were available in repositories. This makes it look as though repositories are not as successful a route to open access as OA journals. But the data is only for 2009, and only for limited subject areas. The report itself highlights that science, technology and medicine account for 2/3 of OA journals:


Finch report fact, p33: 2/3 of OA articles are published by 10% of publishers: STM account for 2/3 of journals ow.ly/c4ZV8 #oa


At this point it is worth referring to Stevan Harnad's blog post "Finch Fiasco in figures", because he has looked into all this in a much more scholarly way, and has a great graph (figure 6) showing the relative balance of green and gold open access availability of articles. It looks like he has very different data, but even in his graph the balance looks worst for green OA in the biomedical sciences, so the Finch report should also present data across all subjects, in the interest of objectivity.

On page 69, the Finch report suggests some reasons for the "low take-up of OA" in humanities and social sciences, and it seems clear to me from the reasons given that the report means the low take-up by publishers, ie that gold OA routes are not so readily available in these disciplines. The reasons suggested are: rate of publication and rate of rejection, length of articles, and the larger amount of material in a journal that is not an article and therefore would not bring in an article processing charge as income. Further, on p71 the Finch report refers to the tradition of the independent scholar remaining strong in the humanities: these researchers would have no mechanism through which to pay an APC.

Another trend that the Finch report refers to is the decline of the monograph:

Finch report fact: p44 refers to decline of the monograph as print runs have shrunk, prices have risen & UK libraries spend less on books.

I've already included the fact about the relative decline in expenditure on libraries in UK universities, and the Finch report also points out another difference that electronic format makes: VAT must be paid on electronic versions by libraries, whilst printed versions don't attract VAT. I know that many of the libraries I have worked at have had their book budgets squeezed by rising journal subscription costs over the years, so I don't doubt that the monograph is not what it was. But I believe that the research monograph carries as much research credibility as it ever did, even if it is not attracting the same revenues for publishers.

A conclusion?

After meandering through these "facts", I'm pleased to see that the UK research sector is publishing so much and attracting so much attention worldwide, relative to the amount of investment. I believe that we should keep up our international and collaborative efforts in order to sustain this, and we should also keep up our involvement in publishing activities, perhaps by investing in OA routes, as this makes access fairer for all. The Finch report recommends that the UK support gold OA publication: perhaps it will, as the RCUK policy seems to have followed this route.

Most of all, though, I'm interested in what researchers will do. They are making the decisions on where to publish what, with whom to co-author, whether to deposit in a repository, and all such things. The rest of us (publishers and librarians) are trying to respond to their need to communicate with each other, and to find out what others are working on.


August 20, 2012

Is the LinkedIn “appearances in search” metric of interest to an academic author?

Follow-up to Who is interested in my online profile? from Library Research Support

LinkedIn recently emailed me details of who is looking at my profile. It reminds me of a previous blog post that I wrote, about who’s looking at my profile online: I often wonder if academic authors might find it valuable to track who is interested in their work.

LinkedIn told me how many profile views there have been in the last three months, how many “appearances in search” there have been, and who has been looking at my profile. I can see why it would be relevant for academic authors to see the details of others who have been looking at their profile: these might be other academics in the same field, so watching this measure is a bit like seeing who wants to listen to you at a conference. If, indeed, LinkedIn is a conference that you are attending!

I wondered what “appearances in search” meant, and found an explanation in some LinkedIn Q&As: it is about my profile matching others’ search terms when they were not searching for my name specifically. Should academic authors be interested in this metric? I think probably not, and here is why!

I’m not 100% sure, but it seems to me that the “search” referred to must be the LinkedIn search box, on their own site. So these stats are also reflective of the amount of activity happening in LinkedIn. Since it’s not a dedicated, academic forum, our academics might not be too worried about LinkedIn activity.

If your discipline has some really active discussion groups on LinkedIn, or you wanted to generate interest in your work beyond the academic community and within the LinkedIn one (which is pretty large), then you might want to watch LinkedIn metrics more closely. You might want to see more of those search appearances being converted into profile views, as evidence that your work is relevant to that community, and as a channel to direct readers to your scholarly articles and other outputs. In order to do this, you would need to ensure that your profile describes your work accurately. But this is a good idea anyway, so I see no reason to pay attention to the number of “appearances in search”!

I blogged last time about Google AdWords, but I must have had a free preview or something, because I can’t find the same feature for free now. I often pop in to Google Analytics and FeedBurner to see who is looking at my blog, and I regularly look at the stats for the Library’s Support for Research pages; using these tools I can see who is looking at my site(s) and what keywords are bringing them there. These are far richer and more valuable to me than the LinkedIn stats, so I guess that they will be to academic authors, too.

But how nice of LinkedIn to send me the stats from time to time: it works for me as a reminder to update my profile!

