Library Research Support

April 30, 2013

Leaving Warwick today

Today is my final day at the University of Warwick. I've had a great time, and worked with some wonderful colleagues. I've learnt a lot and I'm going on to do some freelance writing work, and explore a bit of Europe.

It is the end of an era and the start of another, quite exciting one for me. I do intend to blog again, on WordPress: I'll be at http://jennydelasalle.wordpress.com/ should anyone wish to follow me there. Of course, my colleagues will make plans for this blog too, if you're interested in what Warwick's library is doing for researchers.

As the great Douglas Adams wrote "So long and thanks for all the fish!"


April 25, 2013

A highlight from UKSG: Altmetrics session with Paul Groth

Writing about web page http://www.slideshare.net/UKSG/2-groth-uksg2013-altmetricsstory

These are my notes from a very well presented session at UKSG. I've linked to the slideshare presentation, too.

You can't rank researchers or research, using altmetrics. (Yet, I'm thinking!)

Presence on social media does seem to correlate with other measures of performance, however: Birkholz et al's research indicates that researchers with a presence on LinkedIn have a higher h-index. (I googled for this. Closest I could find was: http://altmetrics.org/workshop2011/birkholz-v0/)

1 in 40 scholars are active on Twitter. (This comes from an article by Heather Piwowar in Nature: http://www.nature.com/nature/journal/v493/n7431/full/493159a.html )

Like me, Paul thinks that academics should use all kinds of metrics to tell a good story about their work. He spoke about citations, Mendeley scores, html accesses and F1000 recommendations as an example of what a researcher could include on a CV. The measures that matter most will vary according to discipline.

The challenge seems to be to tie together activity around a published paper, such as the author's blog post announcing it and the pre-print on arxiv, and then to present all the measures surrounding the whole activity for that output of the research.

Media stories often fail to cite or link to the original research.

The Journal of Ecology apparently now asks for a 140-character, tweetable abstract for each article it publishes. (Nice idea!)

In short, there are some really interesting things to keep an eye on, when it comes to altmetrics.


April 10, 2013

Tracking a twitter hashtag's activity

I'm just back from a librarians' and publishers' conference called UKSG, which used the Twitter hashtag #uksglive... and I thought I'd put that hashtag into an infographics tool called visual.ly.

Here is the poster it created for me:





Am I really interested in Naymz?

Follow-up to Which tools monitor your social media influence & impact? from Library Research Support

The quick answer, now that I've investigated a bit, is "no", but I would like to explain:

1) I don't want to maintain another professional profile there when I already invested in my LinkedIn profile, and all the people I could ever want to network with online are on LinkedIn but not on Naymz.

2) I had a look at my repscore on Naymz: the Leaderboard picks up on people who declare themselves as Jedi knights, so I can't take the network or the score too seriously.

3) The repscore dashboard would be interesting if I did use Naymz for networking, and if I connected all my relevant social network profiles to it. I can't actually do that, because I use some networking sites that Naymz doesn't monitor.

I could use Naymz to watch just Twitter and LinkedIn, and I could make some more effort to use and link up Facebook or other social networking sites that it does measure. What does it tell me about other networking sites? It gives me numbers for:

- Contacts: "The number of contacts/followers on this network". This is interesting, to see where I have the maximum potential reach. But it's not actual reach if the people I'm linked with on this network aren't active users of it and will never see my posts or activity there.

- Posts: "Your recent posts on this network". I'm not sure how recent: I don't recall ever posting on LinkedIn, yet it can find 9 posts. I post a lot on Twitter, but it can only find 35 posts.

- Replies: "Replies/Comments on your posts on this network". Since I don't make any posts on LinkedIn, I can't compare the level of interaction of my network members across these two networks, but it would be a potentially useful measure if I did want to compare.

- Likes: "Likes/Shares of your posts on this network". As with replies, this could indicate the actual reach of my presence on a network better than merely the number of contacts I have. I'm not sure how it counts them, though. By putting scores for all of my network profiles into one place, I could compare the networks and decide which one represented best value for my efforts.
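
The kind of comparison I have in mind could be sketched quite simply. The figures below are invented, not my real Naymz numbers, and "interactions per contact" is my own made-up measure, not one Naymz reports:

```python
# Hypothetical per-network figures of the kind a dashboard like Naymz reports.
networks = {
    "Twitter":  {"contacts": 500, "posts": 35, "replies": 40, "likes": 60},
    "LinkedIn": {"contacts": 200, "posts": 9,  "replies": 2,  "likes": 5},
}

def engagement_per_contact(stats):
    """Interactions (replies + likes) relative to potential reach (contacts)."""
    return (stats["replies"] + stats["likes"]) / stats["contacts"]

for name, stats in networks.items():
    print(f"{name}: {engagement_per_contact(stats):.3f} interactions per contact")
```

A ratio like this would suggest which network actually reaches active people, rather than which one merely lists the most contacts.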

The Naymz dashboard gives a percentage rank too, but as this only compares me with other Naymz members, in whom I have no particular interest, it's not so useful for me.

There are other measuring tools besides Naymz (I mentioned a couple in the blog post this follows on from), and they might count interactions in a way that you prefer, if you're looking for a tool to do this.

However, I think my main reason for not wanting to use Naymz or one of the similar tools is that I'm already convinced that Twitter is a good route to the people I want to reach. If I weren't sure that those people were active on Twitter, or if I wanted to reach more people who might prefer another networking site, then I'd be glad to use Naymz and its like to make comparisons.


April 02, 2013

Which tools monitor your social media influence & impact?

Twitter is the main social media tool that I would recommend to researchers, when it comes to influence and impact. Ideally, I think it should be used alongside blogging in some way: either your own blog if you want to build an online identity and audience for yourself, or as a guest on others' blogs. Guest blogging is a great way of benefitting from others' hard work in gaining audience!

If Twitter is my main social media tool, then any tool for measuring online influence and impact will need access to my Twitter account. A quick look at the "Apps" section of my Twitter settings reminds me of tools that I once thought might be of value to researchers for increasing, measuring and demonstrating the impact of their research. I've not had time to investigate these properly, but I thought it might be worth sharing which ones I'm interested in:

Naymz - "Manage and measure your reputation. Get rewarded!"

Klout - "Klout finds the most influential people on every topic"

Crowdbooster - "Measure and optimize your social media marketing."

I had a quick look back at these three and found that Crowdbooster now charges a fee: this might be worthwhile if it covers the social media channels that you use, though pricing varies with the number of channels monitored.

Naymz - wants to co-ordinate my Google account, Yahoo account, email, Twitter, Facebook and LinkedIn. These are big hitters but not specific to academia.

Klout - lots more options here than there were for Naymz, but none specifically academic.

There are actually lots of tools for measuring social media influence out there, but to find the right tool for you, you need to know what you want to measure. I'm interested in Twitter, website visitors and my blog, but not necessarily in combining the scores for them, since they serve different purposes. I do need to investigate more...

For those interested in reading more, this piece from Imperial College has a great summary and table comparing the tools available for measuring and monitoring, in terms of the social media sources they monitor:

http://research20atimperial.wordpress.com/optional-content/evaluation-tools/

There is no substitute for trying things out for yourself, though, and finding out not only which aspects of your social media activity can be monitored by which tools, but also how they produce their scores and what this means for your own work.


March 11, 2013

Things to do when you're leaving!

I shall be leaving the University of Warwick: my last day here will be on 30th April. Many researchers must go through the process of leaving an institution. What are the basic things that need to be done to make this a smooth transition?

Here are the things that I've been trying to put in order, as I prepare to leave Warwick:

1) Use my personal e-mail account properly - unsubscribe from all that junk mail that always clogs up the inbox and start creating meaningful folders there.

2) Use Evernote to forward useful e-mails and copy useful files to. Tags on all kinds of items in Evernote are more efficient than trying to replicate folder structures across e-mail, desktop and cloud-based document stores. But then again, duplication guards against loss and acts as a back-up, so I'll probably create an archive file from Outlook, too.

3) Transfer my infrequently accessed but useful online accounts to my personal e-mail address, because password reminders going to a dead e-mail account are of no use at all! Or make a note of the passwords, to avoid needing those reminders.

4) Add my Twitter handle to my signature on my work e-mails: this will stay with me even when my e-mail address changes, and so it's a way for people to find me later when they want to contact me and all they've got is an old e-mail I sent them from my Warwick account.

5) Update/complete my profile on LinkedIn: I haven't done this yet but intend to. If there is a hierarchy of social networking sites, then this one is at the top of my pile: then others can be updated in time, too.

6) Start a new blog: well, I created one over on WordPress but I haven't started blogging there yet. I plan to write a farewell post here and introduce my new blog at a later date. Of course, not every researcher uses their institution's proprietary blogging system, so not everyone will need to transfer in this way. Since I'll have blog content across two sites, I've been clipping my blog entries into Evernote, as a way of collating and curating them.

There's bound to be more that I need to do, but that'll do for starters!


March 08, 2013

Thomson Reuters RefScan

Thomson Reuters have recently released a new free application, RefScan, for iPhone and iPad. It is designed to provide a quick way to collect your references on the move. This is similar to a number of applications for mobiles and tablets that utilise the inbuilt camera as a scanner for QR codes or barcodes. There are a few other mobile reference tools available and they were discussed on this blog last year.

RefScan logo

The main feature of RefScan is its ability to scan the Digital Object Identifier (DOI) attached to a journal article and to search for the article. Once found, the reference for it can be saved into your EndNote Web account.

The process can be seen below:

The scanning process

The application also allows you to perform a basic search within the Web of Knowledge database, the results of which you can also directly import into EndNote Web. You can also add references manually.
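
RefScan's internals aren't public, but the DOI-scanning step amounts to spotting a DOI-shaped string in OCR output. As a rough sketch, a simplified regular expression for modern DOIs (the real rules allow more characters in the suffix than this pattern does):

```python
import re

# Approximate pattern: "10." + a 4-9 digit registrant code, a slash,
# then a suffix of common DOI characters. Real DOI suffixes can be broader.
DOI_PATTERN = re.compile(r'\b10\.\d{4,9}/[-._;()/:A-Za-z0-9]+')

def extract_doi(scanned_text):
    """Return the first DOI-like string found in OCR output, or None."""
    match = DOI_PATTERN.search(scanned_text)
    return match.group(0) if match else None

print(extract_doi("Nature 493, 159 (2013). DOI: 10.1038/493159a"))
# prints 10.1038/493159a
```

Once a candidate string is extracted, it can be used to look the article up in a bibliographic database, which is what RefScan does against Web of Knowledge.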

I decided to test the application by trying to import a number of different articles from a variety of journals.

The first thing I noticed was that the scanning function is quite sensitive and prone to blurring the image; it took several attempts to work successfully, which could become very frustrating. It appears to scan some typefaces more effectively than others: I found that some serif fonts, like Times New Roman, scanned incorrectly, with misread characters. Some typefaces needed more adjustment to make the scan as clear as possible, which could be problematic, as DOIs are usually printed in a small font size on the page.

If the DOI does scan correctly, and the reference is found, the EndNote Web import works smoothly and effectively, including all the information usually imported from Web of Knowledge, such as abstracts.

As the application only searches for DOIs within the Web of Knowledge database, the scanner only works with journals in that database, and (unless you already knew the title was in the system) you would not be able to tell this from the journal itself. You are also relying on the print copy to display the DOI for the article, which is not always the case.

If RefScan finds no match, it gives you the option to manually add the reference details and import them into EndNote Web. I would find this quite a fiddly and laborious process using a mobile phone keyboard (though better on an iPad). The manual reference gives you limited options, and the default type for any manual reference is "Journal article", which can't be changed in the application.

I would also mention that every reference has to be imported separately, which could be time-consuming if you have a large number to add.

Manual reference

The other main issue I found was that my EndNote Web account could not be accessed directly in the application, which meant I could not easily check a reference once imported. Instead, I needed to follow a link to the EndNote Web mobile site in my browser. The problem with this is that it requires you to log in to your account again, so the process is not seamless between RefScan and EndNote Web. Unfortunately, EndNote Web does not currently have a standalone application for iPhone and iPad (though EndNote does have an iPad-only app), which limits its mobile effectiveness with RefScan.

Also, I could download the application on my iPhone, but as it is iOS-only and requires a recent version of the operating system, it will not work for Android users or people with older devices.

Overall, RefScan is an interesting idea, but it is not yet developed or reliable enough for me to adopt regularly. The benefits of speed and mobility are currently outweighed by the frustration caused by its downsides. In fact, I found it to be more time-consuming to use the application than to make a note of the DOI and find the article for import at a later date. Having said that, I would be very interested to see how the application evolves, as there is potential for it to become far more useful.


Additional links

RefScan

EndNote Web support in the Library

EndNote for iPad


March 01, 2013

Being organised with your information: Evernote is one useful tool.

Follow-up to Recording information when literature searching from Library Research Support

Organising your information is absolutely imperative for researchers. There are lots of tools out there to help, and researchers all like to organise things in their own ways, so no one tool will suit everyone. This means you either have to investigate every new thing (ridiculous!) or listen to someone else's experiences and bear them in mind for the day when you have a new need. This blog post is about my experiences with tools for organising stuff. I have a new need because I am preparing to leave the University of Warwick (30th April): another experience many researchers will go through, since changing institution is very much a part of career progression.

In the past, I have blogged a few ways to be organised with your references, and have had guest posts from colleagues about reference management tools before, too.

I wrote here recently about organising bookmarks using online tools, and how I liked Diigo. Once or twice on this blog I have also mentioned Evernote, and that's the tool I'm currently playing with because it's been recommended by researchers at Warwick.

What I like about Evernote is that I can forward e-mails into it, and I can set it up to mirror folders on my computer, so that all my e-mails and documents are tagged and accessible in the same place. I can also clip items from the web, creating a copy to refer to later, and of course I can tag those too. So I can bring material from whichever source into the same tagging system, which is great, because I keep having to replicate my folder structures and it's much better to have all my re-usable stuff in one place, properly tagged. Tags are so much better than folders, too, because you can have more than one per item!

I've been busy clipping my own blog entries into Evernote because this is a bit of an open notebook, and all my entries are tagged: I use this material when preparing workshops and presentations. Now I can just look on Evernote for my own blog entries, emails etc, and stuff I've clipped that has been written by others, all on the same theme. Plus, if Warwick ever decides to delete this blog after I've left, I'll have my own copies of my blog entries on Evernote.

I can use Evernote in at least three different ways:

  1. By logging into my account on the web.
  2. By opening up the software installed on my computer.
  3. By using the Evernote App on my phone.

I think that all three are going to be handy.

I will probably still use my folders, but Evernote provides a way of collecting items for a particular project or need, whilst also creating copies of them which is a kind of back-up for me.

I do wish that I could import all my bookmarks into Evernote from Diigo: that doesn't seem to be possible. But perhaps it is better to use both tools. I expect that I'll maintain my web bookmarks collection for when I want to record a whole website, but perhaps use the Evernote web clipping tool for when it's a particular page that I'm interested in, and that will also help me to overcome the problem of broken links that I get with my bookmark collection, because if the content is clipped then I will still be able to read it!

There might even be times when I will want to both bookmark on Diigo and clip into Evernote, because Evernote is my own store, but Diigo is a route I use to share stuff with others. Incidentally, I don't have lots of network contacts on Diigo itself as a way of sharing, but I tweet things that I bookmark on Diigo. I don't actually use Diigo's own feature for tweeting an item, because it involves too many clicks and steps each time I bookmark. Instead, I've used IFTTT (If This Then That) to set up a rule for Diigo items to get automatically tweeted. But that is all about sharing my information, and this blog post is meant to be about organising it! So many tools serve more than one purpose.

I expect that for proper research articles that I collect, I'll always want to use a reference management tool as well. Because they can create proper references for me by importing metadata and they can create lovely formatted references for me when I write anything formal, too. And they can do networking and sharing things, and "shout about my profile" things too.

Evernote isn't quite the "one ring to rule them all" for me to organise my information. But that's probably a good thing because I'd hate to rely on one tool for everything and then have it crash or want to bill me a small fortune for using it! And by using other tools as well, I get the benefit of all their other features for sharing and profile-raising.


February 22, 2013

Bookmarking websites: switching from Delicious to Diigo

Writing about web page https://www.diigo.com/user/jennyality

I used to use Delicious.com to manage my bookmarks and have recently switched to Diigo. I didn't like the recent changes to Delicious: it limited the ways I could manage my enormous collection of bookmarks.

In the past, I have used the browser bookmarking tools of Explorer and Firefox but of course you can only access those bookmarks on one PC or one login profile, and with one browser. Also, I like using tags better than folders as a way to organise my bookmarks, because a site can only be in one folder but can have several tags.

My enormous collection on Delicious is the product of years of work. What I want to do with my collection is:

  1. Sort by private/public: Delicious frustrated me for some time by not supporting this, but Diigo has imported my bookmarks beautifully, carrying over these properties and it allows me to sort in this way and review which need to be private.
  2. Sort my tags by the number of sites bookmarked with them, so that I can look at tags with only one site and decide if I really need them: Diigo points out that I have 423 tags, which is way too many! Delicious used to do this, and I was slowly sorting my tags, but apparently it's no longer offered.
  3. Find links that are broken. Delicious definitely couldn't do this for me. Not sure yet if Diigo can. Some of my links are old and broken, I know. I do like that Diigo displays the date that the bookmark was added, though, as it gives me a clue about whether my link is going to be live now or not.
  4. Replace tags that are obsolete with ones that I currently use. Delicious used to do this, but not any more. Diigo enables me to edit my tags very easily and I'm busy deleting useless ones to make them more manageable and useful!
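
The tag tidy-up in points 2 and 4 amounts to counting tags and renaming them across the collection. A sketch with an invented, Diigo-style bookmark list (the URLs and tags here are made up for illustration):

```python
from collections import Counter

# Invented bookmarks: each has a list of tags, as bookmarking tools allow.
bookmarks = [
    {"url": "http://example.org/a", "tags": ["metrics", "altmetrics"]},
    {"url": "http://example.org/b", "tags": ["metrics"]},
    {"url": "http://example.org/c", "tags": ["evernote"]},
]

# Point 2: count how many bookmarks carry each tag.
tag_counts = Counter(tag for b in bookmarks for tag in b["tags"])

# Tags used only once are candidates for deletion or merging.
singletons = sorted(t for t, n in tag_counts.items() if n == 1)
print(singletons)

# Point 4: replace an obsolete tag with a current one across the collection.
for b in bookmarks:
    b["tags"] = ["bibliometrics" if t == "metrics" else t for t in b["tags"]]
```

Of course, the point of using a tool like Diigo is that it does this for you, but seeing it spelled out shows why per-tag counts and bulk renaming are the two features that matter for keeping a large collection manageable.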

Other things that I like about Diigo now that I've switched:

  • It has an "advanced" view that enables me to tick which bookmarks I'm interested in and then apply the action I choose.
  • One of those actions is "generate report" which displays the selected bookmarks in a nice way to incorporate into a report.
  • Another action that is possible is to add tags to the selected bookmarks, as a collection.
  • It seems much more sophisticated than Delicious, and I like the additional functions that I am discovering.

What I don't like about Diigo are:

  • the adverts that get in the way
  • it takes two clicks to create a bookmark record: Delicious' tool was a bit quicker, but then I haven't explored the sticky note feature or any other aspects of Diigo yet.

In general, though, I do find these bookmarking tools very handy!


February 10, 2013

Is my book the most highly cited in its field?

To answer this, you need data on how many citations there are to your book and to others in your field. There are two sources of citation data for books that I know of:

  1. Thomson Reuters' book citation index. Not everyone will have access to this, of course, as it's a subscription product.
  2. Google Scholar: this is available to everyone and is the source I've investigated.

A simple search for your book on Google Scholar will tell you how many citations there are. Note that Google Scholar does try to collate records for all versions of your book, but for books available in many editions and reprints it might not be entirely successful!

Next, how do you know if your book is the MOST highly cited in your field? It's impossible to tell for certain, but a good clue is to investigate the "related articles" link in the results of the search that brought you data about your book. This will find items similar to yours, which are therefore likely to be in your field.

Within that list there will be journal articles as well as books. You can spot the books quite easily: look at how many times they have been cited. If any are more highly cited than yours, then your book can't be the most highly cited in your field, at least as far as Google Scholar is concerned. Whether or not you choose to trust its citation data is a separate matter!

If none of those counts come anywhere near your own citation count, then there is a good chance that your book is one of the most highly cited in its field. You probably know some of the competitor books to yours: try searching for them on Google Scholar too, to check.

If you don't already know competitor books in your field then I recommend looking on the COPAC union catalogue at the record for your book, and clicking on the subject heading links from within that record to find books in the same subject category.

Best of luck!


February 04, 2013

Measures of journal article quality?

Writing about web page http://techcrunch.com/2013/02/03/the-future-of-the-scientific-journal-industry/

The TechCrunch blog post linked to is by the founder of Academia.edu and it discusses the possible contribution that journal article metrics could make, to academic publishing.

In order to interpret readership metrics provided by sites/services like the three mentioned in that post, Academia, ResearchGate and Mendeley, researchers should ask, "what is the level and quality of activity on these sites?" My experience is that there are a lot of students amongst those "researcher" numbers advertised. Students can be readers too, of course, but we need to be clear about what the metrics are actually telling us. Activity and membership varies from one site to another and from one discipline to another, of course, so researchers would need to investigate for themselves. If you're investigating and interpreting for yourself then you're not going to be entirely comfortable with others using such metrics to make some judgement about the quality of your work!

My previous blog post was about publishers who display reader metrics. I wish I had time to investigate them some more!

Mendeley's metrics used to be available for others to use through an API, as ImpactStory (once called TotalImpact) was doing. That seems to me the most useful model for researchers: they can then follow readership metrics for their papers from all locations. In my opinion, collated stats are great for researchers to track which activity affects their readership numbers most: their paper featuring on a friend's Twitter feed, on professor X's blog, or being delivered at a conference.

But are reader numbers going to lead to a new way of assessing a journal article's quality? They would need to be available from all sources where the article is displayed: publishers, repositories and networking sites would all need to count reader accesses in the same way, and share their data publicly, so that they can be collated and displayed in a reliable and consistent way. They would need to become trusted and used by the researchers themselves. That is going to take a lot of time and effort, I believe, if all the discussion about citation metrics and altmetrics that I've seen is anything to go by.
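
The collation itself is trivial; the hard part is entirely social and standards-based. A sketch with invented figures (the DOI and counts below are made up) shows how little code the technical side would need, once sources agreed to count and share consistently:

```python
# Invented per-source access counts for one article, identified by DOI.
sources = {
    "publisher": {"10.0000/example": 1200},
    "repository": {"10.0000/example": 340},
    "networking_site": {"10.0000/example": 85},
}

def total_accesses(doi, sources):
    """Sum reader accesses for one article across all reporting sources."""
    return sum(counts.get(doi, 0) for counts in sources.values())

print(total_accesses("10.0000/example", sources))  # prints 1625
```

The sum is only meaningful if "an access" means the same thing at the publisher, the repository and the networking site, which is exactly the agreement that does not yet exist.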

January 25, 2013

Sharing metrics relating to articles

Writing about web page http://altmetrics.org/manifesto/

This time last year, PLoS started to display even more article level metrics. I felt intrigued by the openness about article downloads on PLoS. It led me to wonder whether other publishers were sharing such information so publicly.

Repository tools are available to display accesses for articles in the repository, as at the Bath University repository in this record of a journal article. Some repositories have chosen not to make such statistics publicly visible, however.

I don't see many publishers publicly displaying article level metrics like this, but publishers do sometimes showcase their most downloaded content publicly on their websites. For example, Springer journal home pages display their most downloaded articles: see the journal Artificial Intelligence.

Other publishers share download statistics with the authors, although not publicly displaying them on article records. For example, Nature describe this as a benefit to their authors.

I'd be glad to hear of other examples of publishers displaying these download statistics. I think that authors should be able to monitor activity around their papers for themselves, and I wonder if there is a role for such statistics in helping our readers to ascertain the highest quality papers.

However, I am slightly cautious about download statistics being publicly visible, because there are often many versions of a journal article available: on the author's website, on the publisher's website and in a repository, for example. I think that all of these versions should be available, because they provide insurance and archival possibilities for authors as well as additional discovery and access options for readers, but a focus on download statistics could make authors wary of sharing their articles in so many places.

The Internet allows us to create lots of metrics about a researcher or a work, as evidenced by all the altmetrics activity (see my link, above). But just because a measure exists, should it be publicly visible? How should we use these new measures? My own answer is "with caution" and indeed that the author is best placed to make use of the download statistics, because the author will be best able to understand what they mean.


January 08, 2013

Some Twitter tips for the New Year

I've heard it said that:

You get the Twitter feed that you deserve!

The key to using Twitter effectively is to know who you want to listen to and be in discussions with. There is nothing inherently frivolous about Twitter itself; it's just that you need to be brief, and that can lead to spontaneity and frivolity. Equally, you can spend a long time crafting a perfect 140-character tweet to express your idea as briefly as possible.

Twitter is a great way to get a summary or overview of what's going on in your field, if you follow people who do craft their tweets carefully. Twitter is not only a great way to listen to those people but also to interact with them: you can publicly tweet at people you want to reach, or send a direct message to people who follow you, for a private conversation.

If you can't find the right people then you could always start tweeting on your topic yourself, and others will find you. It's worth investigating the profiles of people who follow you on Twitter, to see if you might want to follow them back.

And if you find you're not following the right people after all, well you can clear out your twitter feed and unfollow people here or there. It's up to you to create and curate your own experience of Twitter!


December 12, 2012

Where is my most up to date profile?

Writing about web page http://scholar.google.co.uk/citations

I have blogged about author profile sites a number of times. I've investigated quite a few of them, and I've not properly invested in my own profile on any of them. So, as a totally unscientific measure of the ease and usability of the many sites where I could have put my profile, I thought I would look at my own profile on these sites to see which one is the most complete, in terms of the listing of publications I have authored!

And the winner is: Google Scholar! http://scholar.google.co.uk/citations

The truly surprising thing here is that this is the profile site I have invested least in. I haven't even blogged about it properly!


December 05, 2012

What do publishers do for authors?

Is there an advantage to setting up your own journal or publishing your work online yourself? What do journal publishers actually do for authors? Since RCUK funded authors are soon to be paying large sums of money for OA publication of their articles, where is the value for that spend? This piece explores a little bit of what publishers do.

The Finch report has highlighted the need for publishers to be able to continue to invest in publishing innovations. On page 51, it states that

Access on its own does not necessarily make for effective communication.

and on p95 it says that

Quality assurance through peer review coupled with the wide range of discovery, navigation, linking and related services provided by publishers... are of critical importance to both authors and users of research publications.

Back in 1997, Fytton Rowland described four functions of a scholarly journal:

  1. dissemination - publishing and marketing activity.
  2. quality - this is where editorial, peer review and quality assurance come in.
  3. canonical version - a work that others can refer to. Involves archiving, issuing DOIs and ISSNs, etc.
  4. recognition & credit for the authors.

In my view, the recognition authors want is quite often tied to the dissemination and quality activity. If your peers don't know about your article (the dissemination hasn't been good enough), then the recognition and credit can't follow. If the journal you are published in is not one of the high quality ones, then the audience and recognition you might get for being published there will likely be smaller, although work of high quality can itself help to raise the perceived quality of a publication.

Authors have told me that they want the following things from a publisher:

  1. To edit and improve their work.
  2. Bestow prestige on their work.
  3. Publicise their work & bring them an audience. The audience they want might be scholars or a broader reach, leading to "impact".
  4. Protect their work against plagiarism.
  5. A perpetual record of their work.
  6. Money: probably more applicable to book deals but for journals, at least the author won't want it to cost them a huge amount to publish.
  7. Timeliness: some authors want their work published as soon as possible.

I daresay that the list could grow a lot longer for some and be shorter for others, but essentially authors often have to balance their needs when choosing where to publish.

Earlier this year (2012) Jason Priem described a "de-coupled journal" and how the journal system could be reformed to provide its essential functions of:

  • archiving : relates to "canonical version", in Rowland's list above.
  • registration : relates to "recognition", above.
  • dissemination : also mentioned above.
  • certification : relates to the quality function, above.

The concept of a de-coupled journal is one where there is more variety in how each of the different functions is provided, so that they need not all come from the publisher. For example, archiving might be shared with repositories which store a preservation copy, and dissemination activity can be carried out by authors themselves. The online environment brings a variety of channels and services that authors can use, beyond the traditional publishing system.

I wanted to explore more of what publishers do:

Filter for quality: co-ordinating the peer review process

Editors provide one layer of a quality filter, and then the peer reviewers provide the next level. Editors and peer reviewers refine and polish articles for publication, so they also enhance articles in terms of their quality.

Managing a journal and co-ordinating the quality process is no small task, even when the peer reviewers and editors work for free. Authors need instructions, and editors benefit from tracking tools to monitor where peer reviewers are in the process and to chase them. Copyediting and proofreading tasks need to be carried out. Digital media or associated data might also need corrections and modifications to the way they display.

There are lots of experiments with the peer review process:

Is there a role for more post-publication peer review? F1000, for example, offers this. Science made accessible to the public might need more peer review than science shared only within the academic sphere, where researchers are able to assess quality for themselves owing to their expertise, whilst members of the public and amateur experts might be less well able to assess the quality of the articles they find.

Many journals publish articles with a comments field at the bottom, rather like on blogs, but relatively few articles attract worthwhile comments. Journals (e.g. PLoS ONE) sometimes publish information on downloads, "tweets" and "likes" for their articles, so that readers can use those measures as post-publication quality markers, too.

Alternatively, peer review could take place even before an author submits an article: American Journal Experts offer a pre-submission peer review service, for a fee. It could save you time if you have the money to spend and the process is indeed rigorous and helpful, since they promise turnaround times of days.

Dealing with ethical concerns

Pre-publication, the ethical concerns could be said to be a part of the quality filtering process. Before publication, publishers:

  • issue instructions to authors
  • use editors and peer review to screen articles,
  • require authors to sign agreements.

Editors need to be experienced and knowledgeable in their field to identify ethical concerns. Scientific "misconduct" is not defined in exact terms and practices might vary. Ethical considerations might include:

  • the work of others is properly acknowledged, credited and referenced.
  • data is accurate, preserved and accessible, as appropriate.
  • the article is complete and its publication well timed (eg results not shared prematurely).
  • co-authorship is properly attributed.
  • confidentiality is respected and maintained.

Publishers are not the only filter for ethical considerations, of course: such issues are included in grant proposals to research funders and the process by which they are reviewed. Institutions might have ethical review panels to approve grant proposals even before they are submitted to the research funders.

After publication, publishers might use retractions or corrections to deal with ethical concerns. This is perhaps more of a service to readers than to authors, but it does help to maintain a journal's prestige if ethical matters are dealt with professionally.

ALPSP's Learned Publishing journal from April 2011 features an article about ethical considerations. Advice from the Committee on Publication Ethics (COPE) is particularly useful and well presented, with flowcharts.

Dissemination & discoverability

An earlier guest post on this blog, by Yvonne Budden, describes the importance of metadata to resource discovery. By providing good quality metadata, publishers are bringing readers to the article you have written, and helping you to find articles that you should be reading.

Search Engine Optimisation seems to me a "dark art" but it is important for scholarly articles to be discoverable through Google and Google Scholar: that's where a lot of researchers will be looking for stuff.

Some publishers are huge and they build and market their own discovery platforms for scholarly articles. Other publishers ensure that their content is indexed in others' discovery environments. Most publishers offer table of contents alerts.

Publishers have staff dedicated to marketing and sales, helping to ensure that their work reaches key target audiences. Perhaps in an Author-pays OA world, sales staff will be selling the services on offer to authors rather than the services offered to subscribers and readers. Marketing staff will be building the prestige of the publisher and journal brands.

Journal publishers should monitor the audiences for their publications and ensure that their material is discoverable in the places where people are looking for it, in the way(s) that they like to search.

International copyright protection?

In my view, authors are concerned that others should not copy their work without attribution but this is more a question of plagiarism. I don't think they mind about the actual copying so long as they are credited. With the RCUK policy on Open Access, the articles that they pay Gold OA fees for should be made available for others to copy for any purpose, as long as the work is properly attributed, using the so-called CC-BY licence. With such a licence, the copyright is not something to be protected.

I'm also not sure to what extent publishers pursue copyright internationally when they own it and don't license copying; I expect practice varies between publishers and from one nation to the next. After all, copyright law itself varies on an international scale. So I'm leaving my big question mark in the heading of this piece!

Awards schemes that they run or sponsor

See my earlier blog post on Journal awards for examples of the kinds of award schemes that publishers might offer... or indeed put their journals forward for.

Awards act as a route to recognition for authors, and they also build a journal's prestige when given at the title level by an external, prestigious source.

Open Access repository deposit

Research which has been funded by the Wellcome Trust has to have outputs deposited into PubMed Central: authors who pay a fee for the Gold Open Access route, which the Wellcome Trust will pay for, can have publishers make this deposit on their behalf.

Publishers sometimes also allow authors to make deposits themselves. The SHERPA/RoMEO tool makes it easy to look up publishers' policies on repository deposit by authors, although authors really ought to keep copies of the agreements they sign with publishers, as these will be the legally binding expectations rather than the publisher's latest policy.

Summary

In summary then, it seems to me that publishers should be doing the following things for authors:

  • co-ordinate the editorial and peer review process to filter for quality and also polish works.
  • provide instructions and support to authors, peer reviewers and editors.
  • build the reputation and prestige of their titles through professional handling of ethical concerns.
  • provide quality metadata to the right search tools.
  • ensure that their content is easily discoverable on the web via search engines.
  • measure downloads and activity around articles: this could be used to enhance their dissemination activity but could also be used as a further mark of quality if displayed to readers.
  • adapt to the OA and copyright needs of researchers as authors and readers.
  • provide authors with clear agreements and keep SHERPA/RoMEO's records up to date.
  • offer awards and put their journals forward for awards, by way of offering recognition for authors and building prestige for their journals.
  • invest in publishing innovations... which could be around any of the themes above.

It's quite daunting to think of setting up a journal and doing all this yourself. Do leave a comment and let me know all the things I've missed out!


November 08, 2012

"Just About" New YouTube channel of our tips for researchers

We are creating short video clips of the best tips we give to researchers in our information skills workshops, on literature searching and disseminating your research. The series is called "Just about" as the clips are about 3 minutes long and they are each about one particular tip.


October 09, 2012

Which index measures your research?

I recently compiled a little list of measures like the h-index, intended to measure the performance of individual authors. Have I missed out your favourite(s)? Which one(s) do you like and why? How should they be used? (Or not used!)

Personally, I like the h-index best because it is well established and relatively simple to understand. However, it needs to be expressed along with the date and the data source used. Any of these measures ought to be presented along with some explanation and/or examples of well known researchers' scores, to give it context. Sometimes, researchers are asked for their h-index, but if their g-index or m-index score is more impressive, then why not give that too?

h-index – an author with an index of h has published h papers each of which has been cited in other papers at least h times: there’s a great Wikipedia article about it. http://en.wikipedia.org/wiki/H-index

g-index – while the top h papers can have many more citations than the h-index would suggest, the g-index is the highest number g of papers that together received g squared or more citations. This means that the g-index score will be higher than that of the h-index. (I found this explanation at: http://www.researchtrends.com/issue1-september-2007/from-h-to-g/ )

m-index (aka m-quotient) - h/n, where n is the number of years since the first published paper of the scientist. (supposedly handy to differentiate between authors of different vintage in the same discipline)

contemporary h-index –where younger papers accrue higher weightings for each citation, as calculated (and documented) on Publish or Perish (http://www.harzing.com/pophelp/metrics.htm#hcindex).

hI-index – this takes account of co-authorship; also documented on Publish or Perish.

hI, norm index – see Publish or Perish

hm-index – see Publish or Perish

AWCR - see Publish or Perish

AWCRpA - see Publish or Perish

AW-index - see Publish or Perish

i10-index – Google Scholar "My Citations" gives me this score: "i10-index is the number of publications with at least 10 citations. The second column has the "recent" version of this metric which is the number of publications that have received at least 10 new citations in the last 5 years."

n-index - Researcher's h-index divided by the highest h-index of the journals of his/her major field of study (n is the first letter of Namazi) Proposed in an article at: http://dx.doi.org/10.4103/0378-6323.62960

A-index – average no. of citations in the article set that makes up the Hirsch core. (Hirsch core is the set of articles whose citation scores count towards the h-index score.) The muddled explanation is mine but you can read about it properly at: http://eprints.rclis.org/bitstream/10760/13282/1/hIndexReviewAlonsoCabrerizoHerrera-Viedma.pdf An excellent article reviewing these measures, published in 2009.

R-index - square root of the sum of citations in the Hirsch core (in the same article linked above!)

m-index – (yes, it looks like another type of m-index entirely & probably explains why the measure listed above is also known as the m-quotient) the median number of citations received by papers in the Hirsch core. See the article linked from my A-index explanation.
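
Several of the indices above are simple enough to compute directly from a list of citation counts. Here is an illustrative sketch (the citation figures are my own invented example, not from any of the sources linked above) of the h-index, g-index, m-quotient and i10-index:

```python
# Illustrative sketch: computing author-level indices from a list of
# per-paper citation counts (example figures are invented).

def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def g_index(citations):
    """Largest g such that the top g papers together have >= g^2 citations."""
    g, running_total = 0, 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        running_total += cites
        if running_total >= rank * rank:
            g = rank
    return g

def m_quotient(citations, years_since_first_paper):
    """h divided by n, where n is years since the author's first paper."""
    return h_index(citations) / years_since_first_paper

def i10_index(citations):
    """Google Scholar's i10: number of papers with at least 10 citations."""
    return sum(1 for cites in citations if cites >= 10)

papers = [25, 18, 12, 9, 7, 3, 1, 0]  # citation counts, one per paper
print(h_index(papers))         # 5: five papers with at least 5 citations each
print(g_index(papers))         # 8: top 8 papers have 75 citations, >= 8^2 = 64
print(i10_index(papers))       # 3: the papers with 25, 18 and 12 citations
print(m_quotient(papers, 10))  # 0.5 for an author 10 years from first paper
```

Note how the g-index is always at least as large as the h-index for the same citation list, as the Research Trends explanation linked above describes.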

NB there are many other measures explained in that article, but by this point I gave up trying to understand them! I quote from the conclusion of that paper instead:

"...that many h-index variations, although being designed to overcome some of its supposed limitations, do indeed correlate quite heavily. This fact has made some researchers think that there is probably no need to introduce more h-index variations if it is not possible to prove that they are not redundant in real examples."

The article also concludes: “h-index is quite dependant on the database that it is used and that, in general, it is much more difficult to compute those indices using Google Scholar than ISI Web of Science or Scopus.”

Google Scholar Metrics uses different measures again, which sound like the h-index but are really aimed at the publication level rather than the author: see http://scholar.google.com/intl/en/scholar/metrics.html#metrics Indeed, Scimago will give you an h-index for a publication, and others have calculated h-indices and variations of them for departments and groups of authors.

So, which index has your vote?


September 12, 2012

UK research sector, publishing trends and facts from the Finch report

Writing about web page http://www.researchinfonet.org/publish/finch/

I have been tweeting all the "Finch report facts" that I found in this recent report on accessibility to research publications. This blog entry presents some of those "facts" back in a discussion of what it seems that the Finch report is saying about the UK research sector and publishing trends, when I look at those "facts" plainly and bring in other contexts and ideas.

I'm not commenting here on the recommendations of the Finch report, nor the debate about routes to open access, although I did pull together a Storify collection of reactions to the Finch report, in case you want to read more about those topics.

UK Researchers' productivity

The UK research sector has some particular characteristics. I tweeted:

Finch report fact p37: There are 250,000 researchers in the UK & p38 'their rate of productivity is more than 50% above world average'

This rather depends on how you measure productivity!

I also tweeted:

Finch report fact p37: UK is successful at research publications but 'relatively weak in producing other kinds of outputs such as patents'

So perhaps the productivity referred to is really all about publication activity. I went back to the report to check where the productivity fact came from: it's a paragraph all about the number of articles written by researchers, so it's most likely, although not entirely clear, that the productivity referred to is the number of articles. A footnote against this particular fact also states that:

"It should be noted that it is sometimes argued that high rates of research productivity in the leading research countries are achieved in part by establishing dependency cultures in other countries."

Have UK researchers achieved high publication rates due to multiple author collaborations? Possibly.

Why are UK researchers achieving high publication rates? Is it driven by RAE and REF processes?

The UK's measures of research performance have centred around research outputs which might encourage UK researchers' productivity against this measure. Looking at the RAE 2008 data (Merit project) we can see that of the 222,177 outputs that were measured, 167,831 were journal articles. I'm rubbish at maths but even I can tell that's about 75%. I expect that for the sciences, the percentage of journal articles that make up their outputs for measurement is even higher.

Another couple of tweets, then:

Finch report fact p71: 120,000 articles by UK authors are published each year. According to p62, this is 6% of articles published worldwide

Finch report fact p62 'researchers in the UK comprise just over 4% of the global research community'...

So, UK researchers are publishing plenty of articles and contributing to scholarly knowledge worldwide on a larger scale than their numbers represent.

REF 2014 will be looking at impact as well as outputs, which brings a different dimension into the measurement since RAE 2008 and that might also affect UK researchers' activity in the future.

The potential effect of performance measurement mechanisms on actual performance is addressed in a RIN report on Communicating Knowledge from 2009, describing a bibliometric analysis of the outputs produced in 2003 and 2008 by a sample of authors who were included in those two RAEs. Amongst many other interesting findings, they reported a slight increase in the number of publications per author in 2008 compared to 2003, but a significant increase in the number of multiple-author works, which are multi-institutional and international. They did not find an apparent difference in citation behaviours between the two time periods. All very interesting!

In REF2014 the assessment panels for the science, technology and medicine subjects will have citation data provided to them. On UK researchers' citation scores, I tweeted:

Finch report fact p38: citations to UK articles increased between 2006 and 2010 by 7.2% a year, faster than the world average of 6.3%

and:

Finch report fact p38: UK’s “share of the top 1% of most-highly-cited papers was second only to the US, at 13.8% in 2010.”

Not only are our researchers producing lots of articles, they are also producing highly cited articles. There have been numerous studies and debates about the value of citations as a measure of the quality and influence of research papers (my own main reservation is the difference in disciplinary practices around citation), but at any rate there is plenty of citation activity and evident attention for UK authored articles, according to citation measures.

In agreement with the findings of that 2009 RIN report and the footnote on the earlier fact about UK researchers' productivity in terms of numbers of research articles, I also found in the Finch report:

Finch report fact p71 Nearly half (46%) of the peer reviewed articles with a UK author published in 2010 also listed an author from overseas

I believe that multiple authorship and involvement of overseas authors could be significant in achieving those high citation rates. The more collaborations and network contacts or reach that a researcher has, the more people will be aware of that author's work in terms of its findings but also its quality, and so the more likely the work is to be cited by those contacts or indeed their contacts in turn.

An international scale

UK researchers are operating on a world stage, of course. There are other facts in the Finch report that give some context to the UK researchers' performance. I didn't tweet this quote from page 38 because it was too long(!), but I find it very significant:

...part of the explanation for the UK’s success is that it attracts internationally-mobile researchers. UK researchers are also more likely than those in almost any other major research nation to collaborate with colleagues overseas...

Even though the UK researchers are publishing a lot, researchers from other countries are also publishing a lot:

Finch report fact p37 Rise in the no. of UK-authored articles has not been as fast as in very high growth countries such as India and Brazil

So I think that those collaborations and multi-authored articles are very significant, and the international scale of research is one that favours the UK because it's known for its high quality research already. I really think that this is key to UK "success" in the context of citations, because those collaborations and networks occur due to the migration of internationally mobile researchers to the UK. It seems to me that international reach is a very important element of impact that UK research assessors should be interested in.

Meanwhile, according to the Finch report, the UK doesn't spend a great deal on research. Apparently, the UK ranked 16th for "research intensity" amongst OECD countries in an Elsevier report that is cited on page 38, in a footnote. In actual figures:

Finch report fact, p37: 28% of UK R&D is in HE Sector. UK is 'strongly dependent' on gov.t, charity & overseas funds ow.ly/c50pU...

Finch report fact p38: 09-10 UK total expenditure on R&D: £25.9bn of which £10.4bn from gov, £5.5bn of which from Research Councils & HEFCs

Perhaps the relatively high reliance on government and the HE sector to pay for our research is also part of the reason why the UK has been more successful at getting articles published than at producing patents and other kinds of research outputs.

Perhaps another reason why UK researchers are so much involved in publishing activity is that the UK is also a key player in the worldwide publishing industry:

Finch report fact, p15: UK publishers are responsible for 5000+ journal titles & 1/5 of articles published each year

The UK also seems to be playing an important role in the development of the online open access repositories landscape:

Finch report fact: US, Germany, & UK account for over 1/3 of repositories worldwide. There are 200+ UK repositories: 150 are institutional

And the UK publishes about 7% of open access journals:


Finch report fact, p32: Currently 7600+ open access journals listed in the DOAJ, from 117 countries: 533 in UK ow.ly/c4ZLa #oa


UK researchers do seem to have good access to published articles:

Finch report fact p47 93% of UK researchers had “easy or fairly easy access" to papers. Those without most often find a different item.

Finch report fact p48: Researchers are more likely to have problems accessing conference proceedings and monographs, than journal articles.

Although library expenditure in the UK is falling:

Finch report fact, p23: library expenditure in UK Unis fell from 3.3% to 2.7% as a proportion of total expenditure ow.ly/c4ZfC #oa

The Finch report also says on page 51 that "Access on its own does not necessarily make for effective communication" and although I know that the report is really referring to the role that publishers play in enhancing discoverability through their search platforms and other work, I also interpret it to mean that all those networks and collaborations of our authors are helping to ensure that they are building on the best research that is out there.


Publishing trends

Open access is one of the changes to publishing that has taken place in recent years, as the worldwide web has enabled online access to scholarly content. It's the main focus of the Finch report, so there are lots of facts relating to it! There are at least two routes to making content available on open access: the gold route, where authors pay a fee or "article processing charge" (APC) for the publisher to make the final version available to readers for free, and the green route, where authors' own copies are deposited into open access repositories, where readers can find them.

My first publishing trend "fact" is:

Finch report fact p39 in '09 OA journals accounted for 14% of articles published worldwide in medicine & biosciences, and 5% of engineering.

The report goes on to say that only 6-7% of articles published in 2009 were available in repositories. This makes it look as though repositories are a less successful route to open access than OA journals, but the data is only for 2009, and only for limited subject areas. The report itself highlights that science, technology and medicine account for 2/3 of OA journals:


Finch report fact, p33: 2/3 of OA articles are published by 10% of publishers: STM account for 2/3 of journals ow.ly/c4ZV8 #oa


At this point it is worth referring to Stevan Harnad's blog post "Finch Fiasco in figures", because he has looked into all this in a much more scholarly way, and has a great graph (figure 6) showing the relative balance of green and gold open access availability of articles. It looks like he has very different data, but even in his graph the balance looks worst for green OA in the biomedical sciences, so the Finch report should also present data across all the subjects, in the interest of objectivity.

On page 69, the Finch report suggests some reasons for the "low take-up of OA" in humanities and social sciences, and it seems clear to me from the reasons given that the report means the low take-up by publishers, ie that gold OA routes are not so readily available in these disciplines. The reasons suggested are: rate of publication and rate of rejection, length of articles, and the larger amount of material in a journal that is not an article and therefore would not bring in an article processing charge as income. Further, on p71 the Finch report refers to the tradition of the independent scholar remaining strong in the humanities: these researchers would have no mechanism through which to pay an APC.

Another trend that the Finch report refers to is the decline of the monograph:

Finch report fact: p44 refers to decline of the monograph as print runs have shrunk, prices have risen & UK libraries spend less on books.

I've already included the fact about the relative decline in expenditure on libraries in UK universities, and the Finch report also points out another difference that electronic format makes: libraries must pay VAT on electronic books, whilst printed versions don't attract VAT. Many of the libraries I have worked at have had their book budgets squeezed by rising journal subscription costs over the years, so I don't doubt that the monograph is not what it was. But I believe that the research monograph carries as much research credibility as it ever did, even if it no longer attracts the same revenues for publishers.

A conclusion?

After meandering through these "facts", I'm pleased to see that the UK research sector publishes so much and attracts so much attention worldwide relative to the amount of investment. I believe that we should keep up our international and collaborative efforts in order to sustain this, and we should also keep up our involvement in publishing activities, perhaps by investing in OA routes, as this makes access fairer for all. The Finch report recommends that the UK support gold OA publication, and perhaps it will, as the RCUK policy seems to have followed this route.

Most of all, though, I'm interested in what researchers will do. They are making decisions on where to publish what, with whom they will co-author, whether to deposit in a repository, and so on. The rest of us (publishers and librarians) are trying to respond to their need to communicate with each other and to find out what one another are working on.


August 20, 2012

Is the LinkedIn “appearances in search” metric of interest to an academic author?

Follow-up to Who is interested in my online profile? from Library Research Support

LinkedIn recently emailed me details of who is looking at my profile. It reminds me of a previous blog post that I wrote, about who’s looking at my profile online: I often wonder if academic authors might find it valuable to track who is interested in their work.

LinkedIn told me how many profile views there have been in the last three months, how many “appearances in search” there have been, and who has been looking at my profile. I can see why it would be relevant for academic authors to see the details of others who have been looking at their profile: these might be other academics in the same field, so watching this measure is a bit like seeing who wants to listen to you at a conference. If, indeed, LinkedIn is a conference that you are attending!

I wondered what “appearances in search” meant, and found an explanation in some LinkedIn Q&As, that it is about my profile matching others’ search terms when they were not searching for my name specifically. Should academic authors be interested in this metric? I think probably not, and here is why!

I’m not 100% sure, but it seems to me that the “search” referred to must be the LinkedIn search box on their own site. So these stats also reflect the overall amount of activity happening on LinkedIn. Since it’s not a dedicated academic forum, our academics might not be too worried about LinkedIn activity.

If your discipline has some really active discussion groups on LinkedIn, or you wanted to generate interest in your work beyond the academic community and within the LinkedIn one (which is pretty large), then you might want to watch LinkedIn metrics more closely. You might want to see more of those search appearances being converted into profile views, as evidence that your work is relevant to that community, and as a channel to direct readers to your scholarly articles and other outputs. In order to do this, you would need to ensure that your profile describes your work accurately. But this is a good idea anyway, so I see no reason to pay attention to the number of “appearances in search”!

I blogged last time about Google AdWords, but I must have had a free preview or something, because I can’t find the same feature for free now. I often pop in to Google Analytics and FeedBurner to see who is looking at my blog, and I regularly look at the stats for the Library’s Support for Research pages; using these tools I can see who is looking at my site(s) and what keywords are bringing them there. These are far richer and more valuable to me than the LinkedIn stats, so I guess they will be to academic authors, too.

But how nice of LinkedIn to send me the stats from time to time: it works for me as a reminder to update my profile!