All 9 entries tagged Impact
April 10, 2013
Tracking a Twitter hashtag's activity
I'm just back from a librarians' and publishers' conference, UKSG, which used the Twitter hashtag #uksglive... and I thought I'd put that hashtag into an infographics tool called visual.ly.
Here is the poster it created for me:
[Infographic: #uksglive hashtag activity, created with visual.ly]
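If you'd rather gather the raw numbers yourself instead of relying on an infographics tool, something like the sketch below would do it. This is only a hypothetical illustration, assuming a Twitter API bearer token and the tweepy library (visual.ly does all of this for you):

```python
# Minimal sketch: tally recent tweets for a hashtag, hour by hour.
# Assumes a Twitter API bearer token and the tweepy library.
from collections import Counter
import tweepy

client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN")  # hypothetical token

# Search recent tweets mentioning the hashtag (API search limits apply).
response = client.search_recent_tweets(
    query="#uksglive",
    tweet_fields=["created_at"],
    max_results=100,
)

# Count tweets per hour to see when the hashtag was most active.
per_hour = Counter(
    tweet.created_at.strftime("%Y-%m-%d %H:00") for tweet in (response.data or [])
)
for hour, count in sorted(per_hour.items()):
    print(hour, count)
```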
Am I really interested in Naymz?
Follow-up to Which tools monitor your social media influence & impact? from Library Research Support
The quick answer, now that I've investigated a bit, is "no", but let me explain:
1) I don't want to maintain another professional profile there when I've already invested in my LinkedIn profile, and all the people I could ever want to network with online are on LinkedIn but not on Naymz.
2) I had a look at my repscore on Naymz: the Leaderboard picks up on people who declare themselves as Jedi knights, so I can't take the network or the score too seriously.
3) The repscore dashboard would be interesting if I did use Naymz for networking, and if I connected all my relevant social network profiles to it. I can't actually do that, because I use some networking sites that Naymz doesn't monitor.
I could use Naymz to watch just Twitter and LinkedIn, and I could make some more effort to use and link up Facebook or other social networking sites that it does measure. What does it tell me about other networking sites? It gives me numbers for:
- Contacts: "The number of contacts/followers on this network". This is interesting, to see where I have the maximum potential reach. But it's not actual reach if the people I'm linked with on this network aren't active users of it and will never see my posts or activity there.
- Posts: "Your recent posts on this network". I'm not sure how recent: I don't recall ever posting on LinkedIn, yet it can find 9 posts. I post a lot on Twitter, but it can only find 35 posts.
- Replies: "Replies/Comments on your posts on this network". Since I don't post on LinkedIn, I can't compare the level of interaction of my network members across the two networks, but it would be a potentially useful measure if I did want to compare.
- Likes: "Likes/Shares of your posts on this network". As with replies, this could indicate the actual reach of my presence on a network better than the mere number of contacts I have, though I'm not sure how it counts them. By putting scores for all of my network profiles into one place, I could compare the networks and decide which one represents the best value for my efforts; a rough sketch of that comparison follows this list.
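To make the idea concrete, here is a toy sketch of the comparison I have in mind, with invented numbers rather than my real Naymz figures. The point is that interactions per post can say more about actual reach than a raw contact count:

```python
# Toy comparison of networks: illustrative numbers only, not real Naymz data.
networks = {
    "Twitter":  {"contacts": 500, "posts": 35, "replies": 20, "likes": 40},
    "LinkedIn": {"contacts": 150, "posts": 9,  "replies": 2,  "likes": 5},
}

for name, stats in networks.items():
    interactions = stats["replies"] + stats["likes"]
    # Interactions per post approximates actual reach better than contacts do.
    rate = interactions / stats["posts"] if stats["posts"] else 0.0
    print(f"{name}: {rate:.2f} interactions per post "
          f"({stats['contacts']} potential contacts)")
```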
The Naymz dashboard gives a percentage rank too, but as this is only in comparison with other Naymz members, in whom I am not interested, it's not so useful for me.
There are measuring tools other than Naymz (I mentioned a couple in the blog post that this follows on from), and they might count the interactions in a way that you prefer, if you're looking for a tool to do this.
However, my main reason for not wanting to use Naymz or one of the similar tools is that I'm already convinced that Twitter is a good route to the people I want to reach. If I weren't sure that those people were active on Twitter, or if I wanted to reach more people who might prefer one of the other networking sites, then I'd be glad to use Naymz and its like to make comparisons.
April 02, 2013
Which tools monitor your social media influence & impact?
Twitter is the main social media tool that I would recommend to researchers, when it comes to influence and impact. Ideally, I think it should be used alongside blogging in some way: either your own blog if you want to build an online identity and audience for yourself, or as a guest on others' blogs. Guest blogging is a great way of benefitting from others' hard work in gaining audience!
If Twitter is my main social media tool, then any tool for measuring online/social media influence and impact will need access to my Twitter account. A quick look at the "Apps" section of my settings on Twitter reminds me of tools that I once thought might be of value to researchers for increasing, measuring and demonstrating the impact of their research. I've not had time to investigate these properly, but I thought it might be worth sharing which ones I'm interested in:
Naymz - "Manage and measure your reputation. Get rewarded!"
Klout - "Klout finds the most influential people on every topic"
Crowdbooster - "Measure and optimize your social media marketing."
I had a quick look back at these three and found that Crowdbooster now charges a fee: this might be worthwhile if it covers the social media channels that you use, though pricing varies with the number of channels monitored.
Naymz - wants to co-ordinate my Google account, Yahoo account, email, Twitter, Facebook and LinkedIn. These are big hitters but not specific to academia.
Klout - lots more options here than there were for Naymz, but none specifically academic.
There are actually lots of tools for measuring social media influence out there, but to find the right tool for you, you need to know what you want to measure. I'm interested in Twitter, website visitors and my blog, but not necessarily in combining the scores for them, since they serve different purposes. I do need to investigate more...
For those interested in reading more, this piece from Imperial College has a great summary and table comparing the tools available for measuring and monitoring, in terms of the social media sources they monitor:
http://research20atimperial.wordpress.com/optional-content/evaluation-tools/
There is no substitute for trying things out for yourself, though, and finding out not only which aspects of your social media activity can be monitored by which tools, but also how they produce their scores and what this means for your own work.
August 20, 2012
Is the LinkedIn “appearances in search” metric of interest to an academic author?
Follow-up to Who is interested in my online profile? from Library Research Support
LinkedIn recently emailed me details of who is looking at my profile. It reminds me of a previous blog post that I wrote, about who’s looking at my profile online: I often wonder if academic authors might find it valuable to track who is interested in their work.
LinkedIn told me how many profile views there have been in the last three months, how many “appearances in search” there have been, and who has been looking at my profile. I can see why it would be relevant for academic authors to see the details of others who have been looking at their profile: these might be other academics in the same field, so watching this measure is a bit like seeing who wants to listen to you at a conference. If, indeed, LinkedIn is a conference that you are attending!
I wondered what “appearances in search” meant, and found an explanation in some LinkedIn Q&As: it is about my profile matching others' search terms when they were not searching for my name specifically. Should academic authors be interested in this metric? I think probably not, and here is why!
I'm not 100% sure, but it seems to me that the “search” referred to must be the LinkedIn search box on their own site, so these stats also reflect the amount of activity happening on LinkedIn. Since it's not a dedicated academic forum, our academics might not be too worried about LinkedIn activity.
If your discipline has some really active discussion groups on LinkedIn, or you wanted to generate interest in your work beyond the academic community and within the LinkedIn one (which is pretty large), then you might want to watch LinkedIn metrics more closely. You might want to see more of those search appearances converted into profile views, as evidence that your work is relevant to that community and as a channel to direct readers to your scholarly articles and other outputs. To do this, you would need to ensure that your profile describes your work accurately. But that is a good idea anyway, so I see no reason to pay attention to the number of “appearances in search”!
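For what it's worth, the calculation I'd watch is simple; the figures below are invented for illustration:

```python
# Hypothetical figures: how I'd read the two LinkedIn numbers together.
search_appearances = 120  # times my profile matched someone's search
profile_views = 18        # times someone actually clicked through

conversion = profile_views / search_appearances
print(f"{conversion:.1%} of search appearances became profile views")
# A rising conversion rate would suggest the profile text matches
# what searchers are actually looking for.
```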
I blogged last time about Google AdWords, but I must have had a free preview or something, because I can't find the same feature for free now. I often pop in to Google Analytics and Feedburner to see who is looking at my blog, and I regularly look at the stats for the Library's Support for Research pages; using these tools I can see who is looking at my site(s) and what keywords are bringing them there. These are far richer and more valuable to me than the LinkedIn stats, so I guess they will be to academic authors, too.
But how nice of LinkedIn to send me the stats from time to time: it works for me as a reminder to update my profile!
March 02, 2012
Who is interested in my online profile?
Writing about web page http://uk.news.yahoo.com/who%E2%80%99s-been-looking-at-your-facebook-page--can-you-find-out-.html
This recent news article on Yahoo inspired me to have a little look at what I can find out about people interested in my work online: I already get e-mails from Academia.edu whenever someone googles me and clicks to see my academia.edu profile.
I have never before explored Google AdWords: I'm not a commercial organisation(!), but it has an interesting "Keyword tool" that you can use for free. I gave it my name as a phrase, and it told me that there have been, on average, 22 searches per month for my name over the last 12 months. It also came up with 2 keyword ideas: "information science" and "scholarly writing".
You can also give Google AdWords a URL, so I gave it the one for this blog: this time there were 98 keyword ideas. The idea is that you could pay for your advert to appear whenever someone searches for such keywords. I'm not going to do that, but it could also be an interesting tool for researchers who are looking for keywords to enhance their searching: they could give it the URL or title of a paper of particular interest and see what is suggested, if they are struggling to come up with ideas themselves.
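If you don't want to use an advertising tool for this at all, a crude DIY stand-in (emphatically not what AdWords does, which draws on real search volumes) is simply to count the informative words in a title or abstract:

```python
# Crude DIY keyword suggestion: count informative words in a title/abstract.
# NOT the AdWords Keyword tool, just a simple stand-in for illustration.
import re
from collections import Counter

STOPWORDS = {"the", "of", "and", "a", "in", "for", "on", "to", "an", "with"}

def keyword_ideas(text, top_n=5):
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 3)
    return [word for word, _ in counts.most_common(top_n)]

abstract = ("Open access repositories and the citation advantage: measuring "
            "the effect of repository deposit on citation counts.")
print(keyword_ideas(abstract))  # 'citation' should top this invented example
```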
LinkedIn also tells me how many people have viewed my profile there in the last 90 days, and how many times my profile has shown up in search results. Not quite so high as the figure from Adwords, but then it's only about LinkedIn. On LinkedIn I can see the profiles of the people who viewed my profile, along with some anonymous users... there is more information for those prepared to pay for it, too.
Academia.edu has a "stats dashboard" which tells me how many profile views and document views I've had in the last 30 days: even fewer than LinkedIn. I can also see what country the views were from and which referring site and keyword led them to find my profile/article.
Now of course, there is also Google Analytics, which can tell me how many people have viewed my blog, and Twitter and Hootsuite between them can give me an idea of who is following me and how many people click on the links I share, and so on and so forth... If I had the time to track all of this, then I might be able to see whether (and which) blog posts, tweets and activities in general are having some kind of impact... but still, I'm just happy that Google AdWords has suggested some words associated with my work interests!
January 17, 2012
Open Access briefing paper from SCONUL
Writing about web page http://www.sconul.ac.uk/news/OAbriefing/OA_impact_briefing.pdf
Alma Swan's latest briefing paper for Research Libraries UK and SCONUL is available online. It has some great little graphs showing the citation advantage of open access publication for those in Engineering, Clinical Medicine and the Social Sciences, as well as a case study of the effect on citations of depositing an author's works in an institutional, open access repository.
The paper also explains the value of an open access repository in supporting the impact of research work: making scientific findings and resources available to the public, and helping to engage lay people in "citizen science" projects like Galaxy Zoo.
The briefing also discusses the value of OA to a knowledge-based economy, and it is a great, brief overview of all these topics.
September 27, 2011
Recognition for your research
Writing about web page http://www.rcuk.ac.uk/research/Pages/ResearchOutcomesProject.aspx
Researchers are increasingly expected to be able to demonstrate their worth, and to have their research measured. There are different parties interested in measuring research, and the emphasis of each seems to vary:
- The Research Excellence Framework (REF) exercise, which determines UK government funding via HEFCE (the Higher Education Funding Council for England).
- University management, who want their departments to score well in the REF, but also want to ensure that they are getting the best out of their staff (or the best staff!) and possibly that the University scores well in published rankings.
- The Research Councils UK: these guys are mandating open access publishing and good data management practice, and they are looking at how research will have impact. Their "Research Outcomes Project" lists the following types of outcome that institutions are to report on:
- Publications
- Other Research Outputs
- Collaboration
- Communication
- Exploitation
- Recognition
- Staff Development
- Further Funding
- Impact
This list certainly gives food for thought about how research generates such outcomes.
It is also notable that it is the institution that is to report on such outcomes: reporting is not entirely down to the Principal Investigator.
June 28, 2010
Impact in the Context of the REF
Writing about web page http://www.kcl.ac.uk/iss/support/ref/june2010
On Friday last week I attended an event at King's College London, all about REF measurement of research impact.
General plans
David Sweeney spoke about the need to justify public expenditure and that this is worth doing because we believe in research, and in "new knowledge deployed for the benefit of society". Selective funding will maximise the contribution of research, and we need to consider how we should select...
David recommended that anyone interested in what the REF will be measuring should start with the document released by HEFCE just before the election... I'm not sure exactly which document that was! In any case, they are still waiting for the results of their impact pilot, and in particular for the feedback from the peer review panels, which will take until some time in July.
David's "Next steps" slide listed:
- Completing the impact pilot exercise
- Setting up the expert panels
- Equalities and diversity steering group
- Development of the submissions system - to be overseen by an expert steering group
- Establishing formal institutional REF contacts.
Pilot process
Graeme Rosenberg spoke next, about the progress of the impact pilot exercise. Graeme gave a definition of "impact" in the REF context as including "economic, environmental, social, health, cultural, international, quality of life & others". They are definitely NOT looking for academic impact.
This impact must arise from research work specifically rather than from broader work that academics engage in, and it should be about actual impact that has already occurred rather than projected or potential impacts. The RCUK might be concerned with future impacts, through the grant application process, but the REF is concerned with historical impact.
On the day, there was some confusion as to whether it is best to concentrate on the impact of an individual researcher or on a portfolio of research, which might actually belong to a group, when writing up case studies. Graeme's point was simply that those who submit to the REF ought to consider carefully which is appropriate for their context. Graeme also made the point that the REF is about assessment rather than measurement of research impact.
The pilot looked at 5 different disciplines. Each institution which took part was asked to provide an "impact statement" about a broad range of activities, plus case studies with a word limit of 1,500 words each: 1 case study per 10 FTE staff, with a minimum of 2 case studies for the smallest departments.
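As I understood the quota, it works out as below; the exact rounding rule wasn't spelled out on the day, so rounding up is my assumption:

```python
import math

# One case study per 10 FTE staff, with a floor of 2 for small departments.
# Rounding up to the next whole case study is my assumption, not HEFCE's rule.
def case_studies_required(fte: float) -> int:
    return max(2, math.ceil(fte / 10))

for fte in (8, 23, 47):
    print(f"{fte} FTE -> {case_studies_required(fte)} case studies")
    # 8 FTE -> 2, 23 FTE -> 3, 47 FTE -> 5
```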
Research included in the case studies had to have taken place at the submitting institution. This has implications because researchers move on: their work would count towards their previous institution's submission, if that institution had kept a record of their research and its impact! Or, failing that, if it was at least successful in reaching former staff to ask for their contribution to a case study. (NB could retired staff's previous work count in this context?) Narrative evidence had to be supported by indicators and references to external sources, eg links to websites or names of people (external to the institution) to contact for a reference.
The pilot's panels were made up of both researchers and "users", and each case study was looked at by at least 4 people from its panel, preferably two of each type.
Theory of research measurement/assessment
The next speaker was Claire Donovan, who spoke about the Australian experience of backing away from measurement/assessment (Research Quality Framework) with a change of government. Claire introduced some interesting concepts, speaking about a "relevance gap" between the research that society most needs and that which academics produce, and about "Triple bottom line accounting" which takes into account social and environmental consequences, as well as the economic ones. Claire also spoke about "Technometrics" and "Sociometrics" and she amused the audience by saying that the latter had been said to make alchemy look good!
Four definitions which had been proposed for the Australian RQF (which never happened) remain in the current ERA, and they look very useful (to me) for anyone considering what the impact of their research might be or have been, and looking to gather evidence of it:
1) Social Benefit: Improving quality of life, stimulating new approaches to social issues; changes in community attitudes and influence upon developments or questions in society at large; informed public debate and improved policy-making; enhancing the knowledge and understanding of the nation; improved equity; and improvements in health, safety and security.
2) Economic Benefit: Improved productivity; adding to economic growth and wealth creation; enhancing the skills base; increased employment; reduced costs; increased innovation capability and global competitiveness; improvements in service delivery; and unquantified economic returns resulting from social and public policy adjustments.
3) Environmental Benefit: Improvements in environment and lifestyle; reduced waste and pollution; improved management of natural resources; reduced consumption of fossil fuels; uptake of recycling techniques; a reduced environmental risk; preservation initiatives; conservation of biodiversity; enhancement of ecosystem services; improved plant and animal varieties; and adaptation to climate change.
4) Cultural Benefit: Supporting greater understanding of where we have come from, and who and what we are as a nation and society; understanding how we relate to other societies and cultures; stimulating creativity within the community; contributing to cultural preservation and enrichment; and bringing new ideas and new modes of experience to the nation.
Claire's main point was an argument for complexity in whatever measurement/assessment of research takes place.
Putting together a University's return
The next three speakers were from universities which had taken part in the pilot, and they spoke about their approaches to, and the issues identified in, submitting their impact statements and case studies. The afternoon breakout sessions included similar information, so they are also described here: I attended the ones on Clinical Medicine and on Social Work and Social Policy.
Approaches:
- The HoD identifies whom to approach and ask to submit a case study: ask for twice as many as needed and then choose the best.
- Academics write the materials themselves but then they are re-drafted by the Uni's "research support" team.
- Involve the press office in the writing of case studies.
- Set up a writing team to create the case studies, steered by a group of senior academics who provided stories and contacts for the writing team to interview, and supported by a reading group of external people.
- Training of academic staff who write the materials.
- "Impact champions" from amongst the academic community to encourage others in the right direction.
- “Ebullient re-writes in marketing speak don’t go down well with academics”!
Issues:
- Too much focus on the problem that the research was intended to address, while the actual benefit/impact is not properly described.
- Repeated iterations were needed for almost all case studies, in both speakers' experiences.
- Impact statements were harder to write than case studies and there was much overlap in the way the documents were written up.
- The process creates a tendency to focus on impacts that can be measured, rather than on those which matter most.
- Attribution and provenance of impact is time-consuming to demonstrate (gathering of evidence!).
- Cannot guarantee confidentiality of HEFCE materials (because of FOI) so could not include impacts in commercial sector where industrial sensitivities were involved.
- Belief that Universities don't create impact, they contribute to it.
- Difference between activity and impact: evidence tends to be about activity, rather than about impact. Is this "interim impact" ie not prospective but not yet historical either?
- Patchy provision of evidence by the academics: eg the dates and times they sat on government research committees, or the contribution their research made to a white paper or green paper, is not properly referenced.
- Negative findings can also have impact: how/whether to write these up? (NB none admitted to it on the day!).
- Sometimes the research might have had impact (or have been expected to), but political (or other) reasons got in the way.
- If you're only writing 1000 words then you have to pitch it in relatively simple terms, and some of the panel are "users" rather than academics, so the academic discipline's writing style is not always appropriate... but this is a balance, because some panel members are academics as well, and even the "users" are not an entirely lay audience.
Some found that subject knowledge was vital in supporting a department in writing case studies, whilst others felt that a lay person, or someone from a different discipline, added a valuable perspective. Most case study writing seems to employ social scientists' skills. The subjective nature of the selection and then the panel process makes scientists nervous.
Panel review process
The panel process was then described by the chairs of two panels: Alex Markham (Clinical Medicine) and Judy Simons (English Language & Literature).
Panels were made up of distinguished individuals because it is important that the community should value the panel and therefore the process. It is good for the academic sector to create these kinds of case studies: Alex Markham was formerly director of Cancer Research UK, and when they told the public what they were doing, they doubled their income!
General tips, some of which relate nicely to Claire's definitions above:
- What did the institution do to generate the impacts cited?
- “Reach and significance”: balance the two. Hundreds of cases of athlete’s foot, or just a handful of people who wouldn’t have survived otherwise? Both have benefit and impact, but which is most important?
- Question of how far down the pathway of having an impact is the case study? (ie the point Graeme made about projected or historical impact).
- Panel chose to err on the side of positivity.
- Question of a specific objective which was achieved: this is a much more impressive impact than an unintentional one. Intended impact gets credit! It didn’t just happen without being planned. (NB this needs to be balanced against the point about dwelling too much on what the research was intended to address.)
- Avoid “look at what I’ve done in my distinguished career” style of a case study as this is captured in other elements of the REF.
- Does impact equal engagement? Or benefit? How to measure? Hard or soft indicators? Can’t tell if audience go away thinking about the performance or not…
- Research might not be ground-breaking, but it might be “transportive”.
- Articulating the contribution: use language to reflect the character of the discipline!
- Give hard supportive evidence: name names and provide dates.
- Impact criteria for the humanities were based on the BBC framework for public value.
- Feeding the creative economy – consider the publishing industry.
- Preserves national cultural heritage through partnerships…
- Extend global/national knowledge base (beyond academia).
- Contribute to policy development
- Enrich/promote regional communities and their cultural industries.
- Alex Markham later said in a breakout session that it was obvious when contributions had not been read: he recommended a "reading group" to go through the material before it is submitted.
RCUK Research Outcomes Project
Before the day ended, Sue Smart spoke about this project to create “a single harmonised process allowing us to demonstrate the impact of cross-council activities”. RCUK also have to demonstrate to UK government what the UK is getting from its investment in research. I've written about this elsewhere previously, so I'm not going into detail about it here. David Sweeney contributed to the closing comments to say that RCUK are looking to gather outcomes from all research, whilst REF is picking “the best” of research.
What's next?
In autumn 2010, HEFCE will give institutions pointers about what works in the case studies and what doesn’t. There will be a series of workshops, because more work needs to be done with the arts and humanities and the social sciences in particular.
A broad framework will be devised, with scope for panels to tailor, in consultation with their communities.
June 22, 2010
Measuring impact
Writing about web page http://www.aslib.com/membership/resources/RAND_research.pdf
I wasn't able to attend the recent ASLIB event on Research Support, but it looks like it was a very good one. I particularly like this presentation, and the slides in it about international practice in capturing research impact. I'm gearing up for a forthcoming event on the REF at King's College London on Friday, so I guess impact is on my mind!