All 18 entries tagged Events
October 23, 2012
Round up of activity
Writing about web page http://wrap.warwick.ac.uk/50529
The last month or so has been a busy time for WRAP: we have topped 7,000 full-text, open access articles and are working towards the next milestone, and we have been contributing to a range of different blogs and other sources. So here is a quick round-up of activity in honour of Open Access Week:
- State of the Nation: Finch, RCUK, OA and more - A contribution to the UKCoRR blog on a whole raft of issues arising from the task of implementing the RCUK policy and the Finch report.
- SEO analysis of WRAP, the Warwick University Repository - Our contribution to Brian Kelly's work on search engine optimisation of institutional repositories using the Majestic SEO tool.
- Open Repositories 2012 - A short write up on the Open Repositories conference earlier in the year for the CILIP Update magazine (self-archived to WRAP).
Also for Open Access Week the team are running drop-in sessions in our Research Exchange, so come along if you have any questions about repositories, open access, WRAP, electronic theses or anything else related! Additionally, if you are part of the MOAC DTC or the Society of Biology, keep an eye out for the WRAP team speaking at an event near you later in the week!
July 26, 2012
Open Repositories 2012 – Day Two (minus keynote)
Writing about web page http://or2012.ed.ac.uk/
The second day started early with my final workshop, The Place of Software in Data Repositories, which focused on work done by the Software Sustainability Institute on the role of software in the research process. This again illustrated the slow move away from researchers' rewards being tied to traditional publications, and an acknowledgment that research is now a diverse process involving a huge array of skills. But for researchers to gain the rewards for their work there needs to be a systematic way of storing these things and making them available. The host of the workshop, Neil Chue Hong from the University of Edinburgh, spoke briefly on the new idea of the 'software metapaper' as a way to cite all the different parts of a project. The 'metapaper' is a neat idea to get around one of the problems that has been discussed in terms of citing datasets: the fact that some journals limit the number of references you can use (which seems counter-intuitive to me, but that's another blog post!). The 'metapaper' creates a complete record of a project, citing within it any publications, methodology, datasets or software objects that might all be published in different places, and bundling them into a single citable object. The first journal of this kind, the Journal of Open Research Software, is due to launch soon. The long-term preservation of software arising from JISC-funded projects was also mentioned as an issue that JISC is beginning to grapple with now.
Breakout groups centred on the range of factors that need to be considered when making software available in a repository. This brought up many of the main issues that we had been discussing in relation to datasets: versioning, external dependencies (software is not Perl code alone), drivers to deposit and trust issues all came up again. Key amongst these challenges were sustainability and the reuse of the software by its customers. Much of the discussion centred on what exactly it is that you need to archive for the curation of software: just the text document containing the code? Or would you need to host executable files and the associated virtual machines as well? What does a trusted repository look like? One interesting issue that also came out of this was the need to store malicious software, for training and testing purposes, while flagging these items really clearly in an open repository for fear of a range of problems! This morning was a great eye-opener on the range of questions specialist types of material can raise for repositories, making it ever clearer that generalist repositories, like our institutional repository, may not be suitable for storing everything.
The main conference started in the afternoon with a fantastic keynote by Cameron Neylon, the new director of advocacy at the Public Library of Science (PLoS). I had so much to say about this that it has its own post (to follow)!
Following the keynote was the 'Poster Minute Madness', a brilliant idea for presenting posters at a conference and a way to get people excited about their content. It is a surprisingly nerve-wracking experience for presenters despite being only 60 seconds long! (Our poster can be found in WRAP, self-archived on the way to the conference.) As when I last saw this at OR10 in Madrid, I was blown away by the range of activities being undertaken by repositories around the world and the exciting projects people are thinking up! Highlights for me were:
- Brian Kelly and Jen Delasalle on social networks and repositories
- Chris Awre and others on history data management plans
- Helen Kenna and Karen Bates on Salford's digital archives
- QUT's poster on enhanced usage stats (very much our ideal situation)
But all of the posters were well worth the time taken to read them; I was just disappointed that I didn't get more time talking to people about my poster or talking to others about theirs! The poster reception followed the last events of the day at the stunning Playfair Library.
From here the conference started the parallel sessions, which as usual reminded me of being at a music festival where three of the bands you want to see are all playing at the same time on different stages! (Here I'll add a huge thank you to the organisers for videoing all the presentations so I could watch the ones I missed!) In the end I plumped for the sessions on the development of shared services, which gave an interesting view of a number of countries that are using national shared services as the base of their repository infrastructure. For every advantage of this kind of service I think of, I'm reminded of the really rich, heterogeneous environment we have in the UK, where every repository works a little differently for different people, and I think it's worth the frustrations that always arise when you try to make the systems talk to each other (a sketch of the usual mechanism is below)! It was good to hear about the progress of the UK Repository Net+ project, which looks like it has the potential to do a lot of good for repositories in the UK, and the news from the World Bank of their aggressive Open Access policies is also really encouraging!
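As an aside, the usual way repositories "talk to each other" is harvesting over the OAI-PMH protocol. Here is a minimal sketch in Python of fetching records from a repository's OAI-PMH endpoint; the endpoint URL is a placeholder, not any real repository's address.

```python
import requests

ENDPOINT = "https://repository.example.ac.uk/cgi/oai2"  # hypothetical endpoint

response = requests.get(ENDPOINT, params={
    "verb": "ListRecords",       # OAI-PMH verb for bulk record harvesting
    "metadataPrefix": "oai_dc",  # simple Dublin Core, the baseline format every endpoint must support
})
response.raise_for_status()
print(response.text[:500])  # raw XML; a real harvester would parse this and follow resumption tokens
```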
July 16, 2012
Open Repositories 2012 – Day One
Writing about web page http://or2012.ed.ac.uk/
This is the first of a series of blog posts on my reflections on the 7th International Conference on Open Repositories. I've split the post by the days of the conference mainly to avoid this being the longest blog post ever and to make it easier to refer to later.
Day one was taken up with half-day workshops, a fantastic idea that allowed a level of interaction that some of the later sessions couldn't. All the workshops seemed to feature great discussions on relevant topics and a great comparison of different practices in different countries and institutions. My day one workshops were:
- ISL1: Islandora - Getting Started
- DCC: Institutional Repositories & Data - Roles and Responsibilities
- And an optional evening workshop on EDINA's Repository Junction Broker project.
The Islandora workshop was fascinating! I'd not seen very much of the software or its potential before, and the workshop was a great introduction to everything about it, from the architecture and underlying metadata to the different Drupal options for customising the front end. Their system of 'solution packs', Drupal modules that allow you to drop functionality for different content types into the system, is a great idea and allows the system a degree of flexibility not found in other systems yet (although the EPrints Bazaar might get there soon). They demoed a books solution pack for paged content as well as discussing forthcoming solution packs for institutional repository (IR) functionality and Digital Humanities projects. Islandora maintain a web-based sandbox environment, wiped clean each evening, to allow people to experiment; I'm looking forward to playing with it as we scope new software for future projects. I also like the fact that the software is completely open source, following the replacement of the Abbyy OCR software with the open source equivalent Tesseract. Islandora, as the 'new' player in the market, is managing to provide the same functionality the other systems do, with a collection of exciting add-ons. However, I do see that as you add the extra functionality you have to maintain a number of additional modules as well as the core software, which could have resource implications down the line.
The afternoon workshop, run by the Digital Curation Centre, was a nice mix of presentations on the current thinking of a number of projects from around the world and group debate on the week's 'hot button' topic of Research Data Management (RDM). This topic was to come up time and again during the week, as most of the talks and discussions touched on it at least a little. As the title suggested, the main thrust of the discussion was around who was responsible for what! Discussions covered a range of topics, and the messages that came out most strongly for me were:
- Use the discipline data centres as much as possible, no IR (data or otherwise) can, or should, do everything.
- Knowing where the other data centres are is essential.
- Try not to get bogged down trying to 'fix' everything first time, fix what you can and work on the rest later or you could end up doing nothing.
- Interesting point from Chris Awre at Hull, use the IR as a starting point for discussions to move the researcher's thinking from what you have to what they need.
- Try to get into researchers' workflows as early as possible, as it makes creating the metadata easier for the researcher, which in turn helps the archive.
- Are repositories qualified to appraise the data deposited with them?
I'll admit that the whole area of RDM is a scary one, but it was good to realise that there are both (a) a lot of people out there feeling the same and (b) a lot of assistance available for when it's needed. The idea of just getting something in place and fixing the rest later feels a bit counter-intuitive to me but, on the other hand, it's what I've been doing with WRAP's development over the last two years; it's just that someone else had to take the first step!
The final workshop of the day was an informal one in the evening discussing the development of EDINA's Repository Junction Broker project, which is going to form part of the services offered by the UK Repository Net+. The discussion centred on extending the middleware tool developed by EDINA to allow publishers to feed deposits directly into repositories as a service to researchers. As ever this sounds like a fantastic idea, and the debate was active and enthusiastic as the various stakeholders discussed how to make this work for both repositories and publishers. Certainly as far as WRAP is concerned, if what we need to do is get our SWORD2 endpoint up and running then that is what we have to do; the services offered by the Repository Junction Broker are far too good to miss out on! (A rough sketch of what a SWORD2 deposit looks like is below.) I'll be watching this develop with interest....
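For the curious, SWORD2 is essentially AtomPub over HTTP: a deposit is a POST of an Atom entry (or a packaged file) to a repository's collection URI. A minimal sketch in Python, assuming a hypothetical endpoint and placeholder credentials rather than WRAP's actual configuration:

```python
import requests

COLLECTION_URI = "https://repository.example.ac.uk/sword2/collection"  # hypothetical endpoint

# A bare-bones Atom entry carrying the deposit metadata.
entry = """<?xml version="1.0" encoding="utf-8"?>
<entry xmlns="http://www.w3.org/2005/Atom"
       xmlns:dcterms="http://purl.org/dc/terms/">
  <title>Example article title</title>
  <dcterms:abstract>Abstract text goes here.</dcterms:abstract>
</entry>"""

response = requests.post(
    COLLECTION_URI,
    data=entry.encode("utf-8"),
    headers={"Content-Type": "application/atom+xml;type=entry",
             "In-Progress": "true"},  # SWORD2 header: deposit not yet finalised
    auth=("depositor", "password"),   # placeholder credentials
)
print(response.status_code)  # 201 Created on success
```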
More on day two soon....
July 09, 2012
JISC/DataCite Event
Writing about web page http://www.bl.uk/aboutus/stratpolprog/digi/datasets/workshoparchive/archive.html
Last Friday saw the second DataCite event, jointly hosted by JISC and the British Library, this one focussing on metadata for datasets. This is an area of interest for me, not just because of the developments in research data management (RDM) that are starting to impact repositories but also because of my background in metadata. The organisers warned us that they were starting with the basics and getting increasingly complicated as the day went on; this was certainly true!
The sessions started with a very good introduction, from Alex Ball of the DCC, to some of the essential metadata needed for both data citation and data discovery. As he put it, the difference is between known-item searching (data citation) and speculative searching (data discovery). The needs of users undertaking these two activities are fundamentally different but do overlap. Through analysis of 15 schemas being used by data centres at the moment, he highlighted 16 common metadata fields that appear in the majority of the schemas. None of these fields will come as much of a surprise to people creating and using metadata, but they might be unfamiliar to the researchers who may have to create this metadata.
Elizabeth Newbold, from the British Library, spoke about the development of the DataCite schema, listing the essential/mandatory fields that they expect people to provide DataCite with in return for minting DOIs. These fields mostly represent the fields for data citation mentioned by Alex, but DataCite is hoping that data centres will supply some of the additional metadata for discovery as well. This is key to the BL's other presentation, from Rachel Kotarski, who spoke about developments at the BL in transforming the DataCite metadata into MARC records for use in the main BL catalogue. Rachel spoke about a pilot project to add the dataset metadata into a trial instance of Primo as a 'proof of concept', to assess whether users were looking for this kind of material and, if so, what kind of metadata they wanted when trying to discover it. At least one JISC RDM project in the room now plans to send much more of their metadata to DataCite to allow better harvesting by the BL, and it's certainly something we need to bear in mind when developing Warwick's services in this area.
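To make the mandatory fields concrete, here is a sketch of a minimal DataCite-style record built in Python. I believe the mandatory kernel at the time covered identifier, creator, title, publisher and publication year; the schema version, DOI and all values here are invented for illustration, so check the DataCite documentation before relying on any of this.

```python
import xml.etree.ElementTree as ET

NS = "http://datacite.org/schema/kernel-2.2"  # assumed schema version
ET.register_namespace("", NS)

# Build the five mandatory citation fields of a DataCite-style record.
resource = ET.Element(f"{{{NS}}}resource")
identifier = ET.SubElement(resource, f"{{{NS}}}identifier", identifierType="DOI")
identifier.text = "10.1234/example.dataset.1"  # placeholder DOI

creators = ET.SubElement(resource, f"{{{NS}}}creators")
creator = ET.SubElement(creators, f"{{{NS}}}creator")
ET.SubElement(creator, f"{{{NS}}}creatorName").text = "Smith, Jane"

titles = ET.SubElement(resource, f"{{{NS}}}titles")
ET.SubElement(titles, f"{{{NS}}}title").text = "An example dataset"

ET.SubElement(resource, f"{{{NS}}}publisher").text = "University of Example"
ET.SubElement(resource, f"{{{NS}}}publicationYear").text = "2012"

print(ET.tostring(resource, encoding="unicode"))
```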
David Boyd from the data.bris project laid out in detail how they are building on the new capacity for data storage at Bristol to build integrated services around data registration, publishing and discovery. This was an excellent insight into how one university has conceived the whole data model, and it highlighted some key areas of integration with other services that are possible with joined-up processes. I particularly took away the details of the range of ways in which they are thinking about automating metadata creation to remove some of the burden on researchers. Michael Charno from the Archaeology Data Service gave some insight from one of the existing data centres, which has been in the game a lot longer than most, in a fascinating talk entitled '2000 years in the making, 2 weeks to record, 2 days to archive, too difficult to cite?'. The ADS model charges data creators/projects to host their data and presents the data free at the point of access to the user. Their current challenges include persuading users to reuse data, and data loss: archaeology is inherently a destructive process, so the records of the excavation are often the only evidence remaining at the end of a project. Michael pointed us all towards a set of guidance documents and toolkits used by the ADS to advise researchers on creating metadata, but admitted they didn't have any evidence on the amount of use these tools got. Another area of work discussed was mapping the ADS's current schema, developed in house, to the new DataCite schema.
The final two talks highlighted issues of interoperability, with Steve Donegan from the STFC speaking about the difficulties of reconciling the variety of different schemas used by the different environmental sciences as part of developing the NERC Data Centre. He highlighted the different metadata needs of scientists, who want the raw data, and government agencies, who want the data at one level of analysis higher for policy decisions. Steve finished by discussing in some technical depth the challenge of making the NERC data compliant with INSPIRE, the European standard. Finally, David Shotton of the University of Oxford spoke on a range of projects at Oxford looking at the DataCite metadata. Firstly, he has worked on a schema to make DataCite metadata available in RDF (the new mapping, DataCite2RDF, is available in draft form at http://bit.ly/N3VKsx) using a range of SPAR ontologies. He also spoke about a colleague's project to create a web form to help researchers generate DataCite metadata in an easily exportable XML format, and finally on the importance of citing data in the reference list as well as in the text, allowing it to be picked up by services like Scopus and Web of Knowledge.
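To illustrate the general idea of dataset metadata as RDF (this is not the actual DataCite2RDF mapping, which lives in the draft linked above), here is a toy sketch using rdflib with plain Dublin Core terms; the DOI and values are invented:

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCTERMS

g = Graph()
dataset = URIRef("https://doi.org/10.1234/example.dataset.1")  # placeholder DOI

# Describe the dataset as machine-readable triples.
g.add((dataset, DCTERMS.title, Literal("An example dataset")))
g.add((dataset, DCTERMS.creator, Literal("Smith, Jane")))
g.add((dataset, DCTERMS.publisher, Literal("University of Example")))
g.add((dataset, DCTERMS.issued, Literal("2012")))

print(g.serialize(format="turtle"))
```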
Discussion at the end of the day centred on versioning and DOIs for subsets of datasets, as well as the importance of keeping things machine-readable as well as human-readable! Overall it was a fascinating day that provided a little of everything, from clear guidance on the basics and the essential metadata required for the core functions, to very complex topics showing how far good metadata can take you. Lots of food for thought for the development of our own services!
March 09, 2012
Web 2.0, Creative Commons and Orphan Works
Writing about web page http://www.rsp.ac.uk/
Today was the RSP's first webinar on 'Web 2.0, Creative Commons and Orphan Works' and, as it was on a subject dear to the hearts of many of us here at Warwick, we arranged to watch it as a group. Presented by Charles Oppenheim (Emeritus Professor of Information Science at Loughborough University), the webinar covered an array of current topics and concerns around the introduction and ever-increasing use of collaborative tools and new licences.
The central theme, as Prof Oppenheim stated, was around the way copyright is affecting the way we use technology but more importantly how our use of technology is affecting copyright.
Discussions covered:
- Web 2.0 as a novel challenge to the existing configuration of copyright law
- A closer look at 'performance rights' as an integral part of the process of disseminating recorded lectures
- JISC's excellent Web2Rights toolkit as a single source of guidance and advice on all areas of copyright, but primarily Web 2.0 material
- Managing complaints (quickly take down, investigate, don't forget to apologise and offer credit or reimbursement as appropriate)
- Basics of Creative Commons Licences
- An important caveat: people sometimes apply Creative Commons licences when they do not have the rights to do so
- [One delegate alerted us all to a browser plug-in to help identify CC-licensed material, OpenAttribute, which I will be investigating]
- Change is coming, both in the form of the UK's response to the Hargreaves Review and in the EU with a directive on orphan works.
- Vicarious liability can be argued so that the institution is liable even if the student is only using equipment provided by the University (a wifi hub, for instance), but only if the institution can be proved to have had the "right, ability or duty" to control the actions of the student who infringed copyright.
- Non-commercial is very much not the same as non-profit, a loss-making activity that takes any money at all is still a commercial activity.
- Creative Commons licences are definitely not just for artworks, but can cover anything and everything (except software, which is better served by a GNU licence)
A very interesting and thought-provoking talk (as all Prof. Oppenheim's are); watch this space for the write-up of the second RSP copyright webinar, on the topic of proposed changes to copyright law.
Thanks again to everyone involved in the webinar!
August 09, 2011
SHERPA RoMEO and Publishers – RSP Event
Writing about web page http://www.rsp.ac.uk/events/romeo-for-publishers/
Thankfully I'm new enough to the whole repository business that I've never had to try to manage or populate an open access repository without the help of SHERPA's RoMEO service, and I hope I'll never have to! So an event presenting a number of new developments and the chance to engage with publishers' representatives was too good to miss out on!
The event itself gave two really clear messages: we are all on the same side and clarity is everything. The clarity message was raised again and again, all the various players in this community need clarity and consistency in who says what, what means what and what we can do with what (to badly paraphrase Bill Hubbard). Another message that came from both RoMEO and representatives of the Repository community (Enlighten Team Leader Marie Cairney) was that at the end of the day, as much as we care about Open Access, we don't mind being told 'no' as long as it's clear that that is what you are saying.
Some highlights from the sessions:
- "Change is coming" was the title of the latter part of Bill Hubbard's (Centre for Research Communication) presentation and highlighted the many areas (peer-review, end of the Big-Deal (?), social research tools (Mendeley etc.), demands for free access, cross-discipline research, possibility of institutions taking more control of the intellectual property produced by the institution and more) where we might be seeing change that affect the way we work in the next ten years. No doubt there will be others we haven't thought of yet.
- Azhar Hussain (SHERPA Services) continued the theme of opportunity by highlighting some interesting statistics for RoMEO. The service currently stands at 998 publishers covering 18,000+ journals and bringing in nearly 20,000 visits a month. Also highlighted was the growth of usage from within CRIS systems, something RoMEO is tracking closely.
- Mark Simon from Maney Publishing spoke about the reasons behind the company's decision to 'go green' as well as highlighting the fact that for Maney, as they broadly publish for learned societies, the copyright of published work often does not rest with Maney itself, but with the society. Mark also highlighted the cost of their 'Gold OA' options (STM journals $2000, humanities journals $800, some tropical medicine journals $500), stating that the disparity was due to STM journals costing more to produce and the fact that more people want to publish in them.
- Marie Cairney (Enlighten, Glasgow University) spoke about some of the recent developments to Enlighten, including using the 'Supportworks' software to better track enquiries and embargoes. She also highlighted the changes to publisher policies over the years that have caused problems for her team, most of us can guess which ones she mentioned! Marie's final message was that the more clarity we can get on policy matters, the more deposits we can get.
- Jane Smith (SHERPA Services) spoke on a similar subject and touched on many of the common pitfalls that can occur when contacting publishers to clarify policy. These included: no online policy, no single point of contact, two contradictory responses from different parts of the same company, and more. Jane ended with a plea for publishers to let RoMEO know when their policy changes so they can get the information out as quickly as possible, and for copyright agreements/policies to be written in clear English.
- Emily Hall from Emerald was up next. One point clearly highlighted from the outset was that Emerald is a 'green' publisher (it couldn't really have been any other colour!). Emily also spoke about the decision not to offer 'Gold OA' options (not felt to be good for the publisher or to work for the disciplines they mostly publish in) and touched on issues with filesharing. (Trivia: Emerald's most pirated book is 'Airport Design and Control 2nd Ed.') Emily did mention that Emerald haven't been able to 'see' their content in Mendeley (as of this morning listing more than 100 million papers) yet, but they are looking for a way to do this. One thing that came out of the discussion at the end of the talk was an idea for publishers to return versions to authors with coversheets clearly indicating what they can and can't do with each version.
- Peter Millington (SHERPA Services) finished the presentations with a demonstration of a new policy creator tool developed to be used with RoMEO. This tool, based on the repository policy tool created as part of the OpenDOAR suite of tools, would allow publishers to codify their policies into standardised language as a way of helping people to read and understand the policy of their publisher/journal. I for one hope publishers start using this tool as standard. The prototype version of the tool is available now and can be found here.
The breakout session that followed the presentations asked us to consider four questions (and some of our answers):
- How can RoMEO help Publishers? (Track changes to policy; a visual flag for publishers to use on their websites to indicate the 'colour' of the journal; act as a central broker for enquiries, so that one service has a direct contact at each publisher that can be accessed by all, creating a RoMEO knowledge base of all the enquiries for all repositories to use)
- How can Publishers help RoMEO? (Nominate a single point of contact, create a page for Repository Staff similar to their pages for 'Librarians', ways to identify academics (see previous blog post), clarity of policy)
- What message do Publishers have for Repository Administrators? (Thank you for the work done checking copyrights, don't be scared to talk to us, always reference and link back to the published item.)
- What message do Repository Administrators have for Publishers? (Clarity (please!), make it clear what is OA content on your website, educate individuals on copyright, communicate with us!)
A full run down of the answers to those four questions can be found at the link above.
The final panel discussion raised interesting questions that we didn't really find answers for! Issues of multimedia items in the repository; including datasets in the repository or finding ways to link the dataset repository to an outputs repository - DOIs for datasets (see the British Library's project on this topic); and the matter of what to do when corrections and/or retractions are issued by publishers. The last one at least gave me some food for thought!
The event was another valuable day from the RSP featuring lively discussions on current situations and challenges facing the repository community and an invaluable opportunity to meet and have frank discussion with the Publishing Industry representatives. I think both groups got a lot out of the day along with the realisation that we have a lot more in common than might seem obvious at first glance.
August 08, 2011
IRIOS Workshop – Part Two: Comment and Workshops
Writing about web page http://www.irios.sunderland.ac.uk/index.cfm/2011/8/1/IRIOS-Workshop-Parellel-Sessions
One thing I took away from the workshop session was that both systems, ROP and IRIOS, were doing the right things and going in the right directions, but weren't quite there yet. A big concern to me as an IR manager (and as a former Metadata Librarian) was that the IRIOS system creates yet more unique identifiers (see later in this entry for further discussion of unique IDs). Also, automation of the linking of projects to outputs can't come fast enough, especially for services like WRAP where we spend a not inconsiderable amount of time tracking down funding information from the papers. However, we could also benefit from taking information from systems such as this, which tie the recording of information about outputs much more closely to the money - always a motivator for people to get data entered correctly!
I think it is telling that more and more of these 'proof of concept' services are being developed using the CERIF data format (after R4R, I'm looking forward to hearing about the MICE project early next month), but the trick with a standard is that it is only a standard if everyone is using it. I don't think we are quite there yet; this coming REF has been such an uncertain process so far that I think there is a lot more chance of CERIF being the main deposit format in the next REF. (If I'm still here for the next REF I'll have to reflect back on this and see if I was right!)
The afternoon of the workshop was taken up with a number of discussions on a range of topics; below are a few of the notes I took in the two discussions I took part in. To see the full run-down of all of the discussions please see the link above.
Universal Researcher IDs (URID)
It was generally accepted by all in the discussion that unique IDs for things, be they projects, outputs, researchers or objects, are a good idea in terms of data transfer and exchange. They must be a good idea, as there are so many different ones you can have (in the course of the discussion we mentioned more than eight current projects to create URIDs). Things are much easier to link together if they all bear a single identifier. However, when it comes to people the added issue of data protection rears its head and can potentially hamper any form of identification if it is 'assigned' to the person. A suggested way round this was to allow people to sign themselves up for identifiers, thus allowing those who wish to opt out to do so. Ethically the best route perhaps, but unless a single service is designated we could end up with a system similar to the one we have now, where everyone is signing up for, but not using, a whole array of services. The size of the problem is the size of the current academic community, and global in scope. Some of the characteristics we decided URIDs should have: they should be unique (and semantics-free, given the previously mentioned privacy issues), be assigned from a single place, have a sustainable authority file, and not be tied to a role. One current option that fulfils many of the above criteria is the UUID; however, this falls down in that there is no register of assigned IDs, so people can apply for multiple IDs if they forget them (and let's face it, the likelihood of remembering a 128-bit number is kind of low; see the sketch below)... and we're back in the same situation again. I'm not sure there is a single perfect solution to this problem, though my life would be easier if there was!
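A quick illustration of the UUID approach in Python: the identifiers are unique and semantics-free, but they are generated locally with no central register, so nothing stops someone who forgets theirs from simply minting another.

```python
import uuid

researcher_id = uuid.uuid4()  # a random 128-bit identifier, unique and semantics-free
print(researcher_id)          # e.g. '1b4e28ba-2fa1-4d2a-883f-0016d3cca427'

# There is no authority file or register of assigned IDs: a person who
# forgets their UUID can simply generate a new one, which is exactly
# the duplication problem discussed above.
print(researcher_id.int.bit_length() <= 128)  # True: UUIDs fit in 128 bits
```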
REF Considerations
This was a free-form discussion that covered the REF, REF preparations and 'life after the REF' in various guises. HEFCE are currently tendering for the data to be used in the REF; needless to say the two services bidding are the expected two, Thomson Reuters and Scopus, but HEFCE will only be buying one lot of data. Bibliometrics were touched upon in relation to the REF: is it better for two people to select the same really highly cited paper, or to choose two lower-cited papers? There were discussions on the HESA data: checking the data once it comes back from HESA, and the possibility of mapping future HESA data to the REF UoAs for long-term benchmarking, rather than a single point that goes out of date very quickly. Do people's CRIS systems really hold all of the data required for a return? What are the differences between impact as measured/requested by HEFCE and impact as measured by RCUK? Selection policy and training also came up, including the possibility of sector-wide training; one possible best practice mentioned was the idea of training a small core group of people who would handle all of the enquiries centrally. Would it be possible for institutions to get the facilities data on a yearly basis, rather than getting it just before the REF and then having to chase people who may not remember (or may have left) to verify the data?
One interesting comment from the discussion was the news that NERC, at least, has seen a big increase in the number of grant applications including a direct cost for Open Access funding. This is particularly interesting as a number of people had commented to me that researchers didn't want to do that, fearing it would make their grant applications too expensive.
All in all the day was very interesting for me as an introduction to a 'world beyond publications' (as I was attending both for myself and for a member of our Research Support Services department) and as an indication of what we need to do to go forward.
IRIOS Workshop – Part One: The Presentations
Writing about web page http://www.irios.sunderland.ac.uk/index.cfm/2011/7/28/IRIOS-Workshop-Presentations
The IRIOS (Integrated Research Input and Output System) workshop at the JISC headquarters was designed to demonstrate the preliminary look of a system designed to take information from the RCUK funders on grant awards and combine it with the information from universities' IRs and CRIS systems. The event was attended by research managers, representatives of four RCUK funders and repository managers, and all of the presentations can be seen at the link above.
The event kicked off with a video presentation from Josh Brown of JISC discussing the RIM (Research Information Management) programme of projects. One interesting statistic was that an estimated £85 million a year is spent on submitting grant proposals and administering awards. Once again the savings that can be realised with the use of the CERIF data format were raised, and the point that REF submissions can be made to HEFCE in CERIF was highlighted as a sign of the growing importance of the standard. IRIOS was highlighted as a step towards a more integrated national system of data management. Josh closed with the news of a further JISC funding call, due to be announced soon, to investigate further uses of CERIF.
Simon Kerridge (Sunderland) was up next to discuss the landscape and background of the project and the need for interoperability and joined-up thinking between a number of different university departments if we are to make the most of an increasingly competitive environment. He also spoke of the ways in which IRIOS might feed into the RMAS (Research Management and Administration System) project, further enhancing the cloud-based system. Simon finished by touching on the challenges (research data management and unique researcher IDs, anyone?) and opportunities for the future (esp. the JISC funding call).
Gerry Lawson (NERC) was up next with a whistle-stop tour round the RCUK 'Grants on the Web' (GoW) systems. He started with a stern warning: if the funders and institutions don't find a way to match up the data held by both parties, commercial services will find a way to fill the gap (for example, Elsevier's SciVal is already starting this process), thus putting both parties back into the situation where we have to buy our own data back. Other products are also making a start on this process, as can be seen in UK PubMed Central's grant lookup tool. Gerry made the vital point that all of the information is available, but linking it is going to take work. The RCUK 'Grants on the Web' system is a start in this process as it brings together all of the grants by all RCUK funders in a single system. The individual research councils then use this centralised data to populate their individual GoW interfaces, with each interface being set up to the specifications of the individual research council. With one exception (AHRC), grant data about individually funded PhDs is not included in the GoW systems, due to the RCUK preference for handling funded PhDs through their network of Doctoral Training Centres. Gerry closed by saying there was a real desire from the RCUK to start linking outputs with funding grants (and to expand into research data and impact measures), especially in relation to monitoring compliance with Open Access mandates. Challenges still remain: the need for a common export format (CERIF); authority files for people, projects and institutions; the issue of department structures within institutions changing over time; etc.
Dale Heenan (ESRC), ably assisted by Darren Hunter (EPSRC), discussed the RCUK 'Research Outcomes Project' (ROP). The project was based on the ESRC's Research Catalogue (running on Microsoft Zentity 2.0), extended to meet the needs of the four pilot councils: AHRC, BBSRC, EPSRC and ESRC. (Worth noting that MRC and STFC use the e-Val system.) The ROP system is designed to create an evidence base to demonstrate the economic impact of funded research, and also to reduce duplication of effort. Upload of data can come from a range of stakeholders (grant holders (PIs or their nominated Co-Is), institutions, IRs etc.) and can cover individual items or bulk uploads. Management information is provided using Microsoft Reporting Services. Future challenges for the system include ways to automate the deposit of research outputs, development/adoption of standards such as CERIF, and ways to pull data from external services like Web of Knowledge, PubMed, Scopus etc.
The main presentation of the day was the IRIOS demonstrator, by Kevin Ginty and Paul Cranner (Sunderland). The IRIOS project is a 'proof of concept' demonstrator of a GoW-like service using the CERIF data format and is based on the 'Universities for the North East' project tracking system (CRM). One feature of the service is that four levels of access (hidden, summary, read only, write) can be assigned to three distinct groups (global, individual, groups of users), allowing a fine level of dynamic control over the data contained in the system. All grants and publications have a unique ID that is automatically generated by the system, and any edits made in the current system do not feed back to the system that originated the data. Currently the system only accepts manual linking of grants to outputs (a toy sketch of the idea is below), but there are plans to look into automating this process. In the future it might be possible to import data from larger databases like Web of Knowledge, but information gathered by the research councils indicates that only 40% of outputs are correctly attributed to the grant that funded the research.
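Just to make the linking model concrete, here is a toy sketch in Python of records with system-generated unique IDs being linked by hand. The data and structure are invented for illustration; the real demonstrator is CERIF-based and far richer.

```python
import itertools

_ids = itertools.count(1)

def new_record(title):
    """Create a record with a system-generated unique ID."""
    return {"id": next(_ids), "title": title}

# Invented example data, purely for illustration.
grant = new_record("Research council grant RC/X000001/1")
output = new_record("Journal article arising from the grant")

# Manual grant-to-output linking, as in the current demonstrator.
links = [(grant["id"], output["id"])]

for grant_id, output_id in links:
    print(f"grant {grant_id} -> output {output_id}")
```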
If you would like to try the demonstrator version of the IRIOS system, details on how to log in can be found here.
Comments on the presentations and information on the workshops are to follow in part two.
February 03, 2011
e–Copyright and the Implications of the Digital Economy Act
Writing about web page http://www.nglis.org.uk/diary.htm
"e-Copyright and the Implications of the Digital Economy Act", an NGLIS evening event, Iron Duke, Mayfair, London.
This event, led by Professor Charles Oppenheim, focused on two major challenges of the moment: copyright in the digital environment, particularly Web 2.0 applications, and the newly passed Digital Economy Act. The format of the presentation was very much discursive, so below I've tried to pull out some of the important points. It is worth reiterating Prof. Oppenheim's disclaimer: he is not a lawyer (and nor am I!) and thus neither of us can take any responsibility for anything you may choose to do with the information reported here.
- Copyright law has not kept up with the development of the internet and is in many cases arcane and not 'fit for purpose'. People are either very risk-averse or far too free; a law that is viewed with contempt and/or ignored by a whole generation is obviously not working. There are two reviews of copyright law under way: the Hargreaves Committee, set up by David Cameron following discussions with Google, and a House of Commons Select Committee. However, very little can be done to amend copyright law in this country, as we are subject to EU directives in this area.
- The crucial thing that distinguishes Web 2.0 is that it includes an element of participation/interaction, and this is where it causes issues in terms of copyright. In copyright law, if an item is authored by multiple people (and it is impossible to distinguish who has written what) and you wish to copy it, you must clear the copyright with all the authors. In the case of Web 2.0 just identifying the authors can be a struggle, and if even one of the authors refuses permission you cannot use the item. Wikipedia gets round this by asking all contributors to sign up to T&Cs under which they assign copyright to Wikipedia, so copyright is cleared through a single source. Worth considering if you are planning to create a wiki!
- When asking for permission to copy: no means no, yes means yes, and no reply means no! In law you cannot just say "Unless I hear otherwise I'll go ahead...", hence the problem people have had with the Google Books project. The issue of orphan works is also important here; this was meant to be addressed in the Digital Economy Act but that section was cut. It is an increasing problem due to the rising number of digitisation projects under way and the desire to make archive material publicly available.
- The area where the internet gets people in trouble with copyright law is the protection copyright gives against 'communication to the public', which means electronic communication to two or more people. Copyright also does not allow for 'format shifting', an essential activity in terms of digital preservation - another indication the law is not fit for purpose.
- The Digital Economy Act (DEA), which was passed in wash-up, has some very concerning provisions for libraries and other services that offer free wi-fi access. The idea is for there to be a 'three strikes' rule on the infringement of copyright: on the third infringement a person's internet access is stopped. Obviously this is more of a problem when it targets not individuals but libraries or local authorities! The Act has been passed but not yet implemented. The current government has made a commitment to maintain the law, but when Nick Clegg asked people for the list of the laws they would most like to see repealed, the DEA came top. The Act is currently undergoing a judicial review, brought by two of the country's top ISPs, to verify whether the Act is even legal in its current form.
- 'Cloud computing' was another area discussed, in this case more in terms of the Data Protection Act. While it is a very useful and valuable development in terms of technology, if you are planning to take advantage of it to hold personal data you need to be aware that this could cause you to violate data protection law. This is because under data protection you make the commitment to ensure that anyone you contract or sub-contract to also has adequate data protection provisions/laws. Therefore, as the US is not deemed to have good enough data protection laws, if you contract an American cloud service to hold anything to do with personal information you have violated data protection. This is a problem with a number of countries, and the catch is that if you start trying to limit the geographical location of your data you are no longer getting the advantage of hosting it in the cloud in the first place.
All this was covered and much more; a very interesting evening and much food for thought! It was also nice to hear that I'm not the only person who thinks that the current copyright law needs throwing out, so people can start again with a blank sheet of paper rather than constantly trying to amend a broken law. But in the light of the issues with the internet and the kind of global information flows we are currently seeing, any new law has to be agreed and applied globally, focussing on the needs of everyone trying to use the information rather than just protecting commercial interests.
February 02, 2011
WRAP Reaches 4000 Items!
Writing about web page http://wrap.warwick.ac.uk
The Warwick Research Archive Portal (WRAP) aims to provide worldwide access to the outputs of Warwick researchers to raise the profile of the high quality research being undertaken at the University. Our collection of journal articles and PhD theses has been growing rapidly over the past twelve months and we have just made available the 4000th item in the database.
WRAP has doubled in size in just over a year, and this milestone follows the news in October that WRAP was starting to see more than 1,000 visitors each weekday in the autumn term. Visitors are coming from more than 150 different countries every month and mostly find content through Google.
WRAP’s 4000th item is:
Bruijnincx, P.C.A. and Sadler, P.J. (2009). Controlling platinum, ruthenium, and osmium reactivity for anticancer drug design. Advances in Inorganic Chemistry, 61, pp. 1-62. ISSN: 0898-8838 http://wrap.warwick.ac.uk/4143
Authors are encouraged to submit their journal articles to WRAP online at: http://go.warwick.ac.uk/irsubmit
Visit WRAP: http://wrap.warwick.ac.uk
Find out more about WRAP: http://www2.warwick.ac.uk/research/wrap