WRAP repository blog

All 7 entries tagged Research Data


July 26, 2012

Open Repositories 2012 – Day Two (minus keynote)

Writing about web page http://or2012.ed.ac.uk/

The second day started early with my final workshop, 'The Place of Software in Data Repositories', which focused on work done by the Software Sustainability Institute on the role of software in the research process. This again illustrated the slow move away from researchers' rewards being tied to traditional publications, and an acknowledgement that research is now a diverse process involving a huge array of skills. But for researchers to gain the rewards for their work there needs to be a systematic way of storing these things and making them available. The host of the workshop, Neil Chue Hong from the University of Edinburgh, spoke briefly on the new idea of the 'software metapaper' as a way to cite all the different parts of a project. The 'metapaper' is a neat idea to get around one of the problems that has been discussed in terms of citing datasets: the fact that some journals limit the number of references you can use (which seems counter-intuitive to me, but that's another blog post!). The 'metapaper' will create a complete record of a project, citing within it any publications, methodology, datasets or software objects that might all be published in different places, as a single citable object. The first journal of this kind, the Journal of Open Research Software, is due to launch soon. The long-term preservation of software arising from JISC-funded projects was also mentioned as an issue that JISC is beginning to grapple with now.

Breakout groups were centred around the range of factors that need to be considered when making software available in a repository. This brought up many of the main issues we had been discussing in relation to datasets: versioning, external dependencies (software is not Perl code alone), drivers to deposit and trust issues all came up again. Key amongst these challenges were sustainability and reuse of the software by its users. Much of the discussion centred on what exactly you need to archive for the curation of software: just the text document containing the code? Or would you need to host executable files and the associated virtual machine images as well? What does a trusted repository look like? One interesting issue that also came out of this was the need to store malicious software, for training and testing purposes, while flagging these items really clearly in an open repository for fear of a range of problems! This morning was a great eye-opener on the range of questions specialist types of material can raise for repositories, making it ever clearer that generalist repositories, like our institutional repository, may not be suitable for storing everything.

The main conference started in the afternoon with a fantastic keynote opening by Cameron Neylon, the new director of advocacy at the Public Library of Science (PLoS). I had so much to say about this that it gets its own post (to follow)!

Following the keynote was the 'Poster Minute Madness', a brilliant idea for presenting posters at a conference and a way to get people excited about their content. A surprisingly nerve-wracking experience for presenters, despite being only 60 seconds long! (Our poster can be found in WRAP, self-archived on the way to the conference.) As when I last saw this, at OR10 in Madrid, I was blown away by the range of activities being undertaken by repositories around the world and the exciting projects people are thinking up! Highlights for me were:

  • Brian Kelly and Jen Delasalle on social networks and repositories
  • Chris Awre and others on history data management plans
  • Helen Kenna and Karen Bates on Salford's digital archives
  • QUT's poster on enhanced usage stats (very much our ideal situation)

But all of the posters were well worth the time taken to read them; I was just disappointed that I didn't get more time talking to people about my poster, or to others about theirs! The poster reception followed the last events of the day, at the stunning Playfair Library.

From here the conference started the parallel sessions, which as usual reminded me of being at a music festival where three of the bands you want to see are all playing at the same time on different stages! (Here I'll add a huge thank you to the organisers for videoing all the presentations, so I could watch the ones I missed!) In the end I plumped for the sessions on the development of shared services, which gave an interesting view of a number of countries that are using national shared services as the base of their repository infrastructure. For every advantage of this kind of service I can think of, I'm reminded of the really rich, heterogeneous environment we have in the UK, where every repository works a little differently for different people, and I think it's worth the frustrations that always arise when you try to make the systems talk to each other! It was good to hear about the progress of the UK Repository Net+ project, which looks like it has the potential to do a lot of good for repositories in the UK, and the news from the World Bank of their aggressive Open Access policies is also really encouraging!


Yvonne Budden : 26 Jul 2012 12:38 | Tags: Advocacy Events Open Access Or2012 Reflections Research Data

July 16, 2012

Open Repositories 2012 – Day One

Writing about web page http://or2012.ed.ac.uk/

This is the first of a series of blog posts on my reflections on the 7th International Conference on Open Repositories. I've split the post by the days of the conference mainly to avoid this being the longest blog post ever and to make it easier to refer to later.

Day one was taken up with half-day workshops, a fantastic idea that allowed a level of interaction some of the later sessions couldn't. All the workshops seemed to feature great discussions on relevant topics and a great comparison of different practices in different countries and institutions. My day one workshops were:

  • ISL1: Islandora - Getting Started
  • DCC: Institutional Repositories & Data - Roles and Responsibilities
  • And an optional evening workshop on EDINA's Repository Junction Broker project.

The Islandora workshop was fascinating! I'd not seen very much of the software or its potential before, and the workshop was a great introduction to everything about it, from the architecture and underlying metadata to the different Drupal options for customising the front end. Their system of 'solution packs', Drupal modules that let you drop support for different functionality and content types into the system, is a great idea and gives the system a degree of flexibility not found in other systems yet (although the EPrints Bazaar might get there soon). They demoed a books solution pack for paged content, as well as discussing forthcoming solution packs for institutional repository (IR) functionality and Digital Humanities projects. Islandora maintain a web-based sandbox environment, wiped clean each evening, to allow people to experiment; I'm looking forward to playing with it as we scope new software for future projects. I also like the fact that the software is completely open source, following the replacement of the ABBYY OCR software with the open source equivalent, Tesseract. Islandora, as the 'new' player in the market, is managing to provide the same functionality as the other systems with a collection of exciting add-ons; however, I do see that as you add the extra functionality you have to maintain a number of additional modules as well as the core software, which could have resource implications down the line.

The afternoon workshop, run by the Digital Curation Centre, was a nice mix of presentations on the current thinking of a number of projects from around the world and group debate on the week's 'hot button' topic of Research Data Management (RDM). This topic was to come up time and again during the week, as most of the talks and discussions touched on it at least a little. As the title suggested, the main thrust of the discussion was around who was responsible for what! Discussions covered a range of topics, and some of the messages that came out most strongly for me were:

  • Use the discipline data centres as much as possible, no IR (data or otherwise) can, or should, do everything.
  • Knowing where the other data centres are is essential.
  • Try not to get bogged down trying to 'fix' everything first time, fix what you can and work on the rest later or you could end up doing nothing.
  • Interesting point from Chris Awre at Hull: use the IR as a starting point for discussions, moving the researchers' thinking from what you have to what they need.
  • Try to get into researchers' workflows as early as possible, as it makes creating the metadata easier for the researcher, which in turn helps the archive.
  • Are repositories qualified to appraise the data deposited with them?

I'll admit that the whole area of RDM is a scary one, but it was good to realise that there are both (a) a lot of people out there feeling the same and (b) a lot of assistance available when it's needed. The idea of just getting something in place and fixing the rest later feels a bit counter-intuitive to me but, on the other hand, it's what I've been doing with WRAP's development over the last two years; it's just that someone else had to take the first step!

The final workshop of the day was an informal one in the evening discussing the development of EDINA's Repository Junction Broker project, which is going to form part of the services offered by UK Repository Net+. The discussion centred on extending the middleware tool developed by EDINA to allow publishers to feed deposits directly into repositories as a service to researchers. As ever this sounds like a fantastic idea, and the debate was active and enthusiastic as the various stakeholders discussed how to make this work for both repositories and publishers. Certainly as far as WRAP is concerned, if what we need to do is get our SWORD2 endpoint up and running then that is what we have to do; the services offered by the Repository Junction are far too good to miss out on! I'll be watching this develop with interest....
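For the curious: a SWORD2 deposit is essentially an HTTP POST with a handful of profile-defined headers. Here's a minimal sketch in Python of the request a broker might build against a repository's collection endpoint, as I understand the SWORD 2.0 profile; the URL, filename and email address are all invented, and the request is only built, never sent.

```python
import urllib.request

# Hypothetical endpoint -- the real WRAP SWORD2 URL would differ.
COLLECTION_URI = "https://wrap.example.ac.uk/sword2/collection"

def sword2_deposit_headers(filename, on_behalf_of=None):
    """Headers for a SWORD 2.0 binary (package) deposit."""
    headers = {
        "Content-Type": "application/zip",
        "Content-Disposition": "attachment; filename=%s" % filename,
        # Identifies the package format, per the SWORD 2.0 profile
        "Packaging": "http://purl.org/net/sword/package/SimpleZip",
        "In-Progress": "true",  # the deposit may be augmented before completion
    }
    if on_behalf_of:
        headers["On-Behalf-Of"] = on_behalf_of  # mediated deposit for a researcher
    return headers

headers = sword2_deposit_headers("article.zip", "researcher@warwick.ac.uk")
# Build (but do not send) the POST a publisher/broker would issue:
request = urllib.request.Request(COLLECTION_URI, data=b"zip-bytes",
                                 headers=headers, method="POST")
print(request.get_method())
```

The repository's response would be an Atom entry describing the new deposit, which is what lets a broker report back to the publisher.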

More on day two soon....


Yvonne Budden : 16 Jul 2012 17:57 | Tags: Advocacy Events Funders Open Access Or2012 Reflections Research Data

July 09, 2012

JISC/DataCite Event

Writing about web page http://www.bl.uk/aboutus/stratpolprog/digi/datasets/workshoparchive/archive.html

Last Friday saw the second DataCite event, jointly hosted by JISC and the British Library, this one focussing on metadata for datasets. This is an area of interest for me, not just because of the developments in research data management (RDM) that are starting to impact repositories, but also because of my background in metadata. The organisers warned us that they were starting with the basics and getting increasingly complicated as the day went on; this was certainly true!

The sessions started with a very good introduction from Alex Ball of the DCC on some of the essential metadata needed for both data citation and data discovery; as he put it, the difference between known-item searching (data citation) and speculative searching (data discovery). The needs of users undertaking these two activities are fundamentally different, but they do overlap. Through analysis of 15 schemas currently being used by data centres, he highlighted 16 common metadata fields that appear in the majority of them. None of these fields will come as much of a surprise to people creating and using metadata, but they might be unfamiliar to the researchers who may have to create this metadata.

Elizabeth Newbold, from the British Library, spoke about the development of the DataCite schema, listing the essential/mandatory fields that they expect people to provide to DataCite in return for minting DOIs. These fields mostly represent the fields for data citation mentioned by Alex, but DataCite is hoping that data centres will supply some of the additional metadata for discovery as well. This is key to the BL's other presentation, from Rachel Kotarski, who spoke about developments at the BL in transforming the DataCite metadata into MARC records for use in the main BL catalogue. Rachel spoke about a pilot project to add the dataset metadata into a trial instance of Primo as a 'proof of concept', to assess whether users were looking for this kind of material and, if so, what kind of metadata they wanted when trying to discover it. At least one JISC RDM project in the room now plans to send much more of their metadata to DataCite to allow better harvesting by the BL, and it's certainly something we need to bear in mind when developing Warwick's services in this area.
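To make that mandatory core concrete, here's a rough Python sketch of a record containing just the fields DataCite listed as required at the time, as I understand the schema: identifier (DOI), creator, title, publisher and publication year. The element names are simplified from the real DataCite schema (no namespaces, and the nesting inside creator is omitted), and the DOI and values are invented.

```python
import xml.etree.ElementTree as ET

def minimal_datacite_record(doi, creators, title, publisher, year):
    """Build a simplified DataCite-style record covering only the
    mandatory fields: identifier, creators, title, publisher, year."""
    resource = ET.Element("resource")
    identifier = ET.SubElement(resource, "identifier", identifierType="DOI")
    identifier.text = doi
    creators_el = ET.SubElement(resource, "creators")
    for name in creators:
        ET.SubElement(creators_el, "creator").text = name
    titles_el = ET.SubElement(resource, "titles")
    ET.SubElement(titles_el, "title").text = title
    ET.SubElement(resource, "publisher").text = publisher
    ET.SubElement(resource, "publicationYear").text = str(year)
    return ET.tostring(resource, encoding="unicode")

record = minimal_datacite_record("10.1234/example-dataset", ["Smith, Jane"],
                                 "Example survey dataset",
                                 "University of Warwick", 2012)
print(record)
```

Everything beyond these fields (subjects, descriptions, related identifiers and so on) is the optional discovery metadata that DataCite hopes data centres will also supply.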

David Boyd from the data.bris project laid out in detail how they are building on the new capacity for data storage at Bristol to build integrated services around data registration, publishing and discovery. This was an excellent insight into how one university has conceived the whole data model, and highlighted some key areas of integration with other services that are possible with joined-up processes. I particularly took away the details of the range of ways in which they are thinking about automating metadata creation to remove some of the burden on researchers. Michael Charno from the Archaeology Data Service gave some insight from one of the existing data centres, who have been in the game a lot longer than most, in a fascinating talk entitled '2000 years in the making, 2 weeks to record, 2 days to archive, too difficult to cite?'. The ADS model charges data creators/projects to host their data and presents the data free at the point of access to the user. Their current challenges include persuading users to reuse data, and data loss: archaeology is inherently a destructive process, so the records of the excavation are often the only evidence remaining at the end of a project. Michael pointed us all towards a set of guidance documents and toolkits used by the ADS to advise researchers on creating metadata, but admitted they didn't have any evidence on how much use these tools got. Another area of work discussed was the mapping between the current schema, developed in-house by the ADS, and the new DataCite schema.

The final two talks highlighted issues of interoperability, with Steve Donegan from the STFC speaking about the difficulties of reconciling the variety of schemas used by the different environmental sciences as part of developing the NERC Data Centre. He highlighted the different metadata needs of scientists, who want the raw data, and government agencies, who want the data at one level of analysis higher for policy decisions. Steve finished by discussing in some technical depth the challenge of making the NERC data compliant with INSPIRE, the European standard. Finally, David Shotton of the University of Oxford spoke on a range of projects at Oxford looking at the DataCite metadata. Firstly, he has worked on a schema to make DataCite metadata available in RDF (the new mapping, DataCite2RDF, is available in draft form: http://bit.ly/N3VKsx) using a range of SPAR ontologies. He also spoke about a colleague's project to create a web form to help researchers generate DataCite metadata in an easily exportable XML format, and finally on the importance of citing data in the reference list as well as in the text, allowing it to be picked up by services like Scopus and Web of Knowledge.

Discussion at the end of the day centred around versioning and DOIs for subsets of datasets, as well as the importance of keeping things machine readable as well as human readable! Overall it was a fascinating day that provided a little of everything, from clear guidance on the basics and the essential metadata required for the core functions, to very complex topics showing how far good metadata can take you. Lots of food for thought for the development of our own services!


Yvonne Budden : 09 Jul 2012 09:31 | Tags: Events Open Access Research Data

August 08, 2011

IRIOS Workshop – Part Two: Comment and Workshops

Writing about web page http://www.irios.sunderland.ac.uk/index.cfm/2011/8/1/IRIOS-Workshop-Parellel-Sessions

One thing I took away from the workshop session was that both systems, ROP and IRIOS, were doing the right things and going in the right directions, but weren't quite there yet. A big concern to me as an IR manager (and a former Metadata Librarian) was that the IRIOS system creates yet more unique identifiers (see later in this entry for further discussion of unique IDs). Also, automation of the linking of projects to outputs can't come fast enough, especially for services like WRAP, where we spend a not inconsiderable amount of time tracking down funding information from the papers. However, we could also benefit from taking information from systems such as this, which tie the recording of information about outputs much more closely to the money, which is always a motivator for people to get data entered correctly!

I think it is telling that more and more of these 'proof of concept' services are being developed using the CERIF data format (after R4R, I'm looking forward to hearing about the MICE project early next month), but the trick with a standard is that it is only a standard if everyone is using it. I don't think we are quite there yet; this coming REF has been such an uncertain process so far that I think there is a lot more chance of CERIF being the main deposit format in the next REF. (If I'm still here for the next REF I'll have to reflect back on this and see if I was right!)

The afternoon of the workshop was taken up with a number of discussions on a range of topics; below are a few of the notes I took in the two discussions I took part in. To see the full rundown of all of the discussions please see the link above.

Universal Researcher IDs (URID)

It was generally accepted by all in the discussion that unique IDs for things, be they projects, outputs, researchers or objects, are a good idea in terms of data transfer and exchange. They must be a good idea, as there are so many different ones you can have (in the course of the discussion we mentioned more than eight current projects to create URIDs). Things are much easier to link together if they all bear a single identifier. However, when it comes to people, the added issue of data protection rears its head and can potentially hamper any form of identification if it is 'assigned' to the person. One suggested way round this was to allow people to sign up for identifiers, thus allowing those who wish to opt out to do so. Ethically the best route perhaps, but unless a single service were designated we could end up with a system similar to the one we have now, where everyone is signing up for, but not using, a whole array of services. The size of the problem is the size of the current academic community, and global in scope. Some of the characteristics we came up with were that URIDs should be unique (and semantics-free, given the previously mentioned privacy issues), have a single place that assigns them, have a sustainable authority file, and not be tied to a role. One current service that fulfils many of the above criteria is UUID; however, this falls down in that there is no register of assigned IDs, so people can apply for multiple IDs if they forget them (and let's face it, the likelihood of remembering a 128-bit number is kind of low)... and we're back in the same situation again. I'm not sure there is a single perfect solution to this problem, though my life would be easier if there was!
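For anyone who hasn't run into them, UUIDs can be minted by anyone, locally, with no central authority or register, which is exactly the strength and the weakness discussed above. A quick illustration in Python:

```python
import uuid

# A UUID is a 128-bit identifier that anyone can mint locally, with no
# central authority -- hence the "no register of assigned IDs" problem.
new_id = uuid.uuid4()                   # random (version 4) UUID
print(new_id)                           # 36-character hex-and-dash form
print(new_id.int.bit_length() <= 128)   # True: the value fits in 128 bits
# Minting twice gives two different IDs, so a forgetful researcher simply
# acquires a second identity (collisions are vanishingly unlikely):
print(uuid.uuid4() == uuid.uuid4())
```

The uniqueness comes from the sheer size of the space, not from any authority file, which is precisely why nothing stops one person holding several of them.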

REF Considerations

This was a free-form discussion that covered the REF, REF preparations and 'life after the REF' in various guises. HEFCE are currently tendering for the citation data to be used in the REF; needless to say the two services bidding are the expected two, Thomson Reuters and Scopus, but HEFCE will only be buying one lot of data. Bibliometrics were touched upon in relation to the REF: is it better to have two people select a really highly cited paper, or choose two lower-cited papers? There were discussions on the HESA data: checking the data once it comes back from HESA, and the possibility of mapping future HESA data to the REF UoAs for long-term benchmarking, rather than a single snapshot that goes out of date very quickly. Do people's CRIS systems really hold all of the data required for a return? What are the differences between impact as measured/requested by HEFCE and impact as measured by RCUK? On selection policy and training, the possibility of sector-wide training was raised, with one possible best practice being to train a small core group of people who would handle all of the enquiries centrally. Would it be possible for institutions to get the facilities data on a yearly basis, rather than just before the REF and then having to chase people who may not remember (or may have left) to verify the data?

One interesting comment from the discussion was the news that NERC, at least, has seen a big increase in the number of grant applications including a direct cost for Open Access funding. This is particularly interesting as a number of comments had been made to me that researchers didn't want to do that, as they feared making their grant applications too expensive.

All in all the day was very interesting for me as an introduction to a 'world beyond publications' (as I was attending both for myself and for a member of our Research Support Services department) and as an indication of what we need to do to go forward.


Yvonne Budden : 08 Aug 2011 17:00 | Tags: Advocacy Authors Events Open Access Ref Reflections Research Data Research Management

IRIOS Workshop – Part One: The Presentations

Writing about web page http://www.irios.sunderland.ac.uk/index.cfm/2011/7/28/IRIOS-Workshop-Presentations

The IRIOS (Integrated Research Input and Output System) workshop at the JISC headquarters was designed to demonstrate the preliminary look of a system designed to take information from the RCUK funders on grant awards and combine it with information from universities' IRs and CRIS systems. The event was attended by research managers, representatives of four RCUK funders and repository managers, and all of the presentations can be seen at the link above.

The event kicked off with a video presentation from Josh Brown of JISC discussing the RIM (Research Information Management) programme of projects. One interesting statistic was that an estimated £85 million a year is spent on submitting grant proposals and administering awards. Once again the savings that can be realised with the use of the CERIF data format were raised, and the fact that REF submissions can be made to HEFCE in CERIF was highlighted as a sign of the growing importance of the standard. IRIOS was highlighted as a step towards a more integrated national system of data management. Josh closed with news of a further JISC funding call, due to be announced soon, to investigate further uses of CERIF.

Simon Kerridge (Sunderland) was up next to discuss the landscape and background of the project, and the need for interoperability and joined-up thinking between a number of different university departments if we are to make the most of an increasingly competitive environment. He also spoke of the ways in which IRIOS might feed into the RMAS (Research Management and Administration System) project, further enhancing that cloud-based system. Simon finished by touching on the challenges (research data management and unique researcher IDs, anyone?) and opportunities for the future (especially the JISC funding call).

Gerry Lawson (NERC) was up next with a whistle-stop tour round the RCUK 'Grants on the Web' (GoW) systems. He started with a stern warning that if the funders and institutions don't find a way to match up the data held by both parties, commercial services will find a way to fill the gap (for example, Elsevier's SciVal is already starting this process), putting both parties back into the situation where we have to buy our own data back. Other products are also making a start on this process, as can be seen in UK PubMed Central's grant lookup tool. Gerry made the vital point that all of the information is available, but linking it is going to take work. The RCUK 'Grants on the Web' system is a start in this process, as it brings together all of the grants from all RCUK funders in a single system. The individual research councils then use this centralised data to populate their individual GoW interfaces, with each interface set up to the specifications of the individual research council. With one exception (AHRC), grant data about individually funded PhDs is not included in the GoW systems, due to the RCUK preference for handling funded PhDs through their network of Doctoral Training Centres. Gerry closed by saying there was a real desire from RCUK to start linking outputs with funding grants (and expanding into research data and impact measures), especially in relation to monitoring compliance with Open Access mandates. Challenges still remain: the need for a common export format (CERIF); authority files for people, projects and institutions; the issue of department structures within institutions changing over time; etc.

Dale Heenan (ESRC), ably assisted by Darren Hunter (EPSRC), discussed the RCUK 'Research Outcomes Project' (ROP). The project was based on the ESRC's Research Catalogue (running on Microsoft Zentity 2.0), extended to meet the needs of the four pilot councils: AHRC, BBSRC, EPSRC and ESRC. (Worth noting that MRC and STFC use the e-Val system.) The ROP system is designed to create an evidence base demonstrating the economic impact of funded research, and also to reduce duplication of effort. Data can be uploaded by a range of stakeholders (grant holders, i.e. PIs or their nominated Co-Is, institutions, IRs etc.), covering individual items or bulk uploads. Management information is provided using Microsoft Reporting Services. Future challenges for the system include ways to automate the deposit of research outputs, development/adoption of standards such as CERIF, and ways to pull data from external services like Web of Knowledge, PubMed, Scopus etc.

The main presentation of the day was the IRIOS demonstrator, by Kevin Ginty and Paul Cranner (Sunderland). The IRIOS project is a 'proof of concept' demonstrator of a GoW-like service using the CERIF data format, and is based on the 'Universities for the North East' project tracking system (CRM). One feature of the service is that four levels of access (hidden, summary, read only, write) can be assigned to three distinct audiences (global, individual users, groups of users), allowing a fine level of dynamic control over the data contained in the system. All grants and publications have a unique ID that is automatically generated by the system, and any edits made in the current system do not feed back to the system that originated the data. Currently the system only accepts manual linking of grants to outputs, but there are plans to look into automating this process. In the future it might be possible to import data from larger databases like Web of Knowledge, but information gathered by the research councils indicates that only 40% of outputs are correctly attributed to the grant that funded the research.
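That access model is easy to picture as a small lookup table: four levels, each assignable to one of three kinds of audience. Here's an illustrative sketch in Python; the names and the precedence rule (individual beats group beats global) are my own guesses, not IRIOS's actual implementation.

```python
# Sketch of an IRIOS-style access model: four access levels assignable
# to three kinds of audience (individual user, group, global).
LEVELS = ["hidden", "summary", "read-only", "write"]

permissions = {}  # (record_id, audience) -> level

def grant(record_id, audience, level):
    """Record an access grant for one audience on one record."""
    assert level in LEVELS
    permissions[(record_id, audience)] = level

def effective_level(record_id, user, groups):
    """Most specific grant wins: individual user, then group, then global."""
    audiences = [("user", user)] + [("group", g) for g in groups] + [("global",)]
    for audience in audiences:
        if (record_id, audience) in permissions:
            return permissions[(record_id, audience)]
    return "hidden"  # no grant at all: not visible

grant("grant-42", ("global",), "summary")
grant("grant-42", ("group", "research-office"), "read-only")
grant("grant-42", ("user", "pi@warwick"), "write")
print(effective_level("grant-42", "pi@warwick", ["research-office"]))  # write
print(effective_level("grant-42", "anon", []))                         # summary
```

The appeal of a scheme like this is that one record can be publicly summarised while remaining fully editable by its PI, without duplicating any data.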

If you would like to try the demonstrator version of the IRIOS system details on how to login can be found here.

Comments on the presentations and information on the workshops are to follow in part two.


Yvonne Budden : 08 Aug 2011 16:00 | Tags: Cris Events Funders Interoperability Metadata Open Access Ref Research Data

March 16, 2011

A Secure Future for Research Data

Writing about web page http://www2.warwick.ac.uk/alumni/knowledge/themes/03/secure_future/

Quick link to flag up my contribution to the Warwick Knowledge Centre's fortnightly theme of dealing with data.

A Secure Future for Research Data

They asked me for an 800-word article on the pros and cons of storing and accessing research data, which in my hands gained a slight open access slant! A little bit 'research data 101' for any practitioner, but it aimed to be a short introduction to the kinds of issues raised by data, awareness raising being the name of the game at the moment. Any questions or comments, I'd be happy to hear them!


Yvonne Budden : 16 Mar 2011 16:12 | Tags: Open Access Research Data

February 23, 2010

Highlights of UKCoRR meeting, Feb 2010

Last Friday I was at the UKCoRR members' meeting. As their Chair, I reported on my activities and announced speakers. As a repository manager, I learnt a lot from the other participants.

Louise Jones introduced the day, as the University of Leicester library were our hosts. They have recently appointed a Bibliometrician at Leicester and they're acquiring a CRIS to work alongside their repository. They have a mandate for deposit and Gareth Johnson's presentation later in the day about the repository at Leicester mentioned that they have more than enough work coming in, without the need for advocacy work to drum up deposits. I guess that the CRIS will come in handy for measuring compliance with the mandate!

Gareth's presentation also included some nice pie charts showing what's in their repository by type, and what's most used from the repository, by type and then again by "college" (their college is like a faculty). Apparently he had to hand-count the statistics for the graphs... well done Gareth!

Nicky Cashman spoke about her work at Aberystwyth, and I found it interesting that one of their departments' research projects on genealogy has hundreds of scanned images of paper family trees that they are looking for a home for at the end of the project. They don't need a database to be built around their data, as they have already built one; they want to link from it to the scanned images. This sounds like a great example of the kind of work that the library/repository can do to support researchers with their research data. The problem, though, is that hosting that kind of material in a repository carries substantial costs (cataloguing each item, storing it and preserving it), and these costs perhaps ought to have been included in the original research bid. Researchers ought to be thinking about such homes for their data at the beginning of their projects, rather than at the end.

Nick Sheppard spoke about his work on Bibliosight and using the data provided through Web of Science's web services. There was some discussion about the fact that you can't get the abstract out of WoS, because they don't own the copyright in it and so can't grant us the right to use it...

Jane Smith of Sherpa demonstrated some of the newer and more advanced features of RoMEO. I think that the list of publishers who comply with each funder's mandate is something that might be of use to researchers looking to get published. Also, the FAQs might be useful for new users of RoMEO.

I would like to see the Sherpa list of publishers who allow final version deposit enhanced to include which of them will allow author affiliation searching as well, so that we can find our authors' articles in final versions and put them into the repository. And another column to say whether the final versions are already available on open access or not, because I'd prioritise those not already available on open access.

One development that has been considered for SherpaRoMEO is listing the repository deposit policy at journal title level, because publishers often have different terms for different titles. However, in trying to develop such a tool, it has transpired that one journal might appear to have many copyright owners, depending on which source of information about journal publishers you consult. For instance, the society and the publisher who acts on their behalf might each claim the rights, and each have different policies. Which rights owner's policy ought SherpaRoMEO to display?

Hannah Payne spoke about the Welsh Repository Network, who have a Google custom search across all the Welsh repositories; I like it, but would wish for a more powerful cross-searching interface. In the afternoon we did a copyright workshop that had also been run at one of the WRN events.

So there is plenty I can take away from the day.


Jenny Delasalle : 23 Feb 2010 15:36 | Tags: Cris Research Data Sherpa Romeo Statistics Ukcorr
