October 21, 2015

Digital Humanities Databases: Help! There’s Too Much Information!

At the DHOxSS, the projects we heard about most frequently involved databases. A database is of course a significant undertaking, and so whilst many scholars would like to organise their material in this way, it is not always a practicable solution. Perhaps the first stage in your (my) database journey is to contribute to somebody else’s project in order to get a feel for what it is like.

One of the projects we heard about at length was the Incunabula Database, presented by Geri Della Rocca de Candal (Medieval and Modern Languages, Univ. Ox.). This incorporates 30,000 editions, in 450,000 copies, from 4,000 libraries (more or less). Hence these projects can be a major undertaking! If you are brave and have a big idea, remember that the time and financial costs involve not only setting up the database but also maintaining it online.

Furthermore, because of the potential quantity of information you will be handling, you need to think very carefully about exactly what information to include. More might seem better, but what is practical, and what are your priorities? What is most likely to be used, and what is already available from other resources?

Very few catalogues include everything, although many can now be interlinked and source information from each other, minimising duplicated work and facilitating more uniform updating. Likewise, it is your integration of data from a variety of sources that makes for good research. MEI (Material Evidence in Incunabula) is an especially useful resource since it records many copies, identified from other sources (sales catalogues etc.), that no longer survive, so you won’t find them in a library however long you search.

Why might you use these catalogues, and which is best for your needs? Although there is no substitute for visiting the library in person (the smell alone is usually worth a train journey), such catalogues can be used from the comfort of your own desk to research any of the following:

  • distribution networks and cost (economic history)
  • reception and use (social history)
  • reconstruction of dispersed collections (intellectual history)

Which catalogue(s) best suit your needs depends on which of these fields, or any other, most interests you.

Remember that even if you never decide to build your own database, or become a card-carrying contributor to a project, you can still do more than simply make use of these catalogues for your own research (for mine, up-to-date and accurate archival catalogues are invaluable). People will always be happy to receive your corrections if you notice an error in the information they provide. They are putting the information up there because they want it to be available in an accurate and accessible form, so don’t be scared to say something.

Another major database project that we heard about at DHOxSS was Early Modern Letters Online, a pan-European endeavour presented by Howard Hotson (History, Univ. Ox.). This project has a core of permanent staff supplemented by doctoral and postdoctoral interns and the smaller contributions of individual interested academics. I myself may contribute some of the letters of Thomas Johnson to this database. Letters particularly benefit from being recorded in a database, since items belonging to a single exchange are, by the very nature of epistolary documents, dispersed across many locations. This is something that can be remedied digitally without libraries having to give up their collections. The stated objectives of EMLO are:

  • to assemble a network
  • to design a networking platform
  • to support a network
  • to study past intellectual networks.

This database also allows users to comment on documents, facilitating further networking.

A final, smaller, but in some ways more challenging database that we heard about was OCIANA, the Online Corpus of the Inscriptions of Ancient North Arabia, presented by Daniel Burt (Khalili Research Centre, Univ. Ox.). I would draw your attention to the particular challenges of making this database searchable when the characters of the inscriptions are not currently available on any digital platform. The producers are creating their own digital keyboard, and tagging the individual characters of inscriptions, in order to resolve this. The database is also linked to Google Maps so that the geographical position of the inscriptions being viewed can be seen, which is particularly useful since so many remain in their original location (i.e. on a cliff).

So remember:

  • Databases are HARD.
  • But that doesn’t mean they’re not useful, and possible!
  • I already find such resources essential to my research.
  • A good next step for getting to grips with them is to contribute to an already existing database, and in doing so learn more about how databases work, how they are managed, and the objectives that drive their creation.

Emil Rybczak (English) Univ. Warwick.


October 14, 2015

Summer School Experience: TEI: What? How? Why?

So what is TEI? TEI stands for the Text Encoding Initiative and is a consensus-based means of organising and structuring your (humanities) data for long-term digital preservation. It is well understood, widely used, and prepares your data for presentation in a variety of digital formats. Originally developed in conjunction with manuscripts but flexible in application, it can be used as a means of creating metadata for an item (i.e. title, author, provenance etc.) or for transcribing the item itself. The excellent talk I heard at the DHOxSS on this topic was by James Cummings (ITS, Univ. Ox.).
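To give a flavour, here is a minimal, hypothetical sketch of such metadata in a TEI header (the element names are standard TEI, but the item and its details are invented):

  <teiHeader>
    <fileDesc>
      <titleStmt>
        <title>A Catalogue of Herbes</title>   <!-- title of the item -->
        <author>Thomas Johnson</author>        <!-- its (invented) attribution -->
      </titleStmt>
      <publicationStmt>
        <p>Unpublished transcription for private study.</p>
      </publicationStmt>
      <sourceDesc>
        <p>Transcribed from a printed copy of 1633; provenance unknown.</p>
      </sourceDesc>
    </fileDesc>
  </teiHeader>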

You can tag elements such as <hi>this</hi> to control the presentation of your document on screen, or names such as <name>James Cummings</name> in order to make the document searchable. Another benefit is that, compared with other computer languages, TEI is easy for a human to read, and thus easy to work with if it is your first time behind the scenes of a ready-made document. The TEI handbook weighs more than a conference lunch, but one of the most attractive aspects of the language is that if, when describing your text, you find that what you want to describe isn’t in the handbook (which, when you consider how fussy academics are, is quite likely, and is furthermore why the handbook is so thick to begin with), you can simply invent your own tags:

<p><l><noun clause><definite article><capital>T</capital>he</definite article> <adjective>key</adjective> <noun>thing</noun></noun clause> <verb clause><unidentified grammatical element>to</unidentified grammatical element> <verb>remember</verb></verb clause><punctuation>.</punctuation></l></p>

… is that you don’t have to tag everything! Creating a TEI document is not an end in itself, tempting as this may be to the more <gap/> amongst us. It is a means to preserve and describe your document in order to accomplish specific predefined ends. Are you interested in social awareness of the New World? If so, tag places. Are you interested in the regional creation of manuscripts? If so, tag local spellings. A recent project which has made extensive use of TEI in my field of early modern drama is Richard Brome Online, as discussed by Eleanor Lowe (English and Modern Languages, Oxford Brookes) at the DHOxSS. In this resource there are particular tags for speakers, stage directions, props, and acts and scenes (TEI privileges the sense over the physical arrangement of the text in the source, i.e. paragraphs over pages). These provide information both for displaying the text online and for making it searchable by the user.
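To make that concrete, here is a minimal, hypothetical sketch of drama markup using standard TEI elements for acts, scenes, speeches and stage directions (the actual scheme used by Richard Brome Online may well differ in its details):

  <div type="act" n="1">
    <div type="scene" n="1">
      <stage type="entrance">Enter two Gentlemen.</stage>
      <sp who="#gent1">
        <speaker>1 Gent.</speaker>
        <l>The key thing to remember.</l>
      </sp>
      <stage type="exit">Exeunt.</stage>
    </div>
  </div>

Tagged this way, the same encoding can drive both the on-screen display of the play and a search across, say, every stage direction in the corpus.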

If you’re not creating an online edition, which most of us currently aren’t, it is still helpful to understand TEI when you encounter these texts, as it is useful to know what choices went into the creation of the text you’re reading. It is furthermore useful to have some working knowledge of TEI for the small-scale preservation of original manuscripts. It allows for accurate representation of the document for perusal after leaving the rare books room. The focus of your transcription might be on text or physical pre<stain type="coffee">sentation,</stain> but in either case the key to success in using TEI is consistency.

Clearly the tags I have been using are not necessarily ideal for all projects. Comprehensive guidelines are available from the Text Encoding Initiative. The ElEPHãT project, as discussed by Pip Willcox (Bodleian Libraries, Univ. Ox.), has, with the Text Creation Partnership, combined data from HathiTrust, the ESTC (English Short Title Catalogue) and EEBO (Early English Books Online) in an extensive TEI project. Various transcriptions of the texts that they have worked on are available online at EEBO. Their work can be used via basic searching to find places or people of interest to your work, or developed by the user to conduct more particular research of their own devising. In a workshop session where we were left to play with the TEI documents available from the TCP, I suggested that one could devise numerical tagging across their strong collection of alchemical texts in order to investigate the prevalence of sacred numerology in these works. Unfortunately this may have to wait for another day.

In conclusion:

What?

  • TEI is a language which can be used to preserve the data and metadata of early manuscripts and texts.

How?

  • The text is input manually or, if you’re lucky, via OCR, and then marked up (tagged) according to a pre-considered range of purposes to which the data is likely to be put.

Why?

  • TEI separates the data from the interface and so ensures that the hard work put into digitising the documents won’t be made redundant once the interface becomes obsolete.
  • It is flexible and so can be tailored to the needs of any project.
  • It is widely understood, allowing data to be shared.
  • It can be read by both computers and humans and so is relatively easy to get to grips with.
  • It can be used for the smallest and largest of projects.
  • Trust me, it’s fun!

Emil Rybczak (English) Univ. Warwick.


October 07, 2015

Summer School Experience: Digital Approaches in Medieval and Renaissance Studies

This is the first of a short series of posts inspired by my time at the Digital Humanities at Oxford Summer School 2015. This summer school takes place annually in the beautiful setting of Oxford and runs a variety of educational strands. These cater for all tastes, from strands for those who are already technical adepts to those aimed at beginners. I, of course, attended one of the latter: Digital Approaches in Medieval and Renaissance Studies. Be warned! Choose carefully! Once you’ve committed to the Text Encoding Initiative, Digital Musicology, or, as I did, digital monks, there’s no going back.

Fortunately, in my experience, there is no reason why you’d want to. Each day is organised with a series of classes or workshops around a particular theme in the strand which you are taking. Thus my days were focussed on:

  • Digital Imaging
  • Databases of early documents
  • TEI (computer code for manuscripts and early documents)
  • Oxford’s digitisation projects
  • Miscellaneous marriages between the medieval and digital.

These sessions frequently follow the problems encountered and addressed in the course of other people’s projects, which are presented as examples of the digital tools you may wish to implement in your own research. Again, be warned! The Medieval and Renaissance classes give you far more exciting ideas and blue-sky plans for transforming your work than they teach you the practicalities of implementing them. However, since the practicalities can be learned at a later date, once you have identified what you want to do, this is not a bad thing.

In my following posts I shall provide a short digest of some of the lessons I learned at the DHOxSS, with examples from a variety of projects. My topics are:

  • TEI: What? How? Why?
  • Digital Humanities Databases: Help! There’s Too Much Information!
  • Digital Photography as a Research Tool.

Of course the most important thing that happens at a conference or summer school is meeting so many potential colleagues, picking their brains as well as stealing their biscuits. Of all areas of research, the digital humanities is one where it is more vital than ever to realise the importance of collaboration. You simply don’t have enough time (or, in my case, the skills) both to be a master of Richard II’s morning routine and to create an app that identifies which Plantagenet king enjoyed the same breakfast as you. If you are going to make a digital humanities project work it needs to be developed by a variety of people with a variety of skills – but remember: the project also has to be interesting to everybody involved. Techies aren’t there to facilitate your project; your project’s there so that techies can develop software that they find new and interesting. Well, ideally both will be true.

I must thank Steve Ranford and Digital Humanities at Warwick for facilitating my attendance at this summer school. I have learned a lot, particularly how much I have to learn. Be inspired; devise a project; discuss it and panic; revise that project; try it out!

Emil Rybczak (English) Univ. Warwick.


April 17, 2015

Presenting irregular datasets

Writing about web page http://bbashakespeare.warwick.ac.uk

Most content management systems use some kind of administrator-defined templating system, either site-wide or per page. More often than not, this takes the form of a grid arrangement. In deciding which template would be appropriate, you look at the nature of the data the page is to display and choose a layout to suit. This works fine with regular pages and datasets, but it has become a recurring issue with the datasets we've encountered in the digital humanities.

Humanities datasets are often sparse in parts but have clusters or hotspots of data. This is frequently driven by data availability, or by a research process that identifies data of interest deserving more granular capture, which would be time-consuming and wasteful to carry out across the whole area of interest. Forcing data like this into a template means that one page can return a single result and the next thirty or more: a classic "one size doesn't fit all" problem.

In the British Black and Asian Shakespeare project this very phenomenon occurs, as the subjects of the database may feature in anything from one record to dozens: Stephen Beresford directed just a single production of Twelfth Night in 2004, whereas Robert Mountford has seventeen separate performances of interest in the dataset to his name. The default route to handling this variety is pagination, where at the bottom of the first page you see how many unshown pages of results are tucked neatly behind a series of links. This technique sets a maximum reasonable number of records to display on a single page and provides a way of accessing any that spill over. I've become increasingly dissatisfied with this approach, which comes at the loss of an overview, and this project presented us with an opportunity to explore an alternative.

Inspiration was drawn from the Google Material Design philosophy, and a plugin was sought to enable layouts that were difficult to achieve with standard grid frameworks. Masonry turned out to be just the javascript plugin needed to help us maximise the use of space dynamically, with columns, irregularly sized content and responsive design for different devices' screen sizes. You can see that this two-column layout maximises the space on the screen, and each block of content takes up as much space as it needs without impacting the other column. This is the key feature of Masonry, as it removes a fundamental restriction of HTML layout that otherwise prevents such arrangements with dynamic content. What it does mean is that the user is expected to scroll if there is a large dataset to be displayed, but this is not the concern that it used to be with an 'above the fold' mindset.
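As a rough sketch of the kind of setup involved (assuming Masonry's documented initialisation API; the class names, column widths and content here are invented), it looks something like this:

  <style>
    /* Two equal columns; Masonry packs blocks into whichever column has room. */
    .grid-item { width: 50%; }
  </style>

  <div class="grid">
    <div class="grid-item">Robert Mountford: seventeen records of interest…</div>
    <div class="grid-item">Stephen Beresford: one record of interest…</div>
    <!-- …more blocks of varying height… -->
  </div>

  <script src="masonry.pkgd.min.js"></script>
  <script>
    // Each block takes the vertical space it needs,
    // without affecting the neighbouring column.
    var grid = new Masonry('.grid', {
      itemSelector: '.grid-item',
      columnWidth: '.grid-item',
      percentPosition: true
    });
  </script>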

[Screenshots: the layout at large and small screen sizes]

This is a step in the direction of more intelligent page design and, depending on user response, a route we will look to employ on future projects.

Interestingly, commercial offerings are looking to bring more AI to bear on these problems. Notably, thegrid.io is making a stir with websites that design themselves around the content submitted to them. This is an approach we're tracking with interest, as surely content determining layout, rather than the other way around, is the sensible way to design websites.

Special thanks to our project developer and the designer for the British Black and Asian Shakespeare project, Hafiz Hanif.


July 13, 2014

Applying New Digital Methods in the Humanities, British Library, 27th June 2014

‘Applying New Digital Methods in the Humanities’ was a one-day workshop held at the British Library on 27th June 2014. The dhAHRC organising team brought together a wonderful mixture of librarians, journalists, software engineers and academics, both Digital Humanities experts and researchers applying digital methods. Jumping in at the deep end, the morning’s papers focused on Digital Humanities as a discipline. Professor Melissa Terras (UCL) gave an excellent keynote on ‘Digital Humanities Through and Through’, where she contested the idea that Digital Humanities was a field that was only a decade old, arguing that people have been using quantitative methods in the Humanities for centuries. Terras maintained that the research questions and critical awareness of Humanities scholars remain the same, only our tools and society have developed. Dr Jane Winters (Institute of Historical Research) then spoke of her experiences of ‘Big Data’ from three interdisciplinary projects. Winters was sensitive to the weaknesses of macro data and its ‘fuzziness’ with issues such as spelling, but demonstrated that data on such a large scale reveals changes which might have passed unnoticed, which in turn can inform new research questions.

The morning then took a decidedly medieval turn, with papers by Dr Stewart Brookes (KCL), Dr David Wrisley (American University of Beirut), Dr Jane Gilbert (UCL), Paul Vetch (KCL) and Neil Jefferies (Bodleian). Brookes presented his team’s DigiPal database of palaeography in Anglo-Saxon manuscripts. They have classified and logged all the separate elements of a manuscript letter and their variants, which allows scholars to identify a manuscript’s provenance, date and even scribe, as well as to view the evolution of practices. Interestingly, the methods behind DigiPal have now been extended to other languages from other centuries and have even been applied to examine paratextual elements such as illustrations. This was really useful in demonstrating quite how flexible the methods of Digital Humanities can be. Moving towards Francophone culture, Gilbert and Vetch spoke about how Digital Humanities techniques have been employed on the AHRC project ‘Medieval Francophone Literary Culture Outside France’. Their paper was appropriately named ‘I was but the learner; now I am the master’ and really put the accent on collaboration between the research and Digital Humanities teams. The result was the creation of a novel method for analysing different physical versions of the same text. This was arguably the most helpful session of the day because it really underlined the importance of communication between different specialist skill sets. Remaining with manuscripts, Jefferies talked about Shared Canvas and IIIF, a DMSTech project with numerous collaborators, such as the British Library and the Bibliothèque nationale de France, and a two-fold aim: maximum manipulation for the reader, who can re-piece folded manuscripts and review multiple images with their own annotations, whilst maintaining minimum movement of data.

The afternoon started by focusing on the contribution of the ‘citizen’. Martin Foys (KCL) and Kimberly Kowal (British Library) recounted how they use crowdsourcing for Foys’ project ‘Virtual Mappa’. This was followed by Chris Lintott (Oxford) speaking about his experience in creating and continuing Zooniverse.org. Lintott and his team realised that the human brain was far more accurate in identifying galaxies than a computer, and that human brain power could be harnessed for the benefit of science through the development of fun activities allowing an individual to contribute to the world of research. This has since been extended to multiple projects from various disciplines and countries.

The last two papers, by Jason Sundram (Facebook) and Rosemary Bechler (openDemocracy), moved the day towards digital methods and culture more generally. Sundram explained how he combined his passion for programming and classical music to analyse Haydn recordings, which in turn affect the performances of his quartet. Rosemary Bechler then ended the day with a keynote on how digital methods are driving a revolution in journalism that prioritises the audience. She contended that whereas in the past a passive audience was told what to read through the dominance of the front page, this is now being replaced by social media, which create ‘hubs of interest’ and a much more dynamic, two-way relationship between journalist and reader, allowing for a transnational flow of information.

Although the day could have benefitted from a reverse programme order, offering a softer introduction to Digital Humanities, it was an incredibly useful experience. The combination of numerous research projects and various standpoints within the field of Digital Humanities was thought provoking. However, regardless of project or position, four key points were reiterated across the papers:

  1. You do not have to be a programmer to use Digital Humanities; programmers can be built into funding bids. However, make sure that they are employed for the right amount of time (i.e. so they are not overworked or twiddling their thumbs).
  2. Make your digital methods repurposable for other projects.
  3. Envisage how your data will be stored and archived after the end of the project.
  4. Most importantly, critical research questions continue to drive Humanities research, not the tools.


July 08, 2014

Making social bookmarking that bit more social

I have for a while used Diigo to track and organise my bookmarks, particularly in the digital humanities. My bookmarks are shared with the 'Academic Technology at Warwick' group on Diigo. I wanted to add twitter as a channel to share these finds with my followers and those tracking the #dhwarwick tag (which also feeds the front page of the Digital Humanities website).

if [diigo] then [twitter]

To stitch these two tools together, I have used a third tool, IFTTT. IFTTT is one of a number of tools that create 'recipes' allowing activity in one service to trigger an action in another: you grant the intermediary access to both accounts and define the criteria that fire the trigger.

To work neatly, I also had to come up with a vocabulary that will help me organise my bookmarks and automatically generate sensible tweets. This is what I'm using:

  • Anything tagged with #dhwarwick in my Diigo account is the trigger to send a tweet.
  • A tweet is composed of "Just bookmarked this: {{Title}} {{Url}} tagged {{Tags}}" where the curly braces are replaced with the text from Diigo.
  • Because twitter is going to use the tags, the tweet will include #dhwarwick, which will be picked up by twitter as a hashtag and will also feed the website.
  • I'll be making sure that things I bookmark and tag #dhwarwick have succinct titles, so the tweet fits within the 140-character limit.
  • If I find a link via a prompt from someone, I have a tag for this too: I put 'via @twitterusername' in as a tag, which will reference them in the tweet (see the worked example below).
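For example (the title and URL here are invented for illustration), bookmarking a page titled 'Digital palaeography tools' at http://example.com with the tags '#dhwarwick' and 'via @someuser' would produce a tweet reading:

  Just bookmarked this: Digital palaeography tools http://example.com tagged #dhwarwick via @someuser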

I've shared my IFTTT recipe if you want to see what's going on and do something similar:

IFTTT Recipe: 'Push selected bookmarks to twitter' (connects Diigo to Twitter)


July 07, 2014

Conference Report: Applying New Digital Methods to the Humanities

Writing about web page http://www.dhcrowdscribe.com/

This one-day interdisciplinary workshop set out to question how knowledge creation and production could be advanced through employing existing and emerging tools available in the Digital Humanities. Conveniently based at the British Library, one of the most innovative centres of digital research, this event provided the opportunity for doctoral and early career researchers to learn more about the current research being undertaken in the Digital Humanities and how, as scholars, we can use these techniques to advance the creation and dissemination of knowledge. As a doctoral student interested in looking at new directions for my future research, I thought that this would be an ideal opportunity to learn more about the past, present and future directions of this exciting field.

The programme was varied and stimulating, covering a range of topics, including Big Data, mapping and visualisation methods, audience and database development. The highlight of the day was the keynote presentation from Professor Melissa Terras (Director of Digital Humanities, UCL), who offered some practical advice for scholars considering a Digital Humanities project. This was an interesting and thought-provoking summary of some of the issues that digital humanists face and the types of strategies that should be employed in order to ensure a successful project. One of the best pieces of advice Melissa offered, and one which recurred throughout the day, was to know what the end results and outcomes of the project are. The data, as she made clear, must always be the focus of the research, since it will have a much longer lifetime than the tools themselves.

The rest of the event then turned to consider digital research tools and how they had been developed to address specific research questions. Dr Jane Winters (IHR) explored in her presentation the types of methods and tools available for Big Data and discussed some of the projects in which she has been involved, including her interdisciplinary work on the Digging into Linked Parliamentary Data project. Dr Stewart Brookes (King's College London) talked about his work on the DigiPal database of Anglo-Saxon manuscripts and Dr David Wrisley (American University of Beirut) explored spatial data mapping of medieval literature. Dr Jane Gilbert (UCL) and Dr Paul Vetch (King's College London) presented on how they implemented digital resources for their project on Medieval Francophone Literary Cultures Outside France; Dr Martin Foys (King's College London) and Kimberly Kowal (British Library) spoke about the British Library Georeferencer project and the benefits of crowdsourcing; and Neil Jefferies (University of Oxford) presented on his projects Shared Canvas and IIIF, both of which have been implemented to address specific problems with the presentation of manuscripts in digital software.

One of the outcomes of these presentations was the clear need to create research tools which are 'repurposable', i.e. which have a life-cycle beyond the specific project and can be made available for other people to use and adapt. However, one of the gaps in the event was that the presentations focused on tools that had been developed with a very precise project in mind. As a non-specialist, it very much felt as though the focus was on creating a new tool rather than implementing existing software to answer specific research objectives. I therefore felt that some of the discussions would have benefitted from a bit more practical advice about how to source and apply existing research methods. Moreover, whilst these presentations were thoroughly thought-provoking, they did draw attention to one of the big gaps in historians' knowledge: programming and coding. It would therefore have been helpful to hear more about how to encourage interdisciplinary collaboration with software engineers and programmers, and how to get these people involved in a funding bid.

One of the strengths of the event, however, was its broad emphasis on interdisciplinarity and cross-disciplinary collaboration. Some of the most stimulating papers of the day were from individuals not involved in Digital Humanities projects, but whose work with programming and crowd-sourcing had specific application to Digital Humanities research. This included Dr Chris Lintott's (University of Oxford) paper on Zooniverse, which has led to internationally acclaimed digital projects and a stronger awareness of the impact that non-specialist audiences can make on research projects. The idea of 'connecting communities' was a theme picked up by Jason Sundram, a software engineer who has worked for Facebook, who spoke to the delegates about how he had been able to combine the performance, analysis and visualisation of Haydn string quartets. The final speaker of the day was Rosemary Bechler, an editor for the website openDemocracy, who provided a powerful closing message about the importance of promoting content and connecting with the audience. The act of publishing, she argued, should be part of a bigger drive to expand and connect engaged audiences.

For a researcher only just thinking about the implications of Digital Humanities, this event was an excellent opportunity for me to explore the different ways in which digital research can make a positive impact on my own work. I found the day thoroughly stimulating and enjoyed hearing about the broad range of scholars currently employing these techniques. Since so much of the event was focused on ‘connecting communities’ it seemed particularly appropriate that one output of the event was the fantastic networking opportunity it provided. I am very grateful to the Digital Humanities / Academic Technology team at Warwick for the opportunity to travel to such an intellectually stimulating and highly-relevant workshop. It has also given me the much-needed opportunity to contextualise and consider my research within a wider interdisciplinary framework.

Naomi Pullin
Department of History

May 29, 2014

Making Sense of the digital humanities – Monash alliance event

Writing about web page http://monash.edu/news/show/making-sense-of-the-digital-humanities

Warwick researchers and students gathered early last Wednesday at the International Video Portal, Ramphal, to discuss the digital humanities with Monash in a virtual workshop. Amber Thomas and David Beck from Academic Technology spoke at the event, which Monash has covered in a write-up: Making Sense of the digital humanities.

Update:

Warwick has published its own write-up: Digital humanities interactive workshop success

The discussion continues on a shared Monash and Warwick online forum: the Digital Humanities Discussion Forum.


May 23, 2014

Adding an image lightbox to sitebuilder from text

If you insert a photo into sitebuilder as a thumbnail, it neatly makes the image clickable to show a larger version, without your having to use the HTML editor. I've recently seen a number of requests to be able to produce the same functionality from a text link rather than an image.

Caveat: with a clickable thumbnail, you are giving the visitor a clear visual cue that when they click, they'll see something related to that image. With text, this is not the case, so try to use this in a context where the visitor is expecting this kind of behaviour. References to images in text seem like an appropriate use, and that is where the requests I've encountered have originated.

Instructions

  1. Create a link to the image in a text snippet like [6], using the links picker to navigate to the photo gallery and choosing the right photo.
  2. With the newly created link still selected, click 'link options' >> 'edit link' and, under the advanced tab, find the 'Relationship Page to Target' drop-down box. Set this to 'lightbox' and your link will be converted to a lightbox link (by inserting a 'rel' attribute into the link's HTML, as sketched below).
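Roughly, the resulting markup looks like this (a sketch; the image path is invented):

  <!-- The 'lightbox' relationship adds rel="lightbox" to the link, so
       clicking opens the image in an overlay rather than a new page. -->
  <a href="/gallery/example-figure.jpg" rel="lightbox">see figure 1</a>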

February 12, 2014

Crowdsourcing and Citizen Science


I'm currently pulling together examples of crowdsourcing for a digital humanities working lunch on Friday 28th February. I'm also pulling together examples on citizen science for Warwick's Public Engagement Network. So my head is very much in the realm of connecting "the public" with the work of researchers.

Pat Thomson asked "who is the public in public engagement?" and made the point that "public" doesn't have to mean lowest common denominator. "The Public" includes highly educated, well-informed adults who just happen not to work as academics. And there are a lot of people who come under that category. Melvyn Bragg has suggested the term "mass intelligentsia" to describe this phenomenon.

At the same time, we have the web. The web is fantastic at connecting people with an interest in the long tail. You might not be able to attract 50 people to your talk on [insert niche research topic here ;-)], but put that paper in an open access repository, blog about it, tweet it, and you may find 50 readers who not only want to hear what you have to say but may even be able to add their own expertise. They might even be on the other side of the world, and this may be the only way they could have found your work. This is the revolution in research, enabled by open access, that is waiting to blossom.

It's in the two way communication that some really interesting research models are emerging. There is a long tradition of science communication and science education, but citizen science takes it one step further. Citizen science can potentially do some of the work of science. Voluntary labour on a massive scale, collecting and analysing data for use by scientists.

Treezilla is a site where anyone can submit information about the trees around them. Its ambition is no less than "making a monster map of Britain's trees". It's a fantastic example of crowdsourcing and citizen science in action. And of course I'm very keen to hear from those Warwick researchers already doing citizen science!

The humanities can do crowdsourcing too. There are a range of approaches in use, but one of my favourites is Transcribe Bentham.


Transcribe Bentham is an online transcription desk that has so far transcribed nearly 7,000 manuscripts, marked up for machine readability and scholarly analysis. As Causer and Terras describe in their recent paper, it is "indicative of a new focus in digital humanities scholarship: reaching out to encourage user participation and engagement, whilst providing tools which can be repurposed for others". If you think this is just a novelty, think again: the signs are that funders are appreciating the benefits of these new methodologies.

We'll be having a look at this and several other humanities-based crowdsourcing projects on 28th February, 12-2, in the Wolfson Research Exchange. Lunch will be provided; please let me know if you're coming so we can provide enough food!

I'm really looking forward to running this session, and I'd love to hear from Warwick researchers using these approaches already.


amber (dot) thomas @ warwick.ac.uk

@ambrouk

