All 3 entries tagged Digital Humanities


February 11, 2016

(De)coding Texts – Using the Text Encoding Initiative as a new way to teach the reading of texts

On 3rd March 2016 Alex Peck and Dr Clare Rowan of the Department of Classics and Ancient History will be hosting a two-hour workshop for postgraduates, academics and teaching staff of the university, demonstrating how TEI (the Text Encoding Initiative) can be used as an engaging and stimulating method for teaching and developing text-based skills. The workshop will offer a practical demonstration of how TEI can be used within a teaching environment and will encourage participants to reflect on how such a method could be implemented across a variety of academic disciplines.

To provide a flavour of what participants can expect from the workshop, and of how TEI can be used in undergraduate teaching, Alex Peck has written a brief summary of the TEI-based teaching project (De)Coding Classics, which was piloted in the Department of Classics and Ancient History during the 2015/16 academic year.

(De)Coding Classics was a pilot teaching project that aimed to further develop the text-based skills of critical thinking and close reading through the use of computer coding, namely TEI. The project also sought to expose undergraduates to the world of Digital Humanities and to build an awareness of how many of the online resources they use in the field of Classics and Ancient History are created. The project was inspired by a similar teaching scheme pioneered in the UK by Dr Melodee Beals (then of Sheffield Hallam University, now of Loughborough University), who had used TEI in her History module as an innovative and engaging method to help her undergraduate students engage effectively in the analysis of primary textual sources.

TEI, or the Text Encoding Initiative, is the recognised standard by which digitally annotated versions of texts or text corpora are created. TEI is extremely versatile, providing highly detailed guidelines of 'tags' and 'elements' that can be used to highlight and annotate a vast range of textual features. Its versatility is further demonstrated by the fact that new 'tags' and 'elements' are frequently added to reflect new trends in textual analysis or to expand the range of texts that can be encoded. This versatility renders TEI an effective tool for the teaching of text-based skills.
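To give a flavour of what this markup looks like in practice, below is a minimal sketch of a TEI document. The elements shown (teiHeader, persName, placeName) are standard TEI, but the text and its content are invented purely for illustration:

    <?xml version="1.0" encoding="UTF-8"?>
    <TEI xmlns="http://www.tei-c.org/ns/1.0">
      <teiHeader>
        <fileDesc>
          <titleStmt><title>A sample encoded text</title></titleStmt>
          <publicationStmt><p>Unpublished teaching example.</p></publicationStmt>
          <sourceDesc><p>Born-digital illustration.</p></sourceDesc>
        </fileDesc>
      </teiHeader>
      <text>
        <body>
          <!-- 'tags' wrap whichever textual features the encoder judges significant -->
          <p>In the reign of <persName>Ptolemy VI</persName>,
            <placeName>Memphis</placeName> remained an important
            religious centre.</p>
        </body>
      </text>
    </TEI>

Encoding a passage in this way demands the same judgements as traditional annotation, namely deciding which features of the text matter and why, which is precisely what makes it a useful teaching exercise.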

(De)Coding Classics was implemented in the second-year core module 'The Hellenistic World'. The module cohort was divided into two groups of approximately 35 students, and each group was led through a highly practical two-hour seminar. The seminars were hosted in R0.39, as this is currently the only computer lab on campus that has Oxygen (the easy-to-use XML editing software that we had elected to use in our teaching) installed on 25 computers.

The students were first given a brief introduction to TEI, shown how digital editions of texts are created and what they look like in their coded form, and encouraged to reflect on the similarities between traditional non-digital text annotation and the practice of coding. Following this relatively short introduction, the students were given a practical step-by-step demonstration of how to encode a simple text using TEI. Once they had acquired the basic skills needed to encode texts on their own, each student was individually assigned a text taken from a complex anthology (the Archive of Hor), which they then transcribed and coded as they had done during the demonstration. They were instructed to identify and tag the names of people or deities; topographical information; and statements or facts of historical significance.

Once the students had identified these key features, they were instructed to research them and to create a glossary entry for each based on their findings. To conduct this research, the students were encouraged to use the wide range of online resources available in the field of Classics and to record these secondary sources using the department's accepted referencing style. A system was devised to avoid multiple entries on a single topic or feature. Once the students had finished their glossary entries, and were thus armed with enough information to embark on a detailed interpretation of their respective texts, they were instructed to write a brief commentary. The coded text, commentary and glossary entries were then added to a tailor-made database on the module website, providing the students with an interactive finished product of their work.
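To illustrate the kind of markup the students produced (this is a hypothetical reconstruction, not the project's actual encoding, and the glossary identifiers are invented), a tagged passage from the Archive of Hor might look something like this:

    <p>
      <!-- names of people or deities -->
      <persName ref="#hor">Hor of Sebennytos</persName> records a dream
      concerning <persName type="deity" ref="#thoth">Thoth</persName>
      <!-- topographical information -->
      at <placeName ref="#saqqara">Saqqara</placeName>, and
      <!-- a statement of historical significance -->
      <seg type="historical" corresp="#antiochus-168">alludes to the
      withdrawal of Antiochus IV from Egypt</seg>.
    </p>

Each ref or corresp value points to one of the students' glossary entries, so that the finished database ties the coded text, commentary and glossary together.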

Although it was only a short pilot project limited to a single seminar slot, (De)Coding Classics illustrated the effectiveness of a practical, digital approach to the teaching of texts. The method resulted in greater engagement on the part of the students with the texts they were assigned, and thus a better comprehension of the utility and importance of textual sources for understanding the Hellenistic World. Moreover, the session improved the students' digital confidence and competence, a factor that will undoubtedly become increasingly important in the years ahead. There is thus great potential for incorporating further elements of computer coding into the teaching and development of traditional textual and linguistic skills, and such a practice could become an important complementary tool in linguistic and textual teaching.


July 13, 2014

Applying New Digital Methods in the Humanities, British Library, 27th June 2014

‘Applying New Digital Methods in the Humanities’ was a one-day workshop held at the British Library on 27th June 2014. The dhAHRC organising team brought together a wonderful mixture of librarians, journalists, software engineers and academics, both Digital Humanities experts and researchers applying digital methods. Jumping in at the deep end, the morning’s papers focused on Digital Humanities as a discipline. Professor Melissa Terras (UCL) gave an excellent keynote on ‘Digital Humanities Through and Through’, in which she contested the idea that Digital Humanities is a field only a decade old, arguing that people have been using quantitative methods in the Humanities for centuries. Terras maintained that the research questions and critical awareness of Humanities scholars remain the same; only our tools and society have developed. Dr Jane Winters (Institute of Historical Research) then spoke of her experiences of ‘Big Data’ from three interdisciplinary projects. Winters was sensitive to the weaknesses of macro data and its ‘fuzziness’ around issues such as spelling, but demonstrated that data on such a large scale reveals changes which might otherwise have passed unnoticed, which in turn can inform new research questions.

The morning then took a decidedly medieval turn, with papers by Dr Stewart Brookes (KCL), Dr David Wrisley (American University of Beirut), Dr Jane Gilbert (UCL), Paul Vetch (KCL) and Neil Jefferies (Bodleian). Brookes presented his team’s DigiPal Database of palaeography in Anglo-Saxon manuscripts. They have classified and logged all the separate elements of a manuscript letter and their variants, allowing scholars to identify a manuscript’s provenance, date and even scribe, as well as to view the evolution of scribal practices. Interestingly, the methods behind DigiPal have now been extended to other languages and centuries, and have even been applied to paratextual elements such as illustrations. This was really useful in demonstrating quite how flexible the methods of Digital Humanities can be. Moving towards Francophone culture, Gilbert and Vetch spoke about how Digital Humanities techniques have been employed on the AHRC project ‘Medieval Francophone Literary Culture Outside France’. Their paper was appropriately named ‘I was but the learner; now I am the master’ and really put the accent on collaboration between the research and Digital Humanities teams. The result was the creation of a novel method to analyse different physical versions of the same text. This was arguably the most helpful session of the day because it really underlined the importance of communication between different specialist skill sets. Remaining with manuscripts, Jefferies talked about ‘Shared Canvas and IIIF’. This is a DMSTech project with numerous collaborators, such as the British Library and the Bibliothèque nationale de France, with a two-fold aim: maximum manipulation for the reader, who can re-piece folded manuscripts and review multiple images with their own annotations, whilst maintaining minimum movement of data.

The afternoon started by focusing on the contribution of the ‘citizen’. Martin Foys (KCL) and Kimberly Kowal (British Library) recounted how they use crowdsourcing for Foys’ project ‘Virtual Mappa’. This was followed by Chris Lintott (Oxford) speaking about his experience in creating and running Zooniverse.org. Lintott and his team realised that the human brain was far more accurate at identifying galaxies than a computer, and that this brain power could be harnessed for the benefit of science through fun activities that allow individuals to contribute to the world of research. This approach has since been extended to multiple projects from various disciplines and countries.

The last two papers, by Jason Sundram (Facebook) and Rosemary Bechler (openDemocracy), moved the day towards digital methods and culture more generally. Sundram explained how he combined his passion for programming and classical music to analyse Haydn recordings, which in turn inform the performances of his quartet. Rosemary Bechler then ended the day with a keynote on how digital methods are driving a revolution in journalism that prioritises the audience. She contended that whereas the passive audience of the past was told what to read through the dominance of the front page, this is now being replaced by social media, which create ‘hubs of interest’ and a much more dynamic, two-way relationship between journalist and reader, allowing for a transnational flow of information.

Although the day could have benefitted from a reversed programme order, offering a softer introduction to Digital Humanities, it was an incredibly useful experience. The combination of numerous research projects and various standpoints within the field of Digital Humanities was thought-provoking. However, regardless of project or position, four key points were reiterated across the papers:

  1. You do not have to be a programmer to use Digital Humanities; programmers can be built into funding bids. However, make sure that they are employed for the right amount of time (i.e. so they are not overworked or twiddling their thumbs).
  2. Make your digital methods re-purposable for other projects.
  3. Envisage how your data will be stored and archived after the end of the project.
  4. Most importantly, critical research questions continue to drive Humanities research, not the tools.


July 07, 2014

Conference Report: Applying New Digital Methods to the Humanities

Writing about web page http://www.dhcrowdscribe.com/

This one-day interdisciplinary workshop set out to question how knowledge creation and production could be advanced through employing existing and emerging tools available in the Digital Humanities. Conveniently based at the British Library, one of the most innovative centres of digital research, this event provided the opportunity for doctoral and early career researchers to learn more about the current research being undertaken in the Digital Humanities and how, as scholars, we can use these techniques to advance the creation and dissemination of knowledge. As a doctoral student interested in looking at new directions for my future research, I thought that this would be an ideal opportunity to learn more about the past, present and future directions of this exciting field.

The programme was varied and stimulating, covering a range of topics, including Big Data, mapping and visualisation methods, audience and database development. The highlight of the day was the keynote presentation from Professor Melissa Terras (Director of Digital Humanities, UCL), who offered some practical advice for scholars considering a Digital Humanities project. This was an interesting and thought-provoking summary of some of the issues that digital humanists face and the types of strategies that should be employed in order to ensure a successful project. One of the best pieces of advice Melissa offered, and one which recurred throughout the day, was to know what the end results and outcomes of the project are. The data, as she made clear, must always be the focus of the research, since it will have a much longer lifetime than the tools themselves.

The rest of the event then turned to consider digital research tools and how they had been developed to address specific research questions. Dr Jane Winters (IHR) explored the types of methods and tools available for Big Data and discussed some of the projects in which she has been involved, including her interdisciplinary work on the Digging into Linked Parliamentary Data project. Dr Stewart Brookes (King's College London) talked about his work on the DigiPal Database of Anglo-Saxon manuscripts, and Dr David Wrisley (American University of Beirut) explored spatial data mapping of medieval literature. Dr Jane Gilbert (UCL) and Dr Paul Vetch (King's College London) presented on how they implemented digital resources for their project on Medieval Francophone Literary Cultures Outside France; Dr Martin Foys (King's College London) and Kimberly Kowal (British Library) spoke about the British Library Georeferencer project and the benefits of crowdsourcing; and Neil Jefferies (University of Oxford) presented on his projects Shared Canvas and IIIF, both of which have been implemented to address specific problems with the presentation of manuscripts in digital software.

One of the outcomes of these presentations was the clear need to create research tools which are ‘repurposable’, i.e. which have a life-cycle beyond the specific project and can be made available for other people to use and adapt. However, one of the gaps in the event was that the presentations focused on tools that had been developed with a very precise project in mind. As a non-specialist, it very much felt as though the focus was on creating a new tool rather than implementing existing software to answer specific research objectives. I therefore felt that some of the discussions would have benefitted from a bit more practical advice about how to source and apply existing research methods. Moreover, whilst these presentations were thoroughly thought-provoking, they did draw attention to one of the big gaps in historians’ knowledge: programming and coding. It would therefore have been helpful to hear more about how to encourage interdisciplinary collaboration with software engineers and programmers, and how to get these people involved in a funding bid.

One of the strengths of the event, however, was its broad emphasis on interdisciplinarity and cross-disciplinary collaboration. Some of the most stimulating papers of the day were from individuals not involved in Digital Humanities projects, but whose work with programming and crowd-sourcing had specific application to Digital Humanities research. This included Dr Chris Lintott’s (University of Oxford) paper on Zooniverse, which has led to internationally acclaimed digital projects and a stronger awareness of the impact that non-specialist audiences can make on research projects. The idea of ‘connecting communities’ was picked up by Jason Sundram, a software engineer at Facebook, who spoke to the delegates about how he had been able to combine the performance, analysis and visualisation of Haydn String Quartets. The final speaker of the day was Rosemary Bechler, an editor at the website openDemocracy, who provided a powerful closing message about the importance of promoting content and connecting with the audience. The act of publishing, she argued, should be part of a bigger drive to expand and connect engaged audiences.

For a researcher only just thinking about the implications of Digital Humanities, this event was an excellent opportunity for me to explore the different ways in which digital research can make a positive impact on my own work. I found the day thoroughly stimulating and enjoyed hearing about the broad range of scholars currently employing these techniques. Since so much of the event was focused on ‘connecting communities’ it seemed particularly appropriate that one output of the event was the fantastic networking opportunity it provided. I am very grateful to the Digital Humanities / Academic Technology team at Warwick for the opportunity to travel to such an intellectually stimulating and highly-relevant workshop. It has also given me the much-needed opportunity to contextualise and consider my research within a wider interdisciplinary framework.

Naomi Pullin
Department of History
