All 2 entries tagged Technology
February 11, 2016
(De)coding Texts – Using the Text Encoding Initiative as a new way to teach the reading of texts
On 3rd March 2016 Alex Peck and Dr Clare Rowan of the Department of Classics and Ancient History will be hosting a two-hour workshop for postgraduates, academics and teaching staff of the university, demonstrating how TEI (the Text Encoding Initiative) can be used as an engaging and stimulating method for teaching and developing text-based skills. The workshop will provide a practical demonstration of how TEI can be used within a teaching environment and will encourage participants to reflect on how such a method could be implemented across a variety of academic disciplines.
To provide a flavour of what participants can expect from the workshop, and of how TEI can be used in undergraduate teaching, Alex Peck has written a brief summary of the TEI-based teaching project (De)Coding Classics, which was piloted in the Department of Classics and Ancient History during the 2015/16 academic year.
(De)Coding Classics was a pilot teaching project that aimed to further develop the text-based skills of critical thinking and close reading through the use of computer coding, namely TEI. The project also looked to expose undergraduates to the world of Digital Humanities and to build an awareness of how many of the online resources they use in the field of Classics and Ancient History are created. The project was inspired by a similar teaching scheme pioneered in the UK by Dr Melodee Beals (then of Sheffield Hallam University, now of Loughborough University), who had used TEI in her History module as an innovative and engaging way to help her undergraduate students analyse primary textual sources effectively.
TEI, or the Text Encoding Initiative, is the recognised standard for creating digitally annotated versions of texts or text corpora. TEI is extremely versatile, providing highly detailed guidelines of 'tags' and 'elements' that can be used to highlight and annotate a vast range of textual features. Its versatility is further demonstrated by the fact that new 'tags' and 'elements' are frequently added to reflect new trends in textual analysis or to expand the range of texts that can be encoded. This versatility renders TEI an effective tool for the teaching of text-based skills.
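To give a concrete sense of what these 'tags' and 'elements' look like, here is a minimal sketch of a TEI document. The sentence and title are invented purely for illustration, but the skeleton of teiHeader, text and body, and elements such as persName and placeName, are standard TEI:

    <?xml version="1.0" encoding="UTF-8"?>
    <TEI xmlns="http://www.tei-c.org/ns/1.0">
      <teiHeader>
        <fileDesc>
          <titleStmt><title>A sample encoded text</title></titleStmt>
          <publicationStmt><p>Teaching example only.</p></publicationStmt>
          <sourceDesc><p>Born-digital illustration.</p></sourceDesc>
        </fileDesc>
      </teiHeader>
      <text>
        <body>
          <!-- persName and placeName are two of the many elements the Guidelines provide -->
          <p><persName>Ptolemy</persName> wrote from <placeName>Alexandria</placeName>.</p>
        </body>
      </text>
    </TEI>

Even in this tiny example, the markup makes explicit the kind of interpretative decision (who counts as a person? what counts as a place?) that close reading normally leaves implicit.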
(De)Coding Classics was implemented in the second-year core module 'The Hellenistic World'. The module cohort was divided into two groups of approximately 35 students, and each group was then led through a highly practical two-hour seminar. The seminars were hosted in R0.39, as this is currently the only computer lab on campus that has oXygen (the easy-to-use coding software that we had elected to use) installed on 25 computers. The students were first given a brief introduction to TEI, shown how digital editions of texts are created and what they look like in their coded form, and encouraged to reflect on the similarities between traditional non-digital text annotation and the practice of coding. Following this relatively short introduction, the students were given a practical step-by-step demonstration of how to encode a simple text using TEI.

Once the students had acquired the basic skills needed to encode texts on their own, each was individually assigned a text taken from a complex anthology (the Archive of Hor), which they then proceeded to transcribe and code as they had done during the demonstration. They were instructed to identify and encode the names of people or deities; topographical information; and statements or facts of historical significance. Once the students had identified these key features, they were instructed to research them and to create a glossary entry for each based on the findings of this research. To conduct this research, the students were encouraged to use the wide range of online resources available in the field of Classics and to record these secondary sources using the department's accepted referencing style. A system was devised to avoid duplicate entries on a single topic or feature.

Once the students had finished their glossary entries, and were thus armed with enough information to embark on a detailed interpretation of their respective texts, they were instructed to write a brief commentary. The coded text, commentary and glossary entries were then added to a tailor-made database on the module website, providing the students with an interactive finished product of their work.
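By way of illustration, the markup the students produced might have looked something like the sketch below. The sentence is invented rather than quoted from the Archive of Hor, and the glossary identifiers are hypothetical, but persName, placeName, seg and the ref attribute are standard TEI mechanisms for precisely this kind of annotation:

    <p>
      <!-- names of people or deities -->
      In a dream, <persName ref="#hor">Hor</persName> was addressed by
      <persName type="deity" ref="#thoth">Thoth</persName>
      <!-- topographical information -->
      at <placeName ref="#memphis">Memphis</placeName>,
      <!-- a statement of historical significance -->
      <seg type="event">foretelling the king's withdrawal from Egypt</seg>.
    </p>

Each ref value here points to the corresponding student-written glossary entry, so that the finished database can link every tagged name or place to the research behind it.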
Although it was only a short pilot project limited to a single seminar slot, (De)Coding Classics illustrated the effectiveness of a practical digital approach to the teaching of texts. This particular method resulted in greater engagement on the part of the students with the texts they were assigned, and thus in a better comprehension of the utility and importance of textual sources for understanding the Hellenistic World. Moreover, the session was able to improve the students' level of digital confidence and competence, a factor that will undoubtedly be increasingly important in the years ahead. There is thus great potential for incorporating further elements of computer coding into the teaching and development of traditional textual and linguistic skills, and such a practice could become an important complementary tool in linguistic and textual teaching.
July 07, 2014
Conference Report: Applying New Digital Methods to the Humanities
Writing about web page http://www.dhcrowdscribe.com/
This one-day interdisciplinary workshop set out to question how knowledge creation and production could be advanced through employing existing and emerging tools available in the Digital Humanities. Conveniently based at the British Library, one of the most innovative centres of digital research, this event provided the opportunity for doctoral and early career researchers to learn more about the current research being undertaken in the Digital Humanities and how, as scholars, we can use these techniques to advance the creation and dissemination of knowledge. As a doctoral student interested in looking at new directions for my future research, I thought that this would be an ideal opportunity to learn more about the past, present and future directions of this exciting field.
The programme was varied and stimulating, covering a range of topics including Big Data, mapping and visualisation methods, and audience and database development. The highlight of the day was the keynote presentation from Professor Melissa Terras (Director of Digital Humanities, UCL), who offered some practical advice for scholars considering a Digital Humanities project. This was an interesting and thought-provoking summary of some of the issues that digital humanists face and the types of strategies that should be employed to ensure a successful project. One of the best pieces of advice Melissa offered, and one which recurred throughout the day, was to know from the outset what the end results and outcomes of the project should be. The data, as she made clear, must always be the focus of the research, since it will have a much longer lifetime than the tools themselves.
The rest of the event then turned to digital research tools and how they had been developed to address specific research questions. Dr Jane Winters (IHR) explored in her presentation the types of methods and tools available for Big Data and discussed some of the projects in which she has been involved, including her interdisciplinary work on the Digging into Linked Parliamentary Data project. Dr Stewart Brookes (King's College London) talked about his work on the DigiPal database of Anglo-Saxon manuscripts, and Dr David Wrisley (American University of Beirut) explored spatial data mapping of medieval literature. Dr Jane Gilbert (UCL) and Dr Paul Vetch (King's College London) presented on how they implemented digital resources for their project on Medieval Francophone Literary Cultures Outside France; Dr Martin Foys (King's College London) and Kimberly Kowal (British Library) spoke about the British Library Georeferencer project and the benefits of crowdsourcing; and Neil Jefferies (University of Oxford) presented on his projects Shared Canvas and IIIF, both of which have been implemented to address specific problems with the presentation of manuscripts in digital form.
One of the outcomes of these presentations was the clear need to create research tools which are ‘repurposable’, i.e. which have a life-cycle beyond the specific project and can be made available for other people to use and adapt. However, one of the gaps in the event was that the presentations focused on a set of tools that had been developed with a very precise project in mind. As a non-specialist, it very much felt as though the focus was on creating new tools rather than implementing existing software to answer specific research objectives. I therefore felt that some of the discussions would have benefitted from a bit more practical advice about how to source and apply existing research methods. Moreover, whilst these presentations were thoroughly thought-provoking, they did draw attention to one of the big gaps in historians' knowledge: programming and coding. It would therefore have been helpful to hear more about how to encourage interdisciplinary collaboration with software engineers and programmers, and how to get these people involved in a funding bid.
One of the strengths of the event, however, was its broad emphasis on interdisciplinarity and cross-disciplinary collaboration. Some of the most stimulating papers of the day came from individuals not involved in Digital Humanities projects, but whose work with programming and crowdsourcing had specific application to Digital Humanities research. This included Dr Chris Lintott's (University of Oxford) paper on Zooniverse, which has led to internationally acclaimed digital projects and a stronger awareness of the contribution that non-specialist audiences can make to research projects. The idea of ‘connecting communities’ was a theme picked up by Jason Sundram, a software engineer who has worked for Facebook, who spoke to the delegates about how he had been able to combine the performance, analysis and visualisation of Haydn string quartets. The final speaker of the day was Rosemary Belcher, an editor for the website openDemocracy, who provided a powerful closing message about the importance of promoting content and connecting with the audience. The act of publishing, she argued, should be part of a bigger drive to expand and connect engaged audiences.
For a researcher only just beginning to think about the implications of the Digital Humanities, this event was an excellent opportunity to explore the different ways in which digital research can make a positive impact on my own work. I found the day thoroughly stimulating and enjoyed hearing from the broad range of scholars currently employing these techniques. Since so much of the event was focused on ‘connecting communities’, it seemed particularly appropriate that one output of the event was the fantastic networking opportunity it provided. I am very grateful to the Digital Humanities / Academic Technology team at Warwick for the opportunity to travel to such an intellectually stimulating and highly relevant workshop. It has also given me a much-needed opportunity to contextualise and consider my research within a wider interdisciplinary framework.