Earlier this week I attended a talk by Ian Gregory of Lancaster University on the ways GIS, and the digital humanities more generally, can be useful in reading historical and literary texts. Gregory has a background in Geography, and especially GIS, and now teaches in a History department. The talk concerned how literary and historical texts can be digitised and then mapped in different ways.
The first example was from the Mapping the Lakes project (and beta enhanced version here) and concerned the geographical references in Gray’s and Coleridge’s travels in the English Lake District. These were texts of only 20,000 words, and most of the coding was done manually. While it allowed the production of some interesting maps, useful for illustration, it still relied on close reading and human involvement.
The second example used the resources of www.histpop.org, a source of mortality data from 1801-1937, comprising c. 13 million words. Obviously a resource of that size would require a team of people, and/or a huge amount of time, to analyse. But using a range of digital techniques, Gregory and his colleagues had been able to trace trends and patterns in the data – specific locations, and the prevalence of ‘unusually common’ words in particular places. This allowed them to map – figuratively and literally – the textual resources of the archive. There is more about the Lancaster Spatial Humanities project available here.
In my research for The Birth of Territory I made quite a lot of use of dictionaries, concordances, lexicons and other bibliographical tools. These didn’t substitute for close reading, but were aids in the process, showing me some of the places I needed to look. Many of these were compiled in pre-digital days, and the labour that went into them was enormous. I did use some digital tools in this work, and I can imagine some really interesting potential projects using some of these techniques, especially as they interface with mapping. For example, I made quite a lot of use of the Monumenta Germaniae Historica, a massive, multi-volume collection of documents from the Holy Roman Empire and the medieval church pertaining to the Empire. It’s an enormous resource, and I found it very hard to navigate. The same goes for the Patrologiae Cursus Completus, a series that runs into hundreds of volumes, collecting writing from the early centuries of the church in Latin and Greek, most of which is also on Google Books (but very hard to search or navigate). Sometimes single works have been digitised to aid scholarship – Aquinas’s Summa Theologica, for example, exists as a website and CD-ROM, but it’s not exactly user-friendly – even the menus are in Latin. Some of Ockham’s writings are only being published online, and the work Werner Stark has done with variant versions of Kant’s lectures will exist in book form only in part, with a digital resource alongside. A map of the places mentioned in the different versions of Kant’s geography lectures would, for example, make an interesting project.
There is a certain irony in that the best talk I’ve heard on the potential for this work was at a university without a Geography department, but in part that was the point – it was organised by Warwick’s digital humanities project to show the potential uses to non-social scientists, even if my experience is that the social sciences are generally much less clued-up on this than is sometimes imagined. As people like my friend and sometime-collaborator Jeremy Crampton have long shown, these kinds of tools can be used for critical, non-reductive research. Much of what Ian Gregory discussed seemed best suited as an aid to close, critical reading, rather than its replacement. I’m not sure how much I will do with this, but it did raise some very interesting ideas.