I wasn’t able to post about yesterday’s Digital Humanities session immediately because of a lack of free wifi, so I saved it for this morning. In the first paper in Monday afternoon’s Digital Humanities session, Hayim Lapin and Yael Netzer spoke about developing a Canonical Text Service (CTS) for Biblical and Rabbinic texts. A CTS is a set of stable ways to refer to texts or parts of texts, each expressed as a URN (Uniform Resource Name). Lapin talked about XML, TEI, and Epidoc, while Netzer talked about the hierarchical structure of references that humans use, in contrast with the arbitrary, non-human-readable DOI. The idea for a CTS began with the Homer Multitext project in 2009. Perseids, which hosts the texts they have been working on, has an API, and Netzer also provided examples of the query commands that can be used with the CTS. The Alpheios interface allows one to see Hebrew and English side by side, and to click on a word in one language and have its equivalent in the other light up. Creating tools that can interact with one another means we can stop reinventing the wheel and can build upon the open data provided by others. Lapin gave the example of five different projects to transcribe and publish in print the oldest Mishnah manuscript, which is in a library in Budapest. He asked why we proceed in a manner that requires replicating the same work.
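To give a concrete sense of what such stable references look like, here is a minimal sketch of a CTS URN and a standard GetPassage request. The namespace, work identifiers, and endpoint are my own hypothetical placeholders illustrating the general scheme, not the actual ones used by Lapin and Netzer’s project:

```python
# A sketch of a CTS URN and a standard GetPassage request. The namespace,
# work identifiers, and endpoint below are hypothetical placeholders that
# illustrate the general scheme, not the actual ones used by this project.
import requests

# CTS URNs are hierarchical and human-readable:
#   urn:cts:<namespace>:<textgroup>.<work>.<version>:<passage>
urn = "urn:cts:ancJewLit:mishnah.berakhot.kaufmann:1.1"  # hypothetical

endpoint = "https://example.org/api/cts"  # hypothetical endpoint
params = {"request": "GetPassage", "urn": urn}

# The service responds with TEI XML wrapping the requested passage.
response = requests.get(endpoint, params=params)
print(response.text)
```

The point of the hierarchy is that a human can read and even guess such references, unlike a DOI.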
Gaia Lembi spoke about a project that she worked on with Michael Satlow, Inscriptions of Israel/Palestine (IIP). These inscriptions are important for a variety of reasons, including history and the development of ancient languages, but also because they provide evidence of religious views that had been censored and so might not be known from other texts. Challenges include the fact that some inscriptions are multilingual, the use of Epidoc with Hebrew and Aramaic, and the inclusion of substantial contextual information such as images and geographical data. Lembi presented an ostracon found at Masada as an example. One project aim is to link to other resources, such as Pleiades and the Getty Art and Architecture Thesaurus. They have recently been updating the website to make it more user friendly. The search page has multimedia content, such as an interactive map, to contextualize the inscriptions.
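Since the Epidoc challenge came up, here is a minimal, hypothetical sketch of what EpiDoc-style TEI markup for a bilingual inscription can look like, with each language in its own text part. The wording and markup are purely illustrative, not IIP’s actual encoding of any inscription:

```python
# A hypothetical sketch of EpiDoc-style TEI markup for a bilingual
# inscription; the text and markup are illustrative, not IIP's encoding.
import xml.etree.ElementTree as ET

epidoc = """<div xmlns="http://www.tei-c.org/ns/1.0" type="edition">
  <div type="textpart" xml:lang="grc">
    <ab><lb n="1"/>ενθαδε κειται <gap reason="lost"/></ab>
  </div>
  <div type="textpart" xml:lang="he">
    <ab><lb n="1"/>שלום</ab>
  </div>
</div>"""

root = ET.fromstring(epidoc)
ns = {"tei": "http://www.tei-c.org/ns/1.0"}
XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"

# Each language of a multilingual inscription sits in its own text part.
for part in root.findall("tei:div", ns):
    print(part.get(XML_LANG), "->", "".join(part.itertext()).strip())
```

Keeping each language in a separate text part is what makes it possible to search and display the Greek, Hebrew, and Aramaic portions independently.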
David Van Acker spoke about his ongoing doctoral work at Leuven, evaluating William Wickes’ treatise on the accents in the Masoretic text. He found his way to this topic by looking at the conjunction ki, which is marked with a disjunctive accent in 506 out of 2050 verses. Only 2% of the cases could be explained in terms of sentence length, and so he tried adding other variables, such as the accent which follows. He also tried filtering out the rules of the accent system in order to focus on the exceptions. Van Acker noted the importance of context, and yet also emphasized the difficulty of knowing what to consider an accent’s context. He also explored the possibility that the groupings of words within a sentence might be relevant. Future work might explore whether interrogative sentences indicate the use of pitch in corresponding ways. I wondered whether computer analysis could be applied to the shift from one symbol to another, since the frequency of intervals rather than the frequency of individual notes might be more instructive in evaluating whether certain musical interpretations of the symbols are plausible. I cannot remember whether Bob MacDonald has done that.
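To make that musing a little more concrete, here is a sketch of the sort of analysis I have in mind: counting transitions between successive accent symbols (the “intervals”) rather than the frequency of each symbol on its own. The accent sequence below is invented purely for illustration:

```python
# A sketch of the analysis I was imagining: counting transitions between
# successive accent symbols (the "intervals") rather than the frequency of
# each symbol alone. The accent sequence is invented for illustration.
from collections import Counter
from itertools import pairwise  # Python 3.10+

accents = ["munach", "zaqef", "mercha", "tipcha", "munach",
           "etnachta", "mercha", "tipcha", "silluq"]

# Count how often each accent is immediately followed by each other accent.
transitions = Counter(pairwise(accents))

for (first, second), count in transitions.most_common():
    print(f"{first} -> {second}: {count}")
```

Run over the whole Masoretic corpus, a table like this might show whether the movement between symbols is patterned in ways that particular musical interpretations would predict.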
The next presentation was about bibletraditions.org, given by Olivier-Thomas Venard. He considers this project the successor to the École Biblique’s Jerusalem Bible. His presentation began with the dialectics of ancient, medieval and Renaissance, and Gutenberg-era textuality. In the era of scriptio continua, it was necessary to know what a text said before reading it. He then demonstrated how the sigla representing different versions move across the text, so that one can see visually how the versions converge and diverge from verse to verse. Venard compared it to a polyphonic musical score. After highlighting the inclusion of reception in authors and the arts, he spoke of the project as recapturing the ancient liquidity or fluidity of scripture. Venard mentioned the capacity of digital editions to treat the tetragrammaton the way ancient manuscripts did, and also talked about Origen’s Hexapla and the way it was made possible by the new technology of the codex (bringing up an article on historyofinformation.com). He also mentioned the biblindex.info website and the Sinai Palimpsest project that was a focus of part of the morning session. Churches have historically gravitated towards having the Bible in one version and language, but in its very essence the Bible is fundamentally plural, bringing together multiple works and traditions. He showed a lovely psalter page from www.e-codices.unifr.ch. Glossae.net also has interesting and helpful features. One aim of bibletraditions.org is to surround the text with rich intertextual information. Not having paced his presentation well, Venard then jumped to the Gutenberg era, sharing an image from gutenbergdigital.de.
Jan Krans talked about the Amsterdam Database of New Testament Conjectural Emendations, and how it provides a great example of international collaboration. It used free software, although the data itself is not completely open. Half an hour before he presented, the transcription of Acts had gone live, and so he showed Acts 16, using the ECM text. The database links to church fathers and other versions. He chose a verse in Acts that had eight conjectures (most verses have none, or only one). There turned out to be six thousand conjectures in all, far more than they expected, and there may be still more. They set up the database to make it easy to add additional material. Most conjectures date from before 1920, meaning that most of them can be found in books and articles that are no longer under copyright. Krans next turned to the reception history of conjectures, which they also incorporated into the database. The ECM actually incorporates a conjecture into its text in Acts 13:33. Krans recommended using the database not as an end in itself, but as a stimulus to new research. He hinted that the distinction between variant readings and conjectural emendations might be considered a problematic one, since ancient scribes found things odd in the text and sometimes responded by making changes reflecting their own conjectures. Krans also looked at the example of epiousion in the Lord’s Prayer, which has no variants in Greek, presumably because the word’s meaning was opaque. In translations, however, we find a diverse array of renderings. If we proceed based on the assumption that texts make sense, emendations are perhaps unavoidable.
Finally, Joerg Roeder spoke about HyperNT, a database of the reception history of the New Testament that is currently being developed. HyperHamlet provides an example of what they are trying to do. Roeder spoke of the New Testament as a “hypertextual phenomenon,” going on to quote Northrop Frye. The historical-critical method has largely been blind to reception history, though this began to change under the influence of Gadamer, and then of Ulrich Luz in the field of New Testament Wirkungsgeschichte.
I’ll be presenting on Canon: The Card Game in the Digital Humanities session this morning. It was really cool when my friend Ken Schenck told me that his students play the game; he had no idea that I had invented it!