§ 1 Melissa Terras's monograph deals with the complex process of reading ancient documents from a humanities computing perspective. The book focuses on the methodological and technical issues behind the implementation of a system that helps historians read tablets written in Old Roman Cursive script found at the Roman fort of Vindolanda (dating from AD 90 to 120).
§ 2 The study describes a process whereby a set of computational modules developed to annotate and classify satellite images is repurposed to support the reading of the Vindolanda tablets. A specific humanities problem is therefore being used to refine an existing computing system in unexpected ways. The results are encouraging enough to allow the research team to foresee further developments of the system in the near future.[1] The book is a carefully written and focused demonstration of the fruitful connection between various disciplines and perspectives of analysis: from papyrology and cognitive psychology to Artificial Intelligence and statistical linguistics; from palaeography and image-processing to lexicography and pattern recognition.
§ 3 Techniques of knowledge elicitation (such as Think Aloud Protocols) form the basis of the methodological analysis at the centre of the author's investigation. These are used to gain an understanding of how expert papyrologists produce hypotheses for reading from images of texts, starting with the identification of individual letters and ending with the completion of meaningful sentences. The claim is that even quite idiosyncratic styles of and attitudes towards reading a text can still be traced back to a unifying set of procedures. The aim is thus to grasp the structure of this common high-level cognitive process that allows domain experts to interpret more-or-less damaged tablets.
§ 4 The study is based on a rather modest case study: the corpus of tablets is quite small, and it is studied by only a few experts. But the implications of Terras's thesis are far-reaching. In essence, they address the issue of how we build new knowledge on old, a version of McCarty's question, "how [do] we know what we know?" (2005, 25). As stated in the foreword by A. K. Bowman and J. M. Brady, this book aims to find some middle ground between thinking "either that technical solutions to all such problems exist and it is simply a case of finding the right set of existing standard procedures, or that it is impossible for scientific procedures to address the problems of really difficult material because the palaeographer/historian is him- or herself unsure what he or she wants to see and to interpret" (vii).
§ 5 The knowledge elicitation exercises, therefore, have been undertaken with a double purpose: to develop a model of cognitive reading that can be used to implement a computational system able to read words (or, to quote Terras's daring statement, able to "replicate the experts' behaviour" (16)), and, at the same time, to gain insight into the way papyrologists actually work. In other words, the scope is both computational and epistemological. While print scholarship, focused on very fine but static publication formats, has tended to omit documentation of the process by which experts arrive at hypotheses and interpretations, the study behind Image to Interpretation gives a balanced and comprehensive account of its method, taking its failings and learning curve into consideration.
§ 6 There are some issues. As the author herself has pointed out, the observation of the experts at work is based mainly on verbal recordings and therefore lacks the possibly complementary exploration of how these experts analyse visual clues. While the former method highlights the (at least partially) conscious conjectural activities of the papyrologist, the latter could have offered further insight into the less conscious performance of image-based analysis—into the "visual" abstraction that happens backstage. Some hints of this appear in the way experts assign different degrees of importance to certain graphical features in different types of media: a stylus tablet (which is generally more abraded and less readable) as opposed to an ink one (which is generally easier to interpret).
§ 7 Nevertheless, this innovative knowledge elicitation experiment allows the author to model the process of interpretation of the tablet images as "a complex cycle of interlocking elements" (54), where "[the process of reading] depends on the propagation of hypothesis, and the testing of these regarding all available information concerning a text. Reading a document is a process of resolution of ambiguity, and depends on the interaction of all the different facets of knowledge available to the expert" (77).
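To give a flavour of this model (and only that), a deliberately simplified sketch follows: it proposes candidate letters for a damaged glyph, scores whole-word readings against a few of the "facets of knowledge" mentioned above (a toy lexicon and letter frequencies), and keeps the best-supported reading. Every name, word list, and scoring rule in it is invented for illustration and does not come from the book.

```python
# Illustrative toy only: a miniature version of the "propagate and test hypotheses"
# cycle described above. The word list, frequencies, and scoring are invented.

CANDIDATES = {"?": ["a", "o", "u"]}              # plausible letters for one damaged glyph
LEXICON = {"aqua", "acta", "ara"}                # stand-in for Latin lexical knowledge
LETTER_FREQ = {"a": 0.10, "o": 0.08, "u": 0.07}  # stand-in for letter-frequency knowledge

def score(reading: str) -> float:
    """Combine several 'facets of knowledge' into one plausibility score."""
    lexical = 1.0 if reading in LEXICON else 0.1
    frequency = sum(LETTER_FREQ.get(c, 0.05) for c in reading) / len(reading)
    return lexical * frequency

def resolve(damaged_word: str) -> str:
    """Propagate hypotheses for each uncertain character and keep the best reading."""
    readings = [""]
    for ch in damaged_word:
        options = CANDIDATES.get(ch, [ch])
        readings = [r + o for r in readings for o in options]
    return max(readings, key=score)

print(resolve("aq?a"))  # -> "aqua"
```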
§ 8 In applying this work to the Vindolanda tablets, Terras develops and uses a unique and detailed stroke-based palaeographic XML encoding to describe individual strokes on the tablets and annotate digital images of the lines of script from the chosen corpus. These data, together with the guiding model developed from the knowledge elicitation exercises and some statistical linguistic information, are used to train and tailor what was originally a system designed for the interpretation of satellite images: the GRAVA system developed by Paul Robertson (co-author of chapters 4 and 5 and of Appendix A of the book). Having been trained on an initial corpus of letter forms annotated in XML, the system is then able to compare stored models of known letter forms with new incoming strokes and to classify the latter based on its "knowledge" of the former. The assumption is that, beyond the individualities of hands and styles, scripts follow general rules of uniformity; after all, ancient scribes did intend to produce recognisable scripts!
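To make the training-and-classification pipeline just described more concrete, here is a minimal sketch of how stroke-level annotations might be parsed and matched against stored letter models. The XML element and attribute names, the feature set, and the distance measure are hypothetical placeholders, not the schema printed in the book's appendices nor GRAVA's actual classification machinery.

```python
# Hypothetical sketch: parsing stroke-level annotations and matching them against
# stored letter models. Element names, attributes, and the distance measure are
# invented placeholders, not the schema or algorithm used in the book.
import xml.etree.ElementTree as ET

ANNOTATION = """
<letter id="l1">
  <stroke direction="vertical" curvature="0.1" length="0.9"/>
  <stroke direction="horizontal" curvature="0.0" length="0.4"/>
</letter>
"""

# Toy letter models: typical stroke features gathered from an annotated training corpus.
MODELS = {
    "t": [(0.0, 0.1, 0.9), (1.0, 0.0, 0.4)],   # (direction code, curvature, length)
    "i": [(0.0, 0.0, 0.8)],
}
DIRECTION = {"vertical": 0.0, "horizontal": 1.0}

def features(letter_xml: str):
    """Turn an annotated letter into a list of numeric stroke features."""
    root = ET.fromstring(letter_xml)
    return [
        (DIRECTION[s.get("direction")], float(s.get("curvature")), float(s.get("length")))
        for s in root.findall("stroke")
    ]

def distance(observed, model) -> float:
    """Crude dissimilarity between two stroke lists (placeholder for real matching)."""
    if len(observed) != len(model):
        return float("inf")
    return sum(abs(x - y) for so, sm in zip(observed, model) for x, y in zip(so, sm))

def classify(letter_xml: str) -> str:
    """Assign the incoming strokes to the closest stored letter model."""
    obs = features(letter_xml)
    return min(MODELS, key=lambda name: distance(obs, MODELS[name]))

print(classify(ANNOTATION))  # -> "t"
```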
§ 9 The fact that the feature detection agent of the system relies on the XML annotation of the images to identify the key components of the character images may be considered a weakness. Indeed, for an image to be interpreted at all, a textual aid is required, and producing that annotation by hand is far too repetitive, error-prone, and time-consuming to scale. Enhancements at the image-processing level would therefore be fundamental for wider adoption of the system. Moreover, the application as it stands allows for only one cycle of training to create the letter models. It would be desirable to make it more dynamic and open to what the author calls a "feedback mechanism", incorporating the new knowledge that use of the application itself makes available.
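What such a "feedback mechanism" might look like can be sketched briefly. This is the reviewer's illustration of the idea under invented names, not an implementation described in the book, whose application runs a single training cycle.

```python
# Illustrative sketch of a feedback loop: readings confirmed by the expert are fed
# back into the stored letter models so that later classifications benefit from them.
# All names are hypothetical.
from collections import defaultdict
from statistics import mean

class LetterModelStore:
    def __init__(self):
        # letter -> list of feature vectors observed for that letter
        self.examples = defaultdict(list)

    def train(self, letter, feature_vector):
        """Initial training cycle: store an annotated example of a letter."""
        self.examples[letter].append(feature_vector)

    def incorporate_feedback(self, letter, feature_vector):
        """Feedback step: fold a reading confirmed in use back into the models."""
        self.examples[letter].append(feature_vector)

    def model(self, letter):
        """Current model of a letter: the mean of all examples seen so far."""
        return [mean(column) for column in zip(*self.examples[letter])]

store = LetterModelStore()
store.train("a", [0.2, 0.8])                 # from the initial annotated corpus
store.incorporate_feedback("a", [0.4, 0.6])  # confirmed during later use
print(store.model("a"))                      # -> approximately [0.3, 0.7]
```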
§ 10 Although no particular technical competence is required to read and understand the book, the appendices furnish the reader with detailed technical information. They deal with the principles and algorithms of the GRAVA system (the cooperative nature of the stochastic Minimum Description Length modular system, the semantic interaction of its agents, and the approximation methods used to interpret complex images are explained both conceptually and mathematically), as well as with the XML schema used to annotate stroke-level images of letter forms (theoretically applicable to any script). The appendices also include illustrations of the full set of blobs for the letter models and instances (very useful for grasping the challenge of generating models out of idiosyncratic script patterns).
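The Minimum Description Length principle underlying the system can be conveyed with a compact sketch: among competing interpretations, prefer the one whose model cost plus data-given-model cost is smallest. The candidate interpretations and bit costs below are invented for illustration; the book's appendices give the actual formulation.

```python
# Illustrative sketch of Minimum Description Length (MDL) selection: each candidate
# interpretation carries a model cost L(M) and a cost for encoding the data given
# that model L(D|M); the preferred interpretation minimises their sum.
# The candidates and bit costs are invented for illustration only.

candidates = {
    "interpretation A": (12.0, 30.0),  # (L(M), L(D|M)) in bits
    "interpretation B": (10.0, 45.0),
}

def description_length(model_cost: float, data_cost: float) -> float:
    return model_cost + data_cost

best = min(candidates, key=lambda name: description_length(*candidates[name]))
print(best)  # -> "interpretation A": 12 + 30 = 42 bits beats 10 + 45 = 55 bits
```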
§ 11 Unique in its topic and approach, the research presented in this book promises
much for the further refinement of the specific system being used as well as for
the future reading of the Vindolanda tablets (especially if the system can be
made "more intelligent" by the future developments foreseen in the book, such as
incorporating grammatical and semantic information pertinent to the Latin of the
time). Since the application of script recognition to damaged documents
characterises much scholarship in the reading of ancient texts, moreover, this
book will be of interest to a far wider audience than papyrologists alone,
including many readers of Digital Medievalist. The
current level of refinement in the capture and processing of digital images
could integrate well with a trainable modular system for annotating and reading
ancient primary sources such as GRAVA. However, the package has not yet been
publicly released as an integrated desktop application.
§ 12 This monograph will stand as a point of reference for future research in the field of humanities computing. It elaborates scientifically on questions of method proper to the humanities while looking at specific palaeographical and historical issues, treating these as foundations for elaborating abstract computational models. In doing so, the book does not disregard the ambiguity of humanities research, but rather recognises it as lying at the core of a cyclical and intertwined model of generating interpretations.
[1]. In this respect, see the developments of the AHRC-EPSRC-JISC Arts and Humanities e-Science Initiative project Image, Text, Interpretation: e-Science, Technology and Documents (<http://esad.classics.ox.ac.uk/>), funded from 2007 to 2011.
McCarty, Willard. 2005. Humanities Computing. London: Palgrave.