§ 1 The term "early adopter" originates with Everett Rogers'
1962 book Diffusion of Innovations, where it refers to the
second of five audiences for new ideas, technologies, or products. The five audiences
are innovators, early adopters, early majority, late majority, and laggards. These
five form a bell curve, with most people in the early majority or late majority.
Rogers’ statistical model of diffusion of innovation has been applied in many areas,
from public health to agriculture to the adoption of the internet. In the 2003
edition of his book, Rogers distinguishes between accounts of diffusion that focus on
"people differences" in innovativeness (that is, in determining the
characteristics of the different adopter categories) and the less common analysis of
"innovation differences" (that is, in investigating how the perceived
properties of innovations affect their rate of adoption) (Rogers 2003, 219). In using the term
"early adopters" to characterize
the relationship of medievalists to information technology, we should consider both
the needs and interests of medievalists and the properties of information technology
that are likely to have appealed to them. But first, there is a factual question:
were medievalists early adopters of information technology? The answer to this
question is, decidedly, yes.
§ 2 The Digital Humanities community represented, for example, by the annual Digital Humanities conference (e.g., DH2011, held at Stanford University) and by the Blackwell Companion to Digital Humanities traces its roots to the work of Father Roberto Busa, S.J., who, in his foreword to the Companion, writes:
During World War II, between 1941 and 1946, I began to look for machines for the automation of the linguistic analysis of written texts. I found them, in 1949, at IBM in New York City. Today, as an aged patriarch (born in 1913) I am full of amazement at the developments since then; they are enormously greater and better than I could then imagine. Digitus Dei est hic! The finger of God is here! [….] In the course of the past sixty years I have added to the teaching of scholastic philosophy, the processing of more than 22 million words in 23 languages and 9 alphabets, registering and classifying them with my teams of assistants. Half of those words, the main work, are in Latin. (Busa 2004, xvi)
§ 4 To put this in historical context, Busa began looking for
machines that could automate
the linguistic analysis of written texts
before the U.S. Army contracted with the University of Pennsylvania for the
development of ENIAC, the first general-purpose electronic computer, in 1943. Fr.
Busa began his work on a concordance of the works of St. Thomas Aquinas in 1946, the
same year in which ENIAC was unveiled. He began working with IBM in 1949, and he
produced a sample proof-of-concept, machine-generated concordance in
1951, two years before IBM delivered its first computer, the 701 (Winter 1999, 4-5). In terms of Rogers' typology, this
makes Busa himself an innovator, rather than an early adopter, since his
interactions with Thomas Watson and Paul Tasman at IBM demonstrably helped to shape
thinking at IBM about the possible uses of their equipment (see Tasman, qtd. in Winter, 3).
§ 5 There were other early adopters of computers in the humanities, especially for concordancing:
The first biblical concordance created by computer was the Complete Concordance of the Revised Standard Version of the Bible, edited by the Reverend John W. Ellison. When the full Revised Standard Version appeared in 1952, Ellison was already at work doing biblical research using the computer. In 1951, the American Philosophical Society granted him funding to attempt to trace the relationship of Gospel manuscript traditions by collating various versions by computer. Ellison reportedly "deplored the idea that scholars with two or three doctoral degrees apiece should sit around sorting words" (Burton 1981b, 6). He believed that the concordance could be produced by computer, and chose the Remington Rand Company's Univac, one of the first computers to accept alphabetic input. (Hotchkiss 1998-1999, section 13)
§ 7 In the 1960s, David Packard produced a machine concordance of the works of Livy. As Gabriel Bodard and Simon Mahony noted in the 2008 issue of Digital Medievalist,
Classicists have long been at the forefront of the Humanities in the use of computing for publishing, analysing, processing, and researching texts, objects, and data. This tendency can partly be explained with reference to two observations: (1) the complexity of the textual, historical, linguistic, material, and artistic sources that need to be considered in classical scholarship, and (2) the patchy coverage and fragmentary state of many of these same artefacts. (Bodard 2008)
§ 9 Biblical scholars, classicists, and medievalists share some characteristics as early adopters: they work with a limited set of texts, substantial portions of which are in the Roman alphabet, and, historically, they have often taken a philological approach to their materials, focusing on the words themselves, or a philosophical approach, focusing on concepts that could be tracked through words. From Fr. Busa's point of view, philology and philosophy are directly connected:
Grammar is the foundation of philosophy. Philosophy aims at unifying synthesis of the whole cosmos. Examining those grammatical words is the only possible path leading to and documenting such a synthesis, when near to its goal. When I say that such hermeneutics is computerized, I mean computer assisted: the scholar makes the computer perform firstly all the operations of assembling, ordering, re-ordering, summarizing etc., and secondly all the searches for single data or groups of data which every heuristic strategy requires, one after the other. (Busa qtd. in Vanhoutte 2006)
§ 11 So, whether you are interested in philology or philosophy,
the concordance—an alphabetical list of the words contained in a body of text, with
some contextual information—is an extremely useful tool. As Ellison noted, though,
concordances can be mind-numbingly tedious to produce. Once computers could accept
alphabetical input, it was easy to see how such machines could be useful, at least
for some scholars of Latin texts. What probably required a bit more prescience was to
understand that, eventually, these early and enormous computers (the UNIVAC weighed
29,000 pounds) would produce reference tools that were less bulky and easier to use
than printed concordances. Busa, in his foreword to the Companion, reflects on what he calls the three phases of
miniaturization that characterized the development of his Index Thomisticus:
The first one lasted less than 10 years. I began, in 1949, with only electro-countable machines with punched cards. My goal was to have a file of 13 million of these cards, one for each word, with a context of 12 lines stamped on the back. The file would have been 90 meters long, 1.20 m in height, 1 m in depth, and would have weighed 500 tonnes. In His mercy, around 1955, God led men to invent magnetic tapes. The first were the steel ones by Remington, closely followed by the plastic ones of IBM. Until 1980, I was working on 1,800 tapes, each one 2,400 feet long, and their combined length was 1,500 km, the distance from Paris to Lisbon, or from Milan to Palermo. . . . I finished in 1980 (before personal computers came in) with 20 final and conclusive tapes, and with these and the automatic photocompositor of IBM, I prepared for offset the 20 million lines which filled the 65,000 pages of the 56 volumes in encyclopedia format which make up the Index Thomisticus on paper. The third phase began in 1987 with the preparations to transfer the data onto CD-ROM. The first edition came out in 1992, and now we are on the threshold of the third. The work now consists of 1.36 GB of data, compressed with the Huffman method, on one single disk. (Busa 2004, xvi-xvii)
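The basic operation behind all of these projects, attaching each word of a text to the context in which it occurs and sorting the result alphabetically, is simple enough to sketch in a few lines of code. The sketch below is a minimal modern illustration, not Busa's or Ellison's actual method; the regular-expression tokenizer, the lowercasing, and the context width are arbitrary choices made for the example.

```python
import re
from collections import defaultdict

def build_concordance(text, width=30):
    """Map each word (lowercased) to the list of contexts in which it occurs."""
    concordance = defaultdict(list)
    # Find each run of letters and record a slice of surrounding text
    # as its context -- the analogue of Busa's 12 lines on the back of a card.
    for match in re.finditer(r"[A-Za-z]+", text):
        word = match.group().lower()
        start = max(0, match.start() - width)
        end = min(len(text), match.end() + width)
        concordance[word].append(text[start:end])
    # Sort the headwords, as in a printed concordance.
    return dict(sorted(concordance.items()))

sample = "In the beginning was the Word, and the Word was with God."
for word, contexts in build_concordance(sample, width=15).items():
    print(word, contexts)
```

What made the work of Busa's era heroic was not the logic, which is as simple as the above, but the scale: applying this operation to millions of words with punched cards and tape rather than random-access memory.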
§ 14 While Busa was working his way from millions of punch-cards to a single CD-ROM, other scholars were also experimenting with the increasing range of scholarly affordances offered by new developments in information technology. As Susan Hockey writes in her chapter on "The History of Humanities Computing" in the Blackwell Companion:
By the 1960s, other researchers had begun to see the benefits of working with concordances. A series of four articles by Dolores Burton in the journal Computers and the Humanities in 1981–2 attempted to bring these together, beginning with a discussion of the 1950s (Burton 1981a, 1981b, 1981c, 1982). Some of these researchers were individual scholars whose interests concentrated on one set of texts or authors. In the UK, Roy Wisbey produced a series of indexes to Early Middle High German texts (Wisbey 1963). In the USA Stephen Parrish's concordances to the poems of Matthew Arnold and W.B. Yeats introduced the series of concordances published by Cornell University Press (Parrish 1962). This period also saw the establishment of computing facilities in some major language academies in Europe, principally to assist with the compilation of dictionaries. Examples include the Trésor de la Langue Française (Gorcy 1983), which was established in Nancy to build up an archive of French literary material, and the Institute of Dutch Lexicology in Leiden (De Tollenaere 1973). (Hockey 2004, 4)
§ 16 As Hockey tells the story, the 1970s were a period of consolidation, marked by the creation of some key organizations that still help to organize the field, including the Association for Literary and Linguistic Computing (ALLC) and the Association for Computers and the Humanities (ACH). With respect to research and tool-building, some of this decade's most important work was on concordancing programs, particularly COCOA, developed at the Atlas Computer Laboratory (the UK equivalent of an early supercomputer center), and its successor, the Oxford Concordance Program (OCP).
§ 17 In the 1980s, the application of databases extended to new kinds of material—for example, with the gradual development of the Cantus database of medieval chants, inspired in part by the 1966 work of W. H. Frere, Antiphonale Sarisburiense. The first computational foray in this endeavor was Ronald Olexy's 1980 dissertation at Catholic University, followed by pilot work that found funding and institutional support starting in 1987. Cantus continues to the present day (now with institutional support from the University of Waterloo) and was among the projects discussed at the MARGOT conference. The 1980s also saw the near alignment of computational linguistics and humanities computing in a shared endeavor to develop non-proprietary markup for text, culminating in the founding of the Text Encoding Initiative in the late 1980s, as a project jointly supported by the ALLC, the ACH, and the Association for Computational Linguistics (ACL). That alignment ultimately failed, partly because the humanists became interested in text analysis, authorship attribution, and electronic editions, and partly because computational linguistics found an audience beyond the humanities in defense and commercial applications of speech analysis. Today, as Digital Humanities scholars become more interested in text-mining, the tools and techniques of computational linguistics (such as natural-language processing software) are once again becoming relevant, and these fields seem likely to align more closely in the years to come.
§ 18 From the point of view of literary computing, though, the
1990s were the heyday of experimentation with electronic scholarly editions.
Beginning a few years before the Web reached a broad public in the mid-1990s, Kevin Kiernan's
image-based edition of Beowulf (now in its third
technological incarnation and edition) exploited computer graphics to allow the
reader simulated access to original artifacts in the context of editorial apparatus,
transcripts, and collations (Kiernan 1991). At about
the same time, Peter Robinson began working on the
Wife of Bath’s
Tale, as the first installment in an electronic edition of The Canterbury Tales (Robinson
1998). Shortly thereafter, in 1993, the University of Virginia’s Institute
for Advanced Technology in the Humanities (IATH) engaged in a number of different
electronic scholarly editing projects—from Piers Plowman to
Uncle Tom’s Cabin, from Blake and Rossetti to Whitman and
Dickinson. Hoyt Duggan's work at IATH on Piers Plowman,
published by The Society for Early English and Norse Electronic Texts, using SGML and
then XML, is an example of philological computing developing into a full-blown
electronic edition (Duggan et al.,
1998-2008). This work directly informed the revision of the Guidelines provided
by the Modern Language Association’s Committee on Scholarly Editions, ultimately
resulting in the 2006 publication of Electronic Textual
Editing (Burnard et al. 2006).
§ 19 The late 1990s and the first decade of the 21st century have seen an expansion of capabilities in our computing environments, and a corresponding expansion of imagination among scholars who work with medieval materials. In this period, the Digital Image Archive of Medieval Music appeared, organized at Oxford University. Marion Roberts' Salisbury Project, launched in 1998 at the University of Virginia, presents over 2,000 photographs of the interior and exterior of Salisbury Cathedral, plus maps, a 3-D model that demonstrates the construction techniques used in building the cathedral, and teachers' guides. McCormick et al.'s Digital Atlas of Roman and Medieval Civilization (DARMC), begun in 2007,
…makes freely available on the internet the best available materials for a Geographic Information Systems (GIS) approach to mapping and spatial analysis of the Roman and medieval worlds. DARMC allows innovative spatial and temporal analyses of all aspects of the civilizations of western Eurasia in the first 1500 years of our era, as well as the generation of original maps illustrating differing aspects of ancient and medieval civilization. (arts-humanities.net)
§ 21 Begun at about the same time, the Digital Mappaemundi (DM) project (work from which was presented at the 2010 MARGOT conference) also focuses on maps, though without the emphasis on GIS; instead, DM is meant to allow scholarly users to study medieval maps in the context of geographically oriented text resources, and to allow users to annotate the maps and provide descriptive metadata.
§ 22 Paleography has a long history as a method in medieval studies, and over the years, scholars have adopted new technologies to advance their investigations, beginning with photography, in the 19th century, and later including ultraviolet light, medical imaging technology, and fiber-optics (Prescott 1997). Digital studies of paleography have also become more common in recent years, and such studies follow a variety of methods, from statistical to optical, in order to classify, ascribe, or decipher ancient handwriting—see, for example, Derolez (2003), Ciula (2005), Terras (2006). Two of the papers at the MARGOT Conference (Cullin, Smit) discussed paleography in quite different contexts and from different perspectives.
§ 23 Less familiar in purpose and method, but interesting nonetheless, are even more recent efforts to apply novel computational methods to explore completely new kinds of questions in medieval studies. For example, two French researchers applied mathematical analysis of networks to understand medieval social networks, using a database of agrarian contracts from the southwest of France, between 1240 and 1520 (Villa-Vialaneix 2007). And medievalists have collaborated with computer scientists at the University of Birmingham to perform agent-based simulations that are meant to
… explore the military-logistical context of the Battle of Manzikert in 1071. Manzikert is a key historic event in Byzantine history. The defeat of the Byzantine army by the Seljuk Turks, and the following civil war, resulted in the collapse of Byzantine power in central Anatolia. Given the key position this event takes within the collapse of Byzantine power, the lack of consensus between historians on the numbers of men involved at, or even the route taken by the Byzantine Army to, Manzikert is profound. (MWGrid)
§ 25 This elliptical history of digital medievalism, as well as the collection of essays that it introduces, amply demonstrates that medievalists are often early adopters of new technologies, and also that medievalists have benefitted, time and again, from their interactions with other fields. As the essays from the 3rd International MARGOT Conference show, medievalists continue to be interested in exploring what new technology can bring to some well-established scholarly practices and genres, including paleography, codicology, translation, dictionaries, corpus studies, and manuscript studies. However, these essays suggest that they are equally interested in new practices and new genres—for example, GIS, digital analysis of music, information modeling, image annotation, digital preservation, and virtual reality.
§ 26 Finally, along with the rest of the humanities, medieval studies faces the need to rethink its pedagogy for a new generation of students, to reconceive and redescribe its place in the 21st-century college and university, and to refine its strategies for funding research that becomes more expensive as it becomes more expansive. Digital projects, in any humanities discipline, tend toward collaborative, multi-institutional, and multi-disciplinary teams — a tendency that maps well to current funding patterns and priorities in many areas, but one that will also require researchers to look for support outside the limited number of funders who have traditionally supported research on medieval literature and culture. Happily, both the oldest and the most recent examples of medievalists as early adopters make it clear that this is a branch of the humanities that has great potential to be fruitful for partners in science and even in industry. Medievalists would do well to claim their history as innovators and to cite their repeated role, as early adopters, in the diffusion of innovations to broader audiences and more general purposes.
Arts-humanities.net. Entry on Digital atlas of Roman and medieval civilization (DARMC). http://www.arts-humanities.net/projects/digital_atlas_roman_medieval_civilization_darmc. Accessed June 28, 2011.
Bodard, Gabriel and Simon Mahony. 2008. Though much is taken, much abides: Recovering antiquity through innovative digital methodologies. Introduction to the special issue. Digital Medievalist 4. Accessed June 28, 2011.
Burnard, Lou, Katherine O’Brien O’Keeffe, and John Unsworth, eds. 2006. Electronic textual editing. New York: Modern Language Association. http://www.tei-c.org/About/Archive_new/ETE/Preview/. Accessed June 28, 2011.
Burton, D. M. (1981a). Automated concordances and word indexes: The fifties. Computers and the Humanities 15: 1–14.
Burton, D. M. (1981b). Automated concordances and word indexes: The early sixties and the early centers. Computers and the Humanities 15: 83–100.
Burton, D. M. (1981c). Automated concordances and word indexes: The process, the programs, and the products. Computers and the Humanities 15: 139–54.
Burton, D. M. (1982). Automated concordances and word indexes: Machine decisions and editorial revisions. Computers and the Humanities 16: 195–218.
Busa, Roberto A. Foreword: Perspectives on the Digital Humanities. In Schreibman, Susan, Ray Siemens and John Unsworth, eds. A companion to Digital Humanities, Oxford: Blackwell, 2004: xvi-xxi. http://www.digitalhumanities.org/companion/. Accessed June 28, 2011.
CANTUS: A database for Latin ecclesiastical chant. http://publish.uwo.ca/~cantus/index.html. Accessed June 28, 2011.
Ciula, Arianna. 2005. Digital palaeography: Using the digital representation of medieval script to support palaeographic analysis. Digital Medievalist 1 (Spring). http://www.digitalmedievalist.org/journal/1.1/ciula/. Accessed June 28, 2011.
De Tollenaere, F. The problem of the context in computer-aided lexicography. In A. J. Aitken, R. W. Bailey, and N. Hamilton-Smith (eds.), The computer and literary studies. Edinburgh: Edinburgh University Press, 1973: 25–35.
Derolez, Albert. 2003. The palaeography of Gothic manuscript books from the twelfth to the early sixteenth century. Cambridge: Cambridge University Press.
Digital Humanities 2011. https://dh2011.stanford.edu/. Accessed June 28, 2011.
Duggan, Hoyt, et al. 1998-2008. The Society for Early English & Norse Electronic Texts, Series A, Ann Arbor, MI: University of Michigan Press, 2000-2002, and Rochester, NY: Boydell and Brewer, 2004-2008. Series B, Ann Arbor, MI: University of Michigan Press, 1998-2001. Available volumes listed at http://www3.iath.virginia.edu/seenet/publications.html. Accessed August 23, 2011.
Gorcy, G. 1983. L'informatique et la mise en œuvre du trésor de la langue française (TLF), dictionnaire de la langue du 19e et du 20e siècle (1789–1960). In A. Cappelli and A. Zampolli, eds., The possibilities and limits of the computer in producing and publishing dictionaries: Proceedings of the European Science Foundation workshop, Pisa 1981. Linguistica Computazionale III. Pisa: Giardini. 119–144.
Hockey, Susan. 2004. The History of Humanities Computing. In Schreibman, Susan, Ray Siemens and John Unsworth, eds. A companion to Digital Humanities, Oxford: Blackwell, 2004: 3-19. http://www.digitalhumanities.org/companion/. Accessed June 28, 2011.
Hotchkiss, Valerie and Charles C. Ryrie. 1998-1999. Formatting the word of God. An exhibition at Bridwell Library Perkins School of Theology, Southern Methodist University, October 1998 through January 1999. http://smu.edu/bridwell_tools/publications/ryriecatalog/titlepg.htm. Accessed June 28, 2011.
Kiernan, Kevin S. 1991.
Digital image processing and the Beowulf manuscript. Literary and
Linguistic Computing 6: Special Issue on Computers and Medieval Studies
(Edited by Marilyn Deegan with Andrew Armour and Mark Infusino), 20-27.
McCormick, Michael, Guoping Huang, Kelly Gibson et al., eds. Digital atlas of Roman and medieval civilizations. http://medievalmap.harvard.edu/icb/icb.do. Accessed June 28, 2011.
MWGrid: Medieval warfare on the grid. Accessed June 28, 2011.
Olexy, Ronald T. 1980. The responsories in the 11th-Century Aquitanian antiphoner Toledo, Bibl. Cap. 44.2. Ph.D. dissertation, Catholic University of America.
Parrish, S. M. 1962. Problems in the making of computer concordances. Studies in Bibliography 15: 1–14.
Prescott, Andrew. 1997. The electronic Beowulf and digital restoration. Literary and Linguistic Computing 12: 185-95. http://llc.oxfordjournals.org/content/12/3/185.abstract. Accessed June 28, 2011.
Roberts, Marion. The Salisbury project. http://salisbury.art.virginia.edu/. Accessed June 28, 2011.
Robinson, Peter and Kevin Taylor. 1998. Publishing an electronic textual edition: The case of The Wife of Bath's prologue on CD-ROM. Computers and the Humanities 32.4: 271-284. http://www.jstor.org/stable/30200468. Accessed June 28, 2011.
Rogers, E. M. 2003. Diffusion of innovations. 5th ed. New York: Free Press.
Schreibman, Susan, Ray Siemens and John Unsworth, eds. 2004. A companion to Digital Humanities, Oxford: Blackwell. http://www.digitalhumanities.org/companion/. Accessed June 28, 2011.
Terras, Melissa. 2006. Image to interpretation: Intelligent systems to aid historians in the reading of the Vindolanda texts. Oxford Studies in Ancient Documents. Oxford: Oxford University Press.
Vanhoutte, Edward. 2006. The presence of Busa. Humanist 20.165, August 24. http://www.digitalhumanities.org/humanist/Archives/Virginia/v20/0164.html. Accessed June 28, 2011.
Villa-Vialaneix, Nathalie and Romain Boulet. 2007. Clustering a medieval social network by SOM using a kernel based distance measure. In Proceedings of ESANN 2007; European Symposium on Artificial Neural Networks (April): 31-36. http://www.nathalievilla.org/IMG/pdf/presentation.pdf. Accessed June 28, 2011.
Winter, Thomas. 1999. Roberto Busa, S.J., and the invention of the machine-generated concordance. The Classical Bulletin 75:1: 3-20. http://digitalcommons.unl.edu/classicsfacpub/70/. Accessed June 28, 2011.
Wisbey, R. 1963. The analysis of middle High German texts by computer: Some lexicographical aspects. Transactions of the Philological Society, 62.1 (Nov.):28–48. http://onlinelibrary.wiley.com/doi/10.1111/j.1467-968X.1963.tb00999.x/abstract. Accessed June 28, 2011.