New Tools for Exploring, Analysing and Categorising Medieval Scripts

Abstract

In this paper we introduce the numerical tools developed in the context of the Graphem project in order to automate or assist several steps in the study of medieval writing samples. We first describe the various kinds of features that have been extracted from the samples, and then present two graphical tools for comparing writing samples according to these features.

Keywords

Palaeography, Pattern Recognition, Content-based Image Retrieval, Data Visualization

How to Cite

Cloppet, F., Daher, H., Églin, V., Emptoz, H., Exbrayat, M., Joutel, G., Lebourgeois, F., Martin, L., Moalla, I., Siddiqi, I. and Vincent, N., 2012. New Tools for Exploring, Analysing and Categorising Medieval Scripts. Digital Medievalist, 7. DOI: http://doi.org/10.16995/dm.44


Introduction

§ 1 The use of digital tools in the medieval humanities has grown considerably over the past few years. Such tools can be of various kinds, from structuring tools (such as XML files or relational databases, which might be used to store and query catalogues or notes) to exploration environments, such as interactive geographical software.

§ 2 Graphem is a French research project that brings together palaeographers and computer scientists to study and develop tools for exploring, analysing and categorising medieval scripts. Two complementary directions have been explored in this context. The first was to identify similar writings: in other words, given a writing sample, how to search for and find the most similar items in a large set of samples. The second was to help palaeographers organise writings in a consistent way, that is, to structure a large set of writing samples globally. The assumption underlying this second task was that an automated or semi-automated organisation of writings would limit the impact of human factors and might lead towards a unified and generally accepted structure, whereas several standard manual classifications of medieval scripts currently coexist.

§ 3 Among others, there are two noteworthy recent contributions of the Graphem project to the emerging field of digital palaeography. The first lies in its transverse nature: various kinds of feature extraction, image retrieval and visualization methods are integrated in a small set of interoperable tools, and thus become interchangeable and comparable. The second lies in its challenging focus on writing style rather than on individual hands. Several other works have been undertaken during the past few years, such as SPI, DamalS and Quill. The "System for Paleographic Inspections" (Aiolli et al. 1999, Aiolli et al. 2009) is a vertical set of tools, which supports classification tasks based on character segmentation and the computation of mean character values. While comparable in its vertical nature, this project is much less transverse than Graphem, as it focuses, for instance, on a single kind of feature. "DamalS" (Hofmeister et al. 2009) might be the most comparable project, as it assembles three kinds of tools: XML transliteration, statistical data and image retrieval techniques. DamalS is oriented towards writer's hand recognition, which differs somewhat from our style-oriented recognition task. "Quill" (Aussems and Brink 2009) also focuses on writer's hand identification, based on ink direction and width; once again, this tool concentrates on a single kind of feature and aims at identifying hands rather than styles. We should also cite the work by Wolf et al. on the Cairo Genizah manuscripts, a large set of scattered writing fragments (Wolf et al. 2011). This work relies on the extraction of keypoints (which, in this case, mostly correspond to characters); these keypoints are clustered to form compact dictionaries that serve as the basis for handwriting matching and palaeographic classification.

§ 4 This paper is organised as follows: we will first recall the basic steps from a physical writing sample to its computer-usable representation. Second, we will focus on the various digital representations of writing samples that have been studied within our project, and discuss their respective benefits and drawbacks. We will then present our graphical tools. Lastly, we will summarize our findings.

Feature extraction: Towards a computer-consistent view of manuscripts

Basic principles

§ 5 Several steps must be followed in order to transform a physical writing sample into something that can be handled by a computer. The first obvious task consists in digitizing the sample, which produces a digital picture. Computationally speaking, this picture consists of a set of pixels that are organised in a two-dimensional, usually rectangular, matrix.

§ 6 While such a step is necessary, it remains insufficient for the tasks we consider hereafter. First, we seek to compare large sets of samples. A direct, pixel-level comparison of pictures would be computationally expensive, as pictures commonly consist of tens of thousands of pixels; moreover, it would not make sense, as what we look for are similar writings rather than fully similar pictures. A second step is consequently necessary, which extracts relevant data from these raw digital pictures. By relevant data we sometimes mean a noticeable sub-part of the picture, but more generally an abstract piece of information that can be expressed numerically, such as the average slope of characters. Such a numerical fact is called a feature. The lower right-most picture in Figure 1 gives an example of numerical facts that have been computed from the central digitized picture. Each number corresponds to the intensity of a given numerical measurement, or feature. We can see that a large set of features is associated with a single picture. In some cases, a graphical representation of these features can be obtained, as in the upper right-most picture, which might be exploited directly by an expert in this kind of feature, but probably not by a palaeographer.

Figure 1: From a manuscript to a set of features.

§ 7 An extremely large set of features can be built from a single picture, depending on the goals. Let us consider a case where colour is relevant, studying, for instance, a set of Picasso's paintings; in this case, two numerical features, corresponding to the percentage of blue or red pixels respectively, might be of interest. In the context of medieval scripts, more sophisticated features related to the shape, orientation or other aspects of writings will be computed and will serve as a basis to automatically compare writing samples. The idea behind this use of features comes from the fact that two identical pictures will be represented by identical feature values. By extension, two similar pictures should also be represented by similar feature values (see Figure 2). A satisfying set of features will thus be one that bears out this property, i.e. one whose values are strongly similar for similar writings and clearly differ for dissimilar writings. Two questions then arise: first, which families of features are relevant; and second, are computed features directly usable, or should they be filtered in order to keep the most relevant ones with regard to our study of writings?

Figure 2: From writing comparison to features comparison.
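To make the notion of a numerical feature concrete, the following minimal sketch (an illustration only, not part of the Graphem tools) computes the two colour-fraction features evoked above for an RGB image held as a NumPy array, and compares two images through the distance between their feature vectors.

```python
import numpy as np

def colour_fraction_features(image):
    """Toy feature vector for an RGB image (H x W x 3, uint8):
    the fraction of pixels whose dominant channel is red, and blue."""
    dominant = image.argmax(axis=2)            # 0 = red, 1 = green, 2 = blue
    n_pixels = dominant.size
    red_fraction = np.count_nonzero(dominant == 0) / n_pixels
    blue_fraction = np.count_nonzero(dominant == 2) / n_pixels
    return np.array([red_fraction, blue_fraction])

# Two pictures with a similar colour balance yield nearby feature vectors,
# so the distance between the vectors is small (hypothetical random images here).
img_a = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)
img_b = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)
print(np.linalg.norm(colour_fraction_features(img_a) - colour_fraction_features(img_b)))
```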

§ 8 Relevant kinds of features are of two sorts. First, the computer science literature offers a large set of features, especially in the pattern recognition area, some of which are known to fit problems similar to ours. Second, specific and novel ways to describe writing samples, and to extract features accordingly, can be developed. We must note that the only earlier work related to the analysis of medieval documents (Aiolli et al. 1999) proposes classification procedures that are not yet completely accepted by palaeographers. Both classical and specific features have been studied in the context of Graphem and are presented in this section.

§ 9 Feature-based comparison relies on the assumption that two similar writings will be represented by two similar sets of features. Before this comparison is made, some pre-treatment might be applied to the raw set of features, for instance by discarding some of them or by weighting them. The reader will notice that a feature assigned a weight of zero is implicitly discarded.

§ 10 If we ask whether the whole set of computed features is directly relevant, the answer is usually no. Three cases can be considered. First, all features might be relevant, but some of them might be over- or under-weighted when evaluating the similarity of two samples. To illustrate this point, consider the case where one feature expresses a measurement in centimetres and another corresponds to microns: some scaling is clearly needed to balance their relative importance. Second, some features might turn out to be irrelevant, i.e. unrelated to the similarity between samples however we consider them; they should then be discarded. Third, a very large number of features might be available, in which case using all of them to compute a similarity can be costly; a filter should then be applied in order to retain a reasonably small yet sufficient subset.
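The first case above, rescaling and weighting features before comparing samples, can be sketched as follows; this is an illustrative example only, with made-up measurements, not the project's actual similarity measure.

```python
import numpy as np

def weighted_distance(f1, f2, scales, weights):
    """Distance between two feature vectors after rescaling each feature
    to a comparable unit and applying a user-defined weight.
    A weight of zero implicitly discards the feature."""
    d = (f1 - f2) / scales                     # e.g. bring centimetres and microns to a common scale
    return np.sqrt(np.sum(weights * d ** 2))

# Hypothetical samples: feature 0 is measured in centimetres, feature 1 in microns.
sample_a = np.array([1.2, 15000.0])
sample_b = np.array([1.5, 12000.0])
scales = np.array([1.0, 10000.0])              # rescale microns to the order of centimetres
weights = np.array([1.0, 1.0])                 # set a weight to 0.0 to drop a feature
print(weighted_distance(sample_a, sample_b, scales, weights))
```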

§ 11 In the next sections we introduce the various approaches and the corresponding features that have been studied so far in Graphem.

A statistical approach: Co-occurrence matrices

§ 12 One of the first tools that has been studied is the co-occurrence matrix, which focuses on the texture of digitized pictures. The texture can be seen as the global impression a picture gives, that is, how its pixels are globally organised. Such properties can be expressed by comparing each pixel with its neighbours according to a given criterion. Figure 3-left illustrates this concept. Each pixel of coordinates (x, y) is compared to its neighbour of coordinates (x+u, y+v). For instance, we might compare their colour. We might also focus on pixels that belong to the contour of characters, and compare the direction or curvature of this contour at these two points. Note that for each possible criterion we consider a limited set of possible values, so the number of combinations (value at the first pixel / value at its neighbour) is also limited. Applying such a comparison to each possible pair of pixels, we can build a table that indicates how frequent each combination of values is. This is illustrated in Figure 3-centre, where each square corresponds to a given combination. In the upper left corner, we find the case where the criterion is low at both the (x, y) and (x+u, y+v) pixels; in the lower right corner, the case where it is high at both. The colour of each square corresponds to the frequency of the combination: red indicates a high frequency, green a low one. On this particular matrix, we can see that the values at the two pixels considered are strongly correlated. If we build several such tables, each corresponding to a given value of u and v, we obtain a super-table, such as the one in Figure 3-right. Its upper left element is the elementary matrix where u=0 and v=0; its lower right-most element corresponds to u=7 and v=7. Such a table is called a co-occurrence matrix, and each of its elements can be considered as a feature.

Figure 3: Co-occurrence matrices.
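As an illustration of how such matrices can be computed in practice, the following sketch uses the graycomatrix function of scikit-image (named greycomatrix in older releases); the quantization into eight levels and the chosen (distance, angle) offsets are arbitrary choices for the example, not the parameters used in Graphem.

```python
import numpy as np
from skimage.feature import graycomatrix   # 'greycomatrix' in older scikit-image releases

# A minimal sketch, assuming a digitized sample already loaded as a greyscale array
# (simulated here). Intensities are quantized to a small number of levels so that
# the number of (value at pixel / value at neighbour) combinations stays limited.
sample = np.random.randint(0, 256, (200, 200), dtype=np.uint8)
levels = 8
quantized = (sample // (256 // levels)).astype(np.uint8)

# One elementary matrix per (u, v) offset; offsets are expressed here as
# (distance, angle) pairs, e.g. distances 1..7 along two axes.
glcm = graycomatrix(quantized,
                    distances=[1, 2, 4, 7],
                    angles=[0, np.pi / 2],
                    levels=levels,
                    normed=True)

# glcm has shape (levels, levels, n_distances, n_angles); every cell is a
# candidate feature, hence the need for selection or reduction afterwards.
print(glcm.shape)           # (8, 8, 4, 2)
features = glcm.ravel()
print(features.size)        # 512 features for a single criterion
```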

§ 13 Such matrices can serve as a good basis to compute the similarity between writing samples (Journet et al. 2005, Moalla et al. 2006). Nevertheless, they are usually large, and once again their direct use is too costly to be considered. This issue is even more acute when each single sample is considered through several criteria and is thus represented by several matrices. Feature selection or dimensionality reduction strategies must then be applied in order to reduce the number of features. These two techniques differ in that feature selection tries to retain the most significant features (Guyon and Elisseeff 2003), while dimensionality reduction aims to replace the input features by a small set of new features, based on combinations of the input features, that retains as much information as possible (Saul et al. 2006). We invite the interested reader to refer to the abundant literature on these two kinds of approaches. The main drawback of feature selection and dimensionality reduction is that the most efficient methods are supervised. Let us recall that supervised methods make their choices given a set of annotated examples, called the learning dataset, so that their output best fits a given property of this learning dataset. In the case of feature selection or dimensionality reduction, supervised methods consider a feature as relevant if it helps to discriminate writings that are known to belong to different groups. In other words, they use a priori knowledge of which writings are similar or not. Such a priori knowledge runs contrary to one of our goals, namely to find an unbiased, computer-generated classification of writing.
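Since supervised selection would reintroduce exactly the a priori knowledge we wish to avoid, an unsupervised reduction such as principal component analysis is a natural illustration. The following sketch assumes a hypothetical matrix of 512 raw features per sample; it is not the project's own reduction pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA

# A minimal, hedged sketch: reduce a large co-occurrence-based feature matrix
# (one row per writing sample) without using any a priori grouping of writings.
rng = np.random.default_rng(0)
features = rng.random((120, 512))          # 120 hypothetical samples, 512 raw features

pca = PCA(n_components=10)                 # keep the 10 combinations carrying most variance
reduced = pca.fit_transform(features)

print(reduced.shape)                        # (120, 10)
print(pca.explained_variance_ratio_.sum())  # share of information retained by the new features
```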

A wave-based approach: Curvelets

§ 14 In image analysis, one theory recognised as close to the human visual system is that of wavelets. Generally speaking, wavelets are designed to split an original signal into several simpler ones with predefined properties, among them good localisation in both space and frequency. In the context of our study, the original signal is a writing sample, and wavelets are a means to filter its content so as to highlight regularities along a given axis; for instance, they can detect vertical straight lines. The main drawback of wavelets, compared to the human visual system, is their lack of directionality: standard wavelets only handle two main directions, horizontal and vertical. This limitation was overcome by, among others, geometrical wavelets, especially the sub-kind called curvelets (Candes et al. 2006). Another great advantage of curvelets is that they are also well localised on the contours of shapes.

Figure 4: A document and several directions analysed by curvelets.
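The directionality limit of standard wavelets can be illustrated with the PyWavelets library: a single 2D decomposition only separates horizontal, vertical and diagonal detail bands, whereas curvelets, which require a dedicated toolbox and are not shown here, analyse many more orientations (Figure 4). The sample image below is simulated.

```python
import numpy as np
import pywt   # PyWavelets

# A minimal sketch: one level of 2D wavelet decomposition of a (simulated)
# writing sample yields an approximation plus three directional detail bands.
sample = np.random.random((256, 256))

approx, (horizontal, vertical, diagonal) = pywt.dwt2(sample, 'haar')

# Energy per detail band: writing dominated by strokes along one axis
# concentrates its detail energy in the corresponding band.
for name, band in [('horizontal', horizontal),
                   ('vertical', vertical),
                   ('diagonal', diagonal)]:
    print(name, float(np.sum(band ** 2)))
```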

§ 15 As we can see in Figure 4, the information contained in the contours of handwriting is extracted step by step, depending on the directions currently analysed. This property allows us to extract orientation information on the one hand and curvature information on the other. Curvature is evaluated by counting the number of directions in which a point of a shape is detected: a pixel detected in a single direction is considered to lie on a straight line, while a pixel detected in a large number of directions is considered a high-curvature point. We were thus able to construct a feature vector which is a co-occurrence matrix of (curvature, orientation) pairs over the image (Joutel et al. 2008).
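The construction of this feature vector can be sketched as a joint histogram of per-pixel curvature and orientation estimates. In the sketch below these estimates are simulated at random and the bin counts are arbitrary; in the actual method they are derived from the curvelet transform (Joutel et al. 2008).

```python
import numpy as np

# A hedged sketch of the (curvature, orientation) feature vector: given, for
# every contour pixel, an estimated orientation and a curvature score (here
# simulated), build their joint 2D histogram.
rng = np.random.default_rng(1)
orientations = rng.uniform(0.0, np.pi, 5000)     # one value per contour pixel
curvatures = rng.integers(1, 16, 5000)           # e.g. number of curvelet directions responding

hist, _, _ = np.histogram2d(curvatures, orientations,
                            bins=[15, 16],
                            range=[[1, 16], [0.0, np.pi]])
feature_vector = hist / hist.sum()               # normalized co-occurrence of (curvature, orientation)
print(feature_vector.shape)                      # (15, 16)
```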

§ 16 Figure 5 presents two pictures and their respective co-occurrence matrices; the feature vector here is the co-occurrence matrix of (curvature, orientation) pairs. Each pixel of a matrix corresponds to a pair of values (curvature, orientation), and its value indicates how many pixels of the source picture correspond to these values, a red pixel representing a high frequency. In the left-most pictures, red pixels are rare, meaning that only a limited number of directions and curvatures can be observed, while in the right-most pictures, red pixels are much more numerous, meaning that the writing sample shows a greater variety of directions and curvatures.

§ 17 Based on these matrices, we were able to evaluate the similarity between writings. In order to provide a greater degree of freedom in the analysis, the user is allowed to modulate the weight given to the elements that are common to the writings and to those that distinguish them.

Figure 5: Two documents and their curvelet feature vectors.

§ 18 Despite their strong mathematical background, curvelets are relatively easy to understand in principle, and some elements of the resulting feature vector can be easily interpreted. Curvelets are also interesting in that they are insensitive to scale factors. On the other hand, their use is computationally expensive. They are also sensitive to noise, and the digitized pictures must be pre-treated in order to remove meaningless elements, such as drawings or dropped initial capital letters.

A metrological approach: Freeman codes

§ 19 When we speak of writing as the content of an image, we implicitly introduce some knowledge about that content, which influences the characteristics used to identify and tag it. Some visual attributes, giving an indication of the shape of the drawings and their distribution, have to be extracted, and several points of view can be considered. The simplest approach consists in looking at the contour of the dark zones. The examination has to be both local (at character level) and global (the whole sample), and to have some statistical significance. Here we study the local direction of the contours, expressing the distribution of the slant with eight values, as well as the local differences of direction, expressing the angles and the evolution of the angle within the strokes. The direction of the angles, like the direction of their sides, is also a characteristic of the lines. At each point of the contour the curvature is computed, and its distribution is again expressed with eight values. These characteristics are extracted at the pixel level of observation, but the analysis can also be done in a coarser way to get rid of the small details associated with the writing tool rather than with the characters written: the contour of the writing can be approximated by a sequence of straight-line segments, and the writing characterised by the length and direction of this set of segments. In total, more than five hundred characteristics are extracted, which can be grouped into fourteen sets interpreted as visual elements (Siddiqi and Vincent 2009, Siddiqi et al. 2009).

§ 20 Figure 6 illustrates how Freeman codes are used to encode contours. Starting from a given pixel (e.g. the upper left one) and walking in a given direction (e.g. clockwise), the step from one pixel to its immediate neighbour can be expressed through an integer from 0 to 7 (centre-left picture). The complete walk results in a vector (e.g. 44331122, meaning two steps down right, followed by two steps right, two up and two up right). For a given writing sample, we can then compute a list of contours and build a histogram of the relative frequency of each direction. Writing samples can thus be compared on the basis of their histograms (right-most picture). In the context of a visual analysis, contours can be coloured according to the direction from each pixel to its neighbour (centre-right picture).

Figure 6: Contours and Freeman codes.
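A minimal sketch of Freeman coding and of the resulting direction histogram is given below; the numbering convention used here (0 for a step to the east, counting counter-clockwise) is one common choice and may differ from the one shown in Figure 6.

```python
import numpy as np

# Freeman 8-direction code: map the step from one contour pixel to the next
# onto an integer in 0..7 (0 = east, counting counter-clockwise), in (row, col)
# image coordinates where rows grow downwards.
STEP_TO_CODE = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
                (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

def freeman_chain(contour):
    """Chain code of an ordered, 8-connected, closed contour given as (row, col) pairs."""
    codes = []
    for (r0, c0), (r1, c1) in zip(contour, contour[1:] + contour[:1]):
        codes.append(STEP_TO_CODE[(r1 - r0, c1 - c0)])
    return codes

def direction_histogram(codes):
    """Relative frequency of each of the 8 directions (the histogram of Figure 6)."""
    counts = np.bincount(codes, minlength=8)
    return counts / counts.sum()

# A tiny square contour, walked clockwise:
square = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
codes = freeman_chain(square)
print(codes)                       # [0, 0, 6, 6, 4, 4, 2, 2]
print(direction_histogram(codes))  # two writing samples can then be compared histogram-to-histogram
```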

§ 21 A distance between writings is then computed as a linear combination of distances corresponding to the different viewpoints. In other words, we can consider that a given sample is represented by a point in a fourteen-dimensional space: the closer two points are in this space, the more similar the two corresponding writing samples are. The weight of each dimension can be adjusted according to the aspects the palaeographer considers most important in a given comparison. This manual adjustment allows the user to introduce some preference, or knowledge, into the process, while remaining relatively unbiased, at least far more so than a purely visual expert comparison.
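The weighted combination of per-viewpoint distances can be sketched as follows; the group names, vectors and weights are hypothetical and only illustrate how a palaeographer's preferences enter the comparison.

```python
import numpy as np

def combined_distance(sample_a, sample_b, weights):
    """Linear combination of per-viewpoint distances.
    sample_a / sample_b: dicts mapping each feature group (fourteen in the actual
    system) to its feature vector; weights: dict with the same keys."""
    total = 0.0
    for group, w in weights.items():
        total += w * np.linalg.norm(sample_a[group] - sample_b[group])
    return total

# Hypothetical illustration with two of the fourteen groups:
a = {'contour_direction': np.array([0.2, 0.1, 0.4, 0.3]),
     'curvature': np.array([0.6, 0.4])}
b = {'contour_direction': np.array([0.25, 0.05, 0.45, 0.25]),
     'curvature': np.array([0.5, 0.5])}
weights = {'contour_direction': 1.0, 'curvature': 2.0}   # emphasise curvature
print(combined_distance(a, b, weights))
```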

A centreline-tracking approach: The median axis

§ 22 In the previous approach the level of observation was very fine, at the pixel level; the characteristics were nevertheless statistical and encapsulated the global impression the palaeographer might have while looking at the document. Another approach consists in trying to understand how the writing sample has been produced and what shapes are involved: the most frequently occurring shapes may then characterise a sample. This approach aims at extracting elementary shapes that have some sense in the context of writing. Here the vision is more global and stands at the stroke level. As the goal is not to recognise the writing but to understand the shapes that appear, segmentation into characters is of minor interest; segmentation into strokes is more relevant. The strokes are rather short, and the boundary between two neighbouring strokes is often linked to a change in the direction of the line or to a change in its width. The width is associated with the points of the median axis. To identify the median axis we have developed a method, applied to grey-level images, that extracts the median axis of the writing line without any reference to the contour. Starting from an extreme point, the drawn line is followed according to both the curvature of the already detected axis and the evolution of the grey level along the axis.

Figure 7: Median axis.

§ 23 In order to study the strokes that form characters, their median axis is computed. Traditional methods, such as skeletonizing, do not fit, as they lead to elements that are frequently smaller than strokes, due to crossings and/or alteration of pigments (pictures on the far left in Figure 7). A more robust strategy has been developed that relies on the stroke direction (centre picture). Once each stroke has been identified, it can be highlighted by colouring (picture on the right).

§ 24 The elements of the segmentation are extracted and can be sorted according to their shapes. A graph-colouring process sorts the shapes, and a codebook is built that captures the characteristics of a writing. In order to build this codebook, a clustering process first splits the shapes into a limited number of coherent groups; this set of coherent groups forms the codebook. Comparing the codebooks associated with two writings then amounts to comparing the writings themselves. Codebooks are compared on a group-by-group basis: each group of the first codebook is paired with the most similar group of the second, and the global distance is mostly driven by the least similar paired groups. As a consequence, what differs between writings plays a large role.
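As a rough illustration of the codebook idea, the sketch below substitutes a generic k-means clustering for the graph-colouring process described above, builds one codebook per writing from hypothetical stroke descriptors, and compares the codebooks so that the worst-matched groups dominate the distance.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical stroke descriptors (6 values per stroke) for two writing samples.
rng = np.random.default_rng(2)
strokes_a = rng.random((300, 6))
strokes_b = rng.random((280, 6))

# One codebook per writing: the cluster centres of its stroke descriptors.
codebook_a = KMeans(n_clusters=10, n_init=10, random_state=0).fit(strokes_a).cluster_centers_
codebook_b = KMeans(n_clusters=10, n_init=10, random_state=0).fit(strokes_b).cluster_centers_

def codebook_distance(cb_a, cb_b):
    """Pair each group of the first codebook with the most similar group of the
    second; the global distance is driven by the worst-matched pair, so that
    what differs between the writings dominates the comparison."""
    pairwise = np.linalg.norm(cb_a[:, None, :] - cb_b[None, :, :], axis=2)
    best_match = pairwise.min(axis=1)   # distance of each group to its closest counterpart
    return best_match.max()

print(codebook_distance(codebook_a, codebook_b))
```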

Graphical exploration of writing samples

Content-based image retrieval

§ 25 Two end-user tasks have been identified in Graphem: the first consists in offering a means to retrieve similar writing samples, the second in exploring the space of writings. Let us focus on the first task. Retrieving similar writing samples based on their digital pictures belongs to a field of computer science called Content-Based Image Retrieval, or CBIR.

§ 26 Generally speaking, the simplest scenario of content-based image retrieval is that of global example-based search: the user chooses an image example and the system determines the images of the base with the most similar visual appearance. The principle of this approach was established by Swain and Ballard (Swain and Ballard 1991), and served as the fundamental principle of many systems that deal with natural images, such as QBIC (Flickner et al. 1995), Photobook (Pentland et al. 1995), MARS (Rui et al. 1997) and the KIWI system (Loupias and Bres 2001). All these systems are based on the same kinds of features (colours, shapes and textures).

§ 27 As we have seen in the previous section, we had to adapt or develop some writing-specific features. Based on these features, each sample is associated with an individual signature, the similarity between samples being computed on the basis of their signatures. A CBIR tool has been developed, in which the user can propose a writing sample (called the query sample). The signature of this sample is computed, and a database of known samples is then searched in order to retrieve the samples with the most similar signatures.
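The core retrieval step can be sketched in a few lines: compute the distance between the query signature and every signature in the known-sample database, and return the closest entries. The signature dimensions and database size below are hypothetical.

```python
import numpy as np

def retrieve_similar(query_signature, database_signatures, k=10):
    """Rank the known samples by distance of their signature to the query's,
    and return the indices and distances of the k most similar ones."""
    distances = np.linalg.norm(database_signatures - query_signature, axis=1)
    order = np.argsort(distances)
    return order[:k], distances[order[:k]]

# Hypothetical database of 800 signatures with 64 features each:
rng = np.random.default_rng(3)
database = rng.random((800, 64))
query = rng.random(64)
indices, dists = retrieve_similar(query, database, k=10)
print(indices, dists)
```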

§ 28 For this tool to be efficient, both the signature and the way it is used in the similarity measure must be adequate. It is difficult to claim exhaustiveness in the description of handwritten shapes for retrieval, so it is essential to work with expert users who are able to validate the measures that appear most relevant.

Figure 8: Content-based image retrieval.

§ 29 Figure 8 presents the interface of the CBIR tool. In the upper-left corner we can see the name of the query sample (in fact, the name of its digitized picture); this picture is displayed on the upper-right side. On the lower-left side, we see the list of the most similar samples retrieved from the known-sample database. Note that the ten most similar pictures are listed, together with their names and distances to the query sample. Some much less similar samples are also listed (e.g. the 25th, 100th, etc.), in order to check the global consistency of the similarity measure. Retrieved samples are displayed on the lower-right side (here, the third most similar sample). The middle-left subwindow shows the set of features that have been used; here, each feature is given the same weight, or significance degree. In this version of our tool, weights can be modified manually by the user, in order to adapt the resulting similarity measure to the samples currently considered.

§ 30 Once a satisfactory set of features has been identified, the user can still adapt the similarity measure. For this purpose, two ways of interacting have been implemented in our tool. First, the user can directly modify the weight (or significance degree) of each feature in the similarity measure. This can be done if the set of features is small and if each feature has a relatively clear meaning from the user's point of view. A relevance feedback approach has also been proposed in order to produce better results. Relevance feedback consists in ranking the retrieved samples (Rui et al. 1997). Such a ranking can consist, for instance, of a three-value feedback: given that retrieved samples are ordered from the most to the least similar with regard to the query sample, each of them can be given a manual score indicating whether it is actually more similar, less similar, or correctly ranked. The weight of each feature can then be automatically modified to take this feedback into account.
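One simple way such a three-value feedback can be turned into a weight update is sketched below; this is an illustrative scheme in the spirit of relevance feedback (Rui et al. 1997), not necessarily the exact rule implemented in our tool.

```python
import numpy as np

def update_weights(query, retrieved, feedback, weights, step=0.1):
    """Adjust feature weights from a three-value feedback:
    +1 = actually more similar, 0 = correctly ranked, -1 = actually less similar.
    Features on which a 'more similar' sample disagrees with the query lose weight
    (they wrongly pushed it away); features on which a 'less similar' sample
    disagrees gain weight (they rightly separate it)."""
    new_weights = weights.copy()
    for sample, vote in zip(retrieved, feedback):
        if vote == 0:
            continue
        disagreement = np.abs(sample - query)
        new_weights -= vote * step * disagreement
    new_weights = np.clip(new_weights, 1e-6, None)
    return new_weights / new_weights.sum()

# Hypothetical usage: five retrieved samples described by four features.
rng = np.random.default_rng(4)
query = rng.random(4)
retrieved = rng.random((5, 4))
feedback = [1, 0, -1, 1, 0]
weights = np.full(4, 0.25)
print(update_weights(query, retrieved, feedback, weights))
```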

§ 31 Until now, the tools we developed have been tested on a known-sample set of over 800 images from the IRHT medieval database. Several kinds of features have been exploited this way, the most recent being based on curvelets.

Spatial exploration

§ 32 From physical writing samples we moved to a digital, numerical representation, which allowed for the automatic extraction and manipulation of features. At this point, we need to go back to a more physical world and let the palaeographer observe the computer's work and proposals. As the underlying way to compare writings consists in determining how near they are to one another, a spatial analysis is a natural choice. Spatial analysis is a fairly common task, which consists in representing samples as points in a 2D or 3D space, each point corresponding to a sample and the distance between two points reflecting the dissimilarity between the two underlying samples. Nevertheless, basic tools lack interactivity; we therefore developed a tool, named Explorer3D, that focuses on interactivity.

Figure 9: A spatial organisation of writing.

§ 33 Based on a given set of features, a spatial view can be computed that reflects the similarities or dissimilarities observed among the feature-based representations of the writings. Each point of this 3D view corresponds to a writing sample. For better readability, several facilities are offered; for instance, the writing sample can be displayed beside the corresponding point (see Figure 9, left).

§ 34 The input data consist of the set of features extracted from the digital samples, and the spatial projection is computed from these features. Several projection techniques are available, some assuming that an a priori classification of samples is given, others considering only the purely digital facts (i.e. the features). A projection that uses an a priori classification of samples, such as linear discriminant analysis (Fisher 1936), will move classes away from each other; nevertheless, this option rather contradicts the study's goals. Conversely, a blind projection generally produces a less interpretable space. In order to help the palaeographer navigate this space we propose several tools, among which are local zoom and interactive constraint definition.
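The contrast between a blind projection and one that exploits an a priori classification can be sketched with standard techniques, here principal component analysis versus linear discriminant analysis (Fisher 1936); the feature matrix and labels below are simulated, and Explorer3D's own projection methods are not limited to these two.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical feature vectors for 90 writing samples, plus an a priori
# classification into 4 classes (used only by the supervised projection).
rng = np.random.default_rng(5)
features = rng.random((90, 40))
labels = rng.integers(0, 4, 90)

blind_coords = PCA(n_components=3).fit_transform(features)                          # no a priori
supervised_coords = LinearDiscriminantAnalysis(n_components=3).fit_transform(features, labels)

print(blind_coords.shape, supervised_coords.shape)   # (90, 3) (90, 3): 3D coordinates for display
```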

§ 35 Local zooms can be performed, producing a local and thus more accurate organisation of writings (Figure 9, centre: a spherical selection of a subset of writings; right: the local organisation of this subset). A local zoom consists in selecting a subset of (nearby) objects and then projecting them into a new space, without considering the outer objects. Such zooms allow the user to go from a global observation to a local one that highlights subtler structures in the space of writing samples.

§ 36 Interactive constraint definition (Martin et al. 2010) is a powerful, innovative tool, which allows the user to modify the projection step by step (Figure 10). For this purpose, the user can indicate directly in the 3D space that some pairs of samples are misplaced (they appear either too near or too far away) and should be moved accordingly. From such constraints, a new projection is computed, on which additional constraints might be given, and so forth. In order to detect such anomalies, several visual facilities have been developed. First of all, pictures are dynamically displayed in the 3D view when the mouse cursor passes over the corresponding point. Second, one can concentrate on local subsets of neighbours, for instance by hiding out-of-scope objects.

Figure 10: Visual exploration and interaction.

§ 37 Starting from a raw 3D view, the user can interact in order to introduce constraints that move pairs of objects closer together or further apart, based on a visual observation of the corresponding pictures. Constraints are displayed as links between pairs of objects (left-most picture). Once a set of constraints has been defined, a new 3D projection is computed based on them (centre picture). The user can iteratively add new constraints until a satisfying global organisation is reached. Once such a state has been reached, an automated classification can be computed (right-most picture). Various clustering methods have been implemented. Once coherent groups have been produced, they can be viewed either at a high level of detail, colouring each object according to its group, or at a more synthetic level, hiding individual objects and replacing each group with a representative object, which can currently be an ellipsoid or a convex envelope.
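One simple way to mimic constraint-driven re-projection (not the actual Explorer3D algorithm of Martin et al. 2010) is to scale the dissimilarities of the constrained pairs and recompute a 3D embedding, for instance with multidimensional scaling:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

# Hypothetical feature vectors for 60 writing samples and their pairwise dissimilarities.
rng = np.random.default_rng(6)
features = rng.random((60, 40))
dissimilarity = squareform(pdist(features))

# User constraints as (i, j, factor): factor < 1 pulls a pair together,
# factor > 1 pushes it apart. The indices here are arbitrary examples.
constraints = [(3, 17, 0.3), (5, 42, 2.5)]
for i, j, factor in constraints:
    dissimilarity[i, j] *= factor
    dissimilarity[j, i] *= factor

# Recompute a 3D embedding from the adjusted dissimilarities.
embedding = MDS(n_components=3, dissimilarity='precomputed', random_state=0)
coords = embedding.fit_transform(dissimilarity)
print(coords.shape)    # (60, 3), ready for interactive 3D display
```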

Summary

§ 38 In this paper we introduced several tools to help the palaeographer study writings, both in terms of retrieving similar writings and in terms of global structuring. Among the relevant features that can be extracted from writing samples, we have presented four methods, namely co-occurrence matrices, curvelets, Freeman codes and the median axis. While the first two methods capture a global perception of the writing, the last is based on strokes and thus relies on a local perception. The third, which is based on character contours, also focuses on local facts, but is exploited at a more global level by means of an image-level statistical synthesis.

§ 39 In order to exploit these features, we have proposed several interactive graphical tools. The first tool we introduced in this paper focuses on image retrieval: this tool retrieves from a database the writing samples that are similar to a given input writing sample. The user can interact with the tool in order to refine the similarity measure and thus improve the set of retrieved writings. The second tool we introduced allows for a spatial view of the global organisation of writings. Such a tool can help to study the organisation of writings at several levels, from the global to more local views. Several ways to interact are proposed that allow the user to estimate and adapt the spatial organisation of writings.

§ 40 In the near future, all these tools and methods will be integrated in a single platform, so as to offer palaeographers uniform access to their full capabilities. Making this platform a powerful, comprehensible, and durable tool requires both fundamental and technical efforts. The next iteration of Graphem might consist of the definition and development of a semi-automated transcription tool that supports various kinds of medieval writing styles. In the longer term, we believe that interaction tools should be further developed in order to allow for a comprehensive study of numerical features, in a way that would allow them to be associated with some kind of verbally explainable description of the writings, as suggested by Peter Stokes (Stokes 2009).


Acknowledgements

We gratefully acknowledge the support of the French National Research Agency MDCO Program under contract ANR-07-MDCO-006.

Works cited

Aiolli, Fabio et al. 1999. SPI: a System for Palaeographic Inspections. AIIA Notizie, 12(4): 34-38.

Aiolli, Fabio and Arianna Ciula. 2009. A case study on the System for Paleographic Inspections (SPI): challenges and new developments. Frontiers in Artificial Intelligence and Applications, 196: 53-66.

Aussems, Mark and Axel Brink. 2009. Digital palaeography. Codicology and palaeography in the digital age 2, ed. F. Fischer et al, 293-308. Norderstedt: Books on Demand.

Candes, Emmanuel et al. 2006. Fast Discrete Curvelet Transforms. Multiscale Modeling and Simulation, 5 (3): 861-99.

Fisher, Ronald. 1936. The use of multiple measurements in taxonomic problems. Annals of Eugenics, 7: 179-88.

Flickner, Myron et al. 1995. Query by Image and Video Content: The QBIC System. IEEE Computer, 28 (9): 23-32.

Guyon, Isabelle and André Elisseeff. 2003. An introduction to variable and feature selection. Journal of Machine Learning Research, 3: 1157-82.

Hofmeister, Wernfried et al. 2009. Forschung am Rande des paläographischen Zweifels: Die EDV-basierte Erfassung individueller Schriftzüge im Projekt DamalS. In Codicology and Palaeography in the Digital Age, ed. M. Rehbein et al. 261-92. Norderstedt: Books on Demand.

Joutel, Guillaume, et al. 2008. A complete pyramidal geometrical scheme for text based image description and retrieval. In International Conference on Image Analysis and Signal Processing (ICIAR). 471-480. Springer.

Journet, Nicholas, et al. 2005. Ancient printed documents indexation: a new approach. In Pattern Recognition and Data Mining: Third International Conference on Advances in Pattern Recognition (ICAPR 2005), Bath, UK, August 22-25, 2005, Proceedings. 513-522. Springer.

Loupias, Etienne and Stéphane Bres. 2001. Key points based indexing for pre-attentive similarities: The KIWI System, Pattern Analysis and Applications, Special Issue on Image Indexing, 4: 200-214.

Martin, Lionel, et al. 2010. Interactive and progressive constraint definition for dimensionality reduction and visualization. Advances in Knowledge Discovery and Management 2, ed. F. Guillet et al.: forthcoming. Springer.

Moalla, Ikram et al. 2006. Contribution to the discrimination of the medieval manuscript texts, Lecture Notes in Computer Science, 3872: 25-37.

Pentland, Alex P., Rosalind W. Picard, and Stan Sclaroff. 1995. Photobook: content-based manipulation of image databases. International Journal of Computer Vision, 18 (3): 233-54.

Rui, Yong, Thomas S. Huang, and Sharad Mehrotra. 1997. Content-based Image Retrieval with Relevance Feedback in MARS. Proceedings of IEEE Int. Conf. on Image Processing (ICIP'97). 815-818.

Saul, Lawrence K., et al. 2006. Spectral methods for dimensionality reduction. In Semisupervised learning, ed. O. Chapelle, B. Schölkopf and A. Zien. 293-308. Cambridge, MA: MIT Press.

Siddiqi, Imran et al. 2009. Contour based features for the classification of ancient manuscripts. Proceedings of the 14th Conference of the International Graphonomics Society (IGS'09), ed. J.G. Vinter and J-L Velay, 226-229. Dijon: Vidonne Press.

Siddiqi, Imran and Nicole Vincent. 2009. A set of chain code based features for writer recognition. In Proceedings of the Tenth International Conference on Document Analysis and Recognition (ICDAR'09). 981-985. Los Alamitos, CA: IEEE.

Stokes, Peter. 2009. Computer-aided palaeography, present and future. Codicology and palaeography in the digital age, ed. M. Rehbein et al. 309-338. Norderstedt: Books On Demand.

Swain, Michael J., and Dana H. Ballard. 1991. Color indexing, International Journal of Computer Vision, 7(1):11-32.

Wolf, Lior, et al. 2011. Computerized paleography: tools for historical manuscripts. IEEE International Conference on Image Processing.

Authors

Florence Cloppet (Laboratoire d'Informatique, Université Paris Descartes (LIPADE))
Hani Daher (Laboratoire d'Informatique, Université Paris Descartes (LIPADE); Laboratoire d'Informatique en Image et Systèmes d'information (LIRIS), Institut National des Sciences Appliquées (INSA), Lyon)
Véronique Églin (Laboratoire d'Informatique en Image et Systèmes d'information (LIRIS), Institut National des Sciences Appliquées (INSA), Lyon)
Hubert Emptoz (Laboratoire d'Informatique en Image et Systèmes d'information (LIRIS), Institut National des Sciences Appliquées (INSA), Lyon)
Matthieu Exbrayat (Laboratoire d'Informatique Fondamentale d'Orléans (LIFO), Université d'Orléans)
Guillaume Joutel (Laboratoire d'Informatique en Image et Systèmes d'information (LIRIS), Institut National des Sciences Appliquées (INSA), Lyon)
Frank Lebourgeois (Laboratoire d'Informatique en Image et Systèmes d'information (LIRIS), Institut National des Sciences Appliquées (INSA), Lyon)
Lionel Martin (Laboratoire d'Informatique Fondamentale d'Orléans (LIFO), Université d'Orléans)
Ikram Moalla (Laboratoire d'Informatique en Image et Systèmes d'information (LIRIS), Institut National des Sciences Appliquées (INSA), Lyon)
Imran Siddiqi (Laboratoire d'Informatique, Université Paris Descartes (LIPADE))
Nicole Vincent (Laboratoire d'Informatique, Université Paris Descartes (LIPADE))

Licence

Creative Commons Attribution 4.0

Peer Review

This article has been peer reviewed.
