Undogmatic Literary Annotation with CATMA in: Annotations in Scholarly Editions and Research

Introduction: Digital Literary Studies has long engaged with the challenges of representing ambiguity, contradiction and polyvocal readings of literary texts. This book chapter describes a web-based tool called CATMA which promises a “low-threshold” approach to digitally encoded text interpretation. CATMA has a long trajectory based on a ‘standoff’ approach to markup, somewhat provocatively described by its creators as “undogmatic”, which stands in contrast to more established systems for text representation in digital scholarly editing and publishing such as XML markup or the Text Encoding Initiative (TEI). Standoff markup involves assigning numbers to each character of a text and then using those numbers as identifiers to store interpretation externally. This approach allows for “multiple, over-lapping and even taxonomically contradictory annotations by one or more users” and avoids some of the rigidity which other approaches sometimes imply. An editor working with CATMA is able to create multiple independent annotation cycles, and even to specify which interpretation model was used for each. The tool also allows for an impressive array of analysis and visualization possibilities.
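
To make the standoff principle concrete, here is a minimal, hypothetical sketch (not CATMA's actual data model): the text itself is never modified, and each annotation refers to it only by character offsets, so several overlapping and even contradictory readings can coexist over the same span.

```python
# Minimal illustration of standoff annotation: the text stays untouched;
# each annotation only records character offsets plus an interpretive label.
# This is a simplified sketch, not CATMA's internal data model.

text = "Call me Ishmael. Some years ago - never mind how long precisely -"

# Two annotators tag overlapping, even contradictory, spans of the same text.
annotations = [
    {"annotator": "A", "start": 0, "end": 16, "tag": "narrative_frame"},
    {"annotator": "B", "start": 8, "end": 15, "tag": "unreliable_narrator"},
    {"annotator": "B", "start": 8, "end": 15, "tag": "reliable_narrator"},  # contradicts the tag above
]

def spans_for(offset):
    """Return every annotation covering a given character offset."""
    return [a for a in annotations if a["start"] <= offset < a["end"]]

for a in annotations:
    print(a["annotator"], a["tag"], "->", repr(text[a["start"]:a["end"]]))

print(spans_for(10))  # offset 10 sits inside all three overlapping annotations
```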

Recent iterations of CATMA have developed approaches which aim to bridge the gap between ‘close’ and ‘distant’ reading by providing scalable digital annotation and interpretation involving “semantic zooming” (compared by the authors to the experience of exploring an interactive map). The latest version also brings greater automation (currently for German only) to the annotation of grammatical tense, temporal signals and parts of speech, which offers potentially significant effort savings and a wider range of markup review options. Greater attention is also paid to different kinds of interpretation activities, through the three CATMA annotation modes of ‘highlight’, ‘comment’ and ‘annotate’, and to overall workflow considerations, with fine-grained access options mapping to common editorial roles and workflows.

I would have welcomed greater reflection in the book chapter on sustainability: how an editor can port their work to other digital research environments for use with other tools. While CATMA does allow export to other formats (such as TEI), it is not clear quite how effective this is, that is, how well its interpretation structures bind to other digitally mediated representation systems.

What is most impressive about CATMA, and more generally about the work of its creators, the forTEXT research group, is how firmly the thinking behind the tool is embedded in humanities (and in particular literary) scholarship and theory. The group’s long-standing and deeply reflective engagement with the concerns of literary studies is well captured in this well-crafted and highly engaging book chapter.

[Click ‘Read more’ for the full post!]

Fragmentarium: a Model for Digital Fragmentology

Introduction: One of the major challenges of digital data workflows in the Arts and Humanities is that resources which belong together, in extreme cases such as this one even parts of dismembered manuscripts, are hosted in different geographical and institutional silos. Combining IIIF with a MySQL database, Fragmentarium provides a user-friendly yet standardized, open workspace for the virtual reconstruction of medieval manuscript fragments. Lisa Fagin Davis’s blog post gives contextualized insights into the potential of Fragmentarium and into how, as she writes, “technology has caught up with our dreams”.
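
As a rough illustration of how IIIF enables this kind of virtual reconstruction, the sketch below assembles a minimal, simplified Presentation-style manifest whose canvases point to images held by two different institutions; the institutions, URLs and identifiers are invented for the example and do not come from Fragmentarium itself.

```python
import json

# Hypothetical image services at two different holding institutions.
# With IIIF, the fragments never leave their home repositories;
# a manifest simply references them side by side for viewing together.
fragments = [
    ("Library A, binding fragment, recto", "https://iiif.library-a.example/image/frag-12r"),
    ("Library B, binding fragment, verso", "https://iiif.library-b.example/image/ms-0345-v"),
]

canvases = []
for i, (label, service) in enumerate(fragments, start=1):
    canvases.append({
        "@id": f"https://reconstruction.example/canvas/{i}",
        "@type": "sc:Canvas",
        "label": label,
        "images": [{
            "@type": "oa:Annotation",
            "motivation": "sc:painting",
            "resource": {"@id": f"{service}/full/full/0/default.jpg", "@type": "dctypes:Image"},
            "on": f"https://reconstruction.example/canvas/{i}",
        }],
    })

manifest = {
    "@context": "http://iiif.io/api/presentation/2/context.json",
    "@id": "https://reconstruction.example/manifest.json",
    "@type": "sc:Manifest",
    "label": "Virtual reconstruction of a dismembered manuscript (illustrative)",
    "sequences": [{"@type": "sc:Sequence", "canvases": canvases}],
}

print(json.dumps(manifest, indent=2))
```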

GROBID: when data extraction becomes a suite

Introduction: GROBID is a well-known open-source tool in the field of Digital Humanities, originally built to extract and parse bibliographic metadata from scholarly works. The acronym stands for GeneRation Of BIbliographic Data.
Shaped by use cases and adoption in a range of different DH and non-DH settings, the tool has progressively evolved into a suite of technical features now applied to various fields, such as journals, dictionaries and archives.
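
For readers who want a feel for how the suite is typically used, the snippet below sketches a call to GROBID's REST service, here assumed to be running locally on its default port 8070; the PDF file name is only an example.

```python
import requests

# Send a PDF to a locally running GROBID service and get back TEI XML
# with the parsed bibliographic references. Assumes GROBID is running
# on its default port (for instance via the official Docker image).
GROBID_URL = "http://localhost:8070/api/processReferences"

with open("example-paper.pdf", "rb") as pdf:      # example file name
    response = requests.post(GROBID_URL, files={"input": pdf})

response.raise_for_status()
tei_xml = response.text      # TEI-encoded <biblStruct> elements
print(tei_xml[:500])
```
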
[Click ‘Read more’ for the full post!]

When history meets technology. impresso: an innovative corpus-oriented perspective.

Historical newspapers, already available in many digitized collections, may represent a significant source of information for the reconstruction of events and their backgrounds, enabling historians to cast new light on facts and phenomena, as well as to advance new interpretations. A collaboration between teams in Lausanne, at the University of Zurich and at C2DH Luxembourg, the ‘impresso – Media Monitoring of the Past’ project aims to offer an advanced corpus-oriented answer to the increasing need to access and consult collections of digitized historical newspapers.
[…] Thanks to a suite of computational tools for data extraction, linking and exploration, impresso aims to overcome the traditional keyword-based approach through the application of advanced techniques, from lexical processing to semantically deepened n-grams, and from data modelling to interoperability.
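
As a very small illustration of why n-grams can carry more than isolated keywords, the generic sketch below (not impresso's actual pipeline) extracts word bigrams from a newspaper-style sentence; matching the bigram "general strike" is already more discriminating than matching the keyword "strike" alone.

```python
# Generic n-gram extraction, not impresso's own tooling: multiword units
# such as "general strike" are more discriminating than single keywords.
import re
from collections import Counter

sentence = "The general strike paralysed the city, and the strike committee met at dawn."

tokens = re.findall(r"[a-zA-Z']+", sentence.lower())
bigrams = Counter(zip(tokens, tokens[1:]))

print(bigrams[("general", "strike")])    # 1
print(bigrams[("strike", "committee")])  # 1
```
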
[Click ‘Read more’ for the full post!]

Automatic annotation of incomplete and scattered bibliographical references in Digital Humanities papers

The reviewed article presents the BILBO project and illustrates the application of several machine-learning techniques to the constitution of suitable reference corpora and the construction of efficient annotation models. In this way, it proposes solutions to the problem of extracting and processing useful information from bibliographic references in digital documents, whatever their bibliographic style. It demonstrates the usefulness and high accuracy of CRF techniques, which involve finding the most effective set of features (of three types: input, local and global features) for a given corpus of well-structured bibliographic data (with labels such as surname, forename or title). Moreover, the approach is not only efficient when applied to such traditional, well-structured bibliographic data sets; it also makes an original contribution to the processing of more complicated, less-structured references, such as those contained in footnotes, by applying an SVM with new features for sequence classification.
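
To make the CRF step of such a pipeline more concrete, here is a toy sketch using the independent sklearn-crfsuite library rather than BILBO's own code: each token of a reference is turned into a small feature dictionary, and the CRF learns to assign labels such as surname, forename or title to token sequences.

```python
# Toy CRF labelling of a bibliographic reference, in the spirit of BILBO but
# using the third-party sklearn-crfsuite library (pip install sklearn-crfsuite).
import sklearn_crfsuite

def token_features(tokens, i):
    """Simple input/local/global-style features for one token."""
    tok = tokens[i]
    return {
        "lower": tok.lower(),                  # input feature
        "is_capitalised": tok[:1].isupper(),   # local feature
        "is_digit": tok.isdigit(),
        "position": i / len(tokens),           # global feature (position in sequence)
    }

# One tiny hand-labelled training reference (real corpora contain thousands).
tokens = ["Smith", ",", "John", ".", "Digital", "Editing", ".", "2019", "."]
labels = ["surname", "punct", "forename", "punct", "title", "title", "punct", "date", "punct"]

X_train = [[token_features(tokens, i) for i in range(len(tokens))]]
y_train = [labels]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X_train, y_train)

# Label an unseen (equally toy) reference.
new = ["Doe", ",", "Jane", ".", "Annotation", "Models", ".", "2021", "."]
X_new = [[token_features(new, i) for i in range(len(new))]]
print(crf.predict(X_new)[0])
```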

[Click ‘Read more’ for the full post.]

Diseño de corpus literario para análisis cuantitativos

Introduction: In this article, José Calvo Tello offers a methodological guide to data curation for creating a literary corpus for quantitative analysis. This brief tutorial covers all stages of the curation and creation process and guides the reader through practical cases from Hispanic literature. The author deals with every step in the creation of a literary corpus for quantitative analysis: from digitization, metadata and automatic processes for cleaning and mining the texts, to licenses, publishing and archiving/long-term preservation.
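
As a hedged illustration of the kind of automatic cleaning and metadata steps such a tutorial walks through (the file layout and metadata fields here are invented for the example, not taken from the article), a small corpus-preparation script might look like this:

```python
# Illustrative corpus-preparation step: strip digitisation noise from plain-text
# files and record basic metadata in a CSV table. Directory names and metadata
# fields are invented for this example.
import csv
import re
from pathlib import Path

RAW_DIR = Path("raw_texts")      # e.g. OCR output, one .txt file per novel
CLEAN_DIR = Path("clean_texts")
CLEAN_DIR.mkdir(exist_ok=True)

rows = []
for path in sorted(RAW_DIR.glob("*.txt")):
    text = path.read_text(encoding="utf-8")
    text = re.sub(r"-\n", "", text)      # rejoin words hyphenated at line breaks
    text = re.sub(r"[ \t]+", " ", text)  # collapse runs of spaces and tabs
    (CLEAN_DIR / path.name).write_text(text, encoding="utf-8")
    rows.append({"file": path.name, "tokens": len(text.split())})

with open("metadata.csv", "w", newline="", encoding="utf-8") as fh:
    writer = csv.DictWriter(fh, fieldnames=["file", "tokens"])
    writer.writeheader()
    writer.writerows(rows)
```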