OpenMethods Spotlights showcase the people and epistemic reflections behind Digital Humanities tools and methods. Here you can find brief interviews with the creators of the blogs or tools highlighted on OpenMethods, to humanize and contextualize them. In the first episode, Alíz Horváth talks with Hilde de Weerdt of Leiden University about MARKUS, a tool that offers a variety of functionalities for the markup, analysis, export, linking, and visualization of texts in multiple languages, with a special focus on Chinese and now Korean as well.
Category: Enrichment
Enrichment refers to the activity of adding information to an object of enquiry, by making its origin, nature, structure, meaning, or elements explicit. This activity typically follows the capture of the object.
East Asian studies are still largely underrepresented in digital humanities. Part of the reason for this is the relative lack of tools and methods that can be used smoothly with non-Latin scripts. MARKUS, developed by Brent Ho within the framework of the Communication and Empire: Chinese Empires in Comparative Perspective project led by Hilde de Weerdt at Leiden University, is a comprehensive tool that helps mitigate this issue. Selected as a runner-up in the category “Best tool or suite of tools” in the DH2016 awards, MARKUS offers a variety of functionalities for the markup, analysis, export, linking, and visualization of texts in multiple languages, with a special focus on Chinese and now Korean as well.
Introduction: GROBID is an already well-known open source tool in the field of Digital Humanities, originally built to extract and parse bibliographical metadata from scholarly works. The acronym stands for GeneRation Of BIbliographic Data.
Shaped by use cases and adaptations to a range of different DH and non-DH settings, the tool has progressively evolved into a suite of technical features that is now applied in a variety of fields, such as journals, dictionaries, and archives.
[Click ‘Read more’ for the full post!]
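To give a concrete impression of how GROBID is typically used, here is a minimal sketch that sends a PDF to a locally running GROBID service and receives TEI-XML with the parsed header metadata. It assumes the service is running on its default port (8070); the file name is a placeholder.

```python
# Minimal sketch: posting a PDF to a local GROBID service and receiving
# TEI-XML with the parsed bibliographic header metadata.
# Assumes GROBID runs on its default port 8070; "paper.pdf" is a placeholder.
import requests

GROBID_URL = "http://localhost:8070/api/processHeaderDocument"

with open("paper.pdf", "rb") as pdf:
    response = requests.post(GROBID_URL, files={"input": pdf})

response.raise_for_status()
print(response.text)  # TEI-XML containing title, authors, affiliations, etc.
```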
Introduction: Standardized metadata, linked meaningfully using semantic web technologies, are prerequisites for cross-disciplinary Digital Humanities research as well as for FAIR data management. In this article from the Open Access journal o-bib, members of the project „GND for Cultural Data“ (GND4C) describe how the Gemeinsame Normdatei (GND; engl. Integrated Authority File), a widely accepted vocabulary for description and information retrieval in the library world, is maintained by the German National Library and how it supports semantic interoperability and the reuse of data. The article also explores how the GND can be utilized and advanced collaboratively, integrating the perspectives of its multidisciplinary stakeholders, including the Digital Humanities. For background reading, the training resources „Controlled Vocabularies and SKOS“ (https://campus.dariah.eu/resource/controlled-vocabularies-and-skos) and „Formal Ontologies“ (https://campus.dariah.eu/resource/formal-ontologies-a-complete-novice-s-guide) are of interest.
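As a rough illustration of how GND records can be consumed programmatically, the sketch below queries the lobid-gnd JSON API, one common entry point to GND data as linked open data; the exact endpoint and field names are assumptions to be checked against the current API documentation.

```python
# Hedged sketch: looking up GND authority records via the lobid-gnd JSON API.
# Endpoint and field names ("member", "gndIdentifier", "preferredName") are
# assumptions to verify against the current lobid-gnd documentation.
import requests

resp = requests.get(
    "https://lobid.org/gnd/search",
    params={"q": "Johann Wolfgang von Goethe", "format": "json"},
)
resp.raise_for_status()

for record in resp.json().get("member", [])[:5]:
    # Each record carries a stable GND identifier that can be reused
    # as a linked-data URI in one's own metadata.
    print(record.get("gndIdentifier"), record.get("preferredName"))
```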
Historical newspapers, already available in many digitized collections, may represent a significant source of information for the reconstruction of events and backgrounds, enabling historians to cast new light on facts and phenomena as well as to advance new interpretations. A collaboration between teams in Lausanne, at the University of Zurich, and at the C2DH in Luxembourg, the ‘impresso – Media Monitoring of the Past’ project aims to offer an advanced, corpus-oriented answer to the increasing need to access and consult collections of digitized historical newspapers.
[…] Thanks to a suite of computational tools for data extraction, linking and exploration, impresso aims to overcome the traditional keyword-based approach by applying advanced techniques, from lexical processing to semantically deepened n-grams, and from data modelling to interoperability.
[Click ‘Read more’ for the full post!]
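For readers unfamiliar with the terminology, the toy sketch below shows what plain word n-grams look like when counted over a snippet of OCRed newspaper text; it is a generic illustration, not part of impresso’s actual pipeline.

```python
# Generic illustration (not impresso's pipeline): counting word n-grams,
# the kind of lexical unit that corpus-oriented exploration builds on.
from collections import Counter

text = "the federal council met in bern the federal council decided"
tokens = text.split()

def ngrams(seq, n):
    """Yield successive n-token windows from a token sequence."""
    return zip(*(seq[i:] for i in range(n)))

bigram_counts = Counter(ngrams(tokens, 2))
print(bigram_counts.most_common(3))  # e.g. (('the', 'federal'), 2), ...
```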
Introduction: the RIDE journal (the Review Journal of the Institute for Documentology and Scholarly Editing) aims to offer a solution to current misalignments between scholarly workflows and their evaluation, and provides a forum for the critical evaluation of the methodology of digital edition projects. This time, we have been cherry-picking from their latest issue (Issue 11), dedicated to the evaluation and critical improvement of tools and environments.
Ediarum is a toolbox developed for editors by the TELOTA initiative at the BBAW in Berlin to generate and annotate TEI-XML data in German. In his review, Andreas Mertgens touches upon issues of methodology and implementation, use cases, deployment and learning curve, open source, sustainability and extensibility of the tool, and user interaction and GUI, and of course provides a rich functional overview.
[Click ‘Read more’ for the full post!]
The reviewed article presents the BILBO project and illustrates the application of several appropriate machine-learning techniques to the constitution of proper reference corpora and the construction of efficient annotation models. In this way, solutions are proposed for the problem of extracting and processing useful information from bibliographic references in digital documentation, whatever their bibliographic style. It demonstrates the usefulness and high accuracy of CRF techniques, which involve finding the most effective set of features (of three types: input, local and global features) for a given corpus of well-structured bibliographical data (with labels such as surname, forename or title). Moreover, this approach is not only efficient when applied to such traditional, well-structured bibliographical data sets; it also makes an original contribution to the processing of more complicated, less-structured references, such as those contained in footnotes, by applying SVMs with new features for sequence classification.
[Click ‘Read more’ for the full post.]
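The sketch below illustrates the general shape of CRF-based sequence labelling for bibliographic tokens, in the spirit of the approach described above. BILBO relies on its own CRF implementation and feature set, so the library used here (sklearn-crfsuite) and the toy features and labels are stand-ins only.

```python
# Illustrative sketch of CRF sequence labelling for reference tokens.
# sklearn-crfsuite stands in for BILBO's own CRF machinery; the features
# and labels (surname, forename, title) are deliberately minimal.
import sklearn_crfsuite

def token_features(tokens, i):
    """Small local feature set for the token at position i."""
    tok = tokens[i]
    return {
        "lower": tok.lower(),
        "is_capitalized": tok[:1].isupper(),
        "has_digit": any(c.isdigit() for c in tok),
        "position": i,
    }

# One toy training reference: "Smith, John. A History of Editing."
tokens = ["Smith", ",", "John", ".", "A", "History", "of", "Editing", "."]
labels = ["surname", "punc", "forename", "punc",
          "title", "title", "title", "title", "punc"]

X_train = [[token_features(tokens, i) for i in range(len(tokens))]]
y_train = [labels]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X_train, y_train)
print(crf.predict(X_train)[0])  # predicted label for each token
```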
Introduction: In this article, José Calvo Tello offers a methodological guide to data curation for creating a literary corpus for quantitative analysis. This brief tutorial covers all stages of the creation and curation process and guides the reader through practical cases from Hispanic literature. The author deals with every single step in the creation of a literary corpus for quantitative analysis: from digitization, metadata, and automatic processes for cleaning and mining the texts, to licenses, publishing, and archiving/long-term preservation.
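As a small, hypothetical example of the automatic cleaning stage mentioned above (not the author’s actual workflow), the sketch below strips an assumed front-matter marker from a digitized text, normalizes whitespace, and records minimal metadata for the corpus table; the file name and marker are made up.

```python
# Hypothetical cleaning step for a digitized literary text; the file name
# and the front-matter marker are placeholders, not the author's setup.
import re
from pathlib import Path

def clean_text(raw: str) -> str:
    # Drop everything before an assumed end-of-front-matter marker.
    body = raw.split("*** START OF TEXT ***", 1)[-1]
    # Collapse runs of whitespace left over from OCR and line breaks.
    return re.sub(r"\s+", " ", body).strip()

path = Path("novels/example_novel.txt")
text = clean_text(path.read_text(encoding="utf-8"))

metadata = {
    "filename": path.name,
    "n_tokens": len(text.split()),  # crude token count for corpus statistics
}
print(metadata)
```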
Introduction: Hosted at the University of Lausanne, “A world of possibilities. Modal pathways over an extra-long period of time: the diachrony in the Latin language” (WoPoss) is a project under development that exploits a corpus-based approach to the study and reconstruction of the diachrony of modality in Latin.
Following specific annotation guidelines applied to a set of texts spanning the period from the 3rd century BCE to the 7th century CE, the team led by Francesca Dell’Oro aims to analyze the patterns of modality in the Latin language through close consideration of lexical markers.
Introduction: In this post, you can find a thoughtful and encouraging selection and description of reading, writing and organizing tools. It guides you through a whole discovery-management-writing-publishing workflow, from the creation of annotated bibliographies in Zotero, through a useful Markdown syntax cheat sheet, to versioning, storage and backup strategies, and shows how everybody’s research can profit from open digital methods even without sophisticated technological skills. What I particularly like in Tomislav Medak’s approach is that all these tools, practices and tricks are filtered through and tested against his own everyday scholarly routine. It would make perfect sense to create a visualization from this inventory in a similar fashion to these workflows.
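To make one step of such a workflow tangible, here is a hedged sketch that pulls the top items from a Zotero library with the pyzotero client and writes them out as a simple Markdown bibliography; the library ID, API key and field choices are placeholders, not Medak’s own setup.

```python
# Hedged sketch: exporting Zotero items to a Markdown bibliography with
# pyzotero. Library ID and API key are placeholders.
from pathlib import Path
from pyzotero import zotero

zot = zotero.Zotero("1234567", "user", "YOUR_API_KEY")

lines = []
for item in zot.top(limit=10):  # top-level items, excluding attachments and notes
    data = item["data"]
    authors = ", ".join(c.get("lastName", "") for c in data.get("creators", []))
    lines.append(f"- {authors} ({data.get('date', 'n.d.')}): *{data.get('title', '')}*")

Path("bibliography.md").write_text("\n".join(lines), encoding="utf-8")
```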