Introduction: This article proposes a collaboration between FactMiners and the Transkribus project to help the Transkribus team evolve the “sustainable virtuous” ecosystem they describe as a Transcription & Recognition Platform, a Social Machine for Job Creation & Skill Development in the 21st Century!
Introduction: This blog post not only presents a technique for measuring poetic meter and using it to plot distances between poets, but also offers insight into the theoretical and empirical process leading to those results.
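The idea of plotting distances between poets by meter can be sketched in a few lines. The following Python fragment is illustrative only, not the post's actual method: it assumes each poem has already been scansioned into stress-pattern strings (a purely hypothetical encoding, with `-` for unstressed and `+` for stressed syllables), represents a poet as the relative frequencies of those patterns, and compares two poets with a Euclidean distance.

```python
from collections import Counter
from math import sqrt

def meter_profile(scansions):
    """Relative frequency of each stress pattern in a poet's corpus.

    `scansions` is a list of strings such as "-+-+-+-+"; the encoding
    is a hypothetical stand-in for real scansion data.
    """
    counts = Counter(scansions)
    total = sum(counts.values())
    return {pattern: n / total for pattern, n in counts.items()}

def meter_distance(profile_a, profile_b):
    """Euclidean distance between two poets' meter profiles."""
    patterns = set(profile_a) | set(profile_b)
    return sqrt(sum((profile_a.get(p, 0.0) - profile_b.get(p, 0.0)) ** 2
                    for p in patterns))

# Two toy "poets": mostly iambic vs. mostly trochaic lines.
iambic = meter_profile(["-+-+-+-+", "-+-+-+-+", "+--+-+-+"])
trochaic = meter_profile(["+-+-+-+-", "+-+-+-+-"])
d = meter_distance(iambic, trochaic)
```

Pairwise distances computed this way can then be fed to any standard projection (multidimensional scaling, dendrograms) to produce the kind of poet-distance plot the post describes.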
Introduction: Apart from its buoyant conclusion that authorship attribution methods are rather robust to noise (transcription errors) introduced by optical character recognition and handwritten text recognition, this article also offers a comprehensive read on the application of sophisticated computational techniques for testing and validation in a data curation process.
Introduction by OpenMethods editor Klaus Thoden: This blog post describes how the National Library of Wales makes use of Wikidata for enriching their collections. It especially showcases new features for…
Introduction: This article explains the concept, the uses and the procedural steps of text mining. It further provides information on available teaching courses and encourages readers to use the OpenMinTeD platform for this purpose.
Introduction: Ecologists are much aided by historical sources of information on human-animal interaction. But how does one cope with the plethora of different descriptions for the same animal in the historic record? A Dutch research group reports on how to aggregate ‘Bunzings’, ‘Ullingen’, and ‘Eierdieven’ (‘Egg-thieves’) into a useful historical ecology knowledge base.
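The aggregation problem the Dutch group tackles, many historical variant names for one animal, reduces conceptually to mapping spellings onto a canonical identity. A minimal Python sketch, using the variant names quoted in the introduction but an otherwise hypothetical mapping and lookup function:

```python
# Illustrative only: a tiny variant-name gazetteer for the European
# polecat (Mustela putorius). The variants come from the introduction;
# the canonical mapping and normalisation are assumptions, not the
# research group's actual knowledge-base schema.
VARIANTS = {
    "bunzing": "Mustela putorius",
    "ulling": "Mustela putorius",
    "eierdief": "Mustela putorius",  # 'egg-thief'
}

def canonical(name: str) -> str:
    """Map a historical variant spelling to a canonical species name."""
    return VARIANTS.get(name.strip().lower(), "unknown")
```

With such a mapping in place, occurrences under any recorded spelling can be counted toward the same species when building the historical ecology knowledge base.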
Introduction: This article describes the possibilities offered by the ggplot2 package for network visualization. This R package lets the user choose from a wide variety of graphic styles and include supplementary information about vertices and edges.
Introduction: This article introduces a novel way to unfold and discover patterns in complex texts, at the intersection between macro and micro analytics. This technique, called TIC (Transcendental Information Cascades), allows analysis of how a cast of characters is generated and managed dynamically over the duration of a text.
Introduction: The article discusses how letters are being used across the disciplines, identifying similarities and differences in transcription, digitisation and annotation practices. It is based on a workshop held after the end of the project Digitising experiences of migration: the development of interconnected letters collections (DEM). The aims were to examine issues and challenges surrounding digitisation, build capacity relating to correspondence mark-up, and initiate the process of interconnecting resources to encourage cross-disciplinary research. Subsequent to the DEM project, TEI templates were developed for capturing information within and about migrant correspondence, and visualisation tools were trialled with metadata from a sample of letter collections. Additionally, as a demonstration of how the project’s outputs could be repurposed and expanded, the correspondence metadata that was collected for DEM was added to a more general correspondence project, Visual Correspondence.
Introduction: In the context of medieval and early Tudor texts scholarship, this paper discusses the methodological use of the database not simply to store information, but to surface points of tension between the questions asked and the information available, and thereby to find answers.