Introduction: In this article, José Calvo Tello offers a methodological guide to data curation for creating literary corpora for quantitative analysis. This brief tutorial covers all stages of the curation and creation process and guides the reader through practical cases from Hispanic literature. The author addresses every step in the creation of a literary corpus for quantitative analysis: from digitization, metadata, and automatic processes for cleaning and mining the texts, to licenses, publishing, and archiving/long-term preservation.
Introduction: Linked Data and Linked Open Data are attracting increasing interest and application in many fields. A recent experiment conducted in 2018 at Furman University illustrates and discusses, from a pedagogical perspective, some of the challenges posed by Linked Open Data applied to research in the historical domain.
“Linked Open Data to navigate the Past: using Peripleo in class” by Chiara Palladino describes the use of the search engine Peripleo to reconstruct the past of four archaeologically relevant cities. Many databases, comprising various types of information, were consulted, and the results, as highlighted in Palladino’s contribution, show both the advantages and the limitations of a Linked Open Data-oriented approach to historical investigation.
Introduction: Named Entity Recognition (NER) is used to identify textual elements that give things a name. In this study, four different NER tools are evaluated on a corpus of modern and classic fantasy or science fiction novels. Since NER tools were originally developed for the news domain, it is interesting to see how they perform in a totally different domain. The article includes a very detailed methodological section, and the accompanying dataset is also made available.
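As a minimal sketch of what such an evaluation involves (this is illustrative only and not code from the study; the entity spans and labels below are hypothetical), one can compare a tool's predicted entities against hand-annotated gold entities and compute precision, recall, and F1:

```python
# Illustrative sketch: scoring one NER tool's output against a gold standard.
# Entities are (start, end, label) spans; exact span+label matches count.

def score_entities(gold, predicted):
    """Return precision, recall, and F1 for exact entity matches."""
    gold_set, pred_set = set(gold), set(predicted)
    true_positives = len(gold_set & pred_set)
    precision = true_positives / len(pred_set) if pred_set else 0.0
    recall = true_positives / len(gold_set) if gold_set else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical gold annotations for one sentence of fantasy prose:
gold = [(0, 5, "PERSON"), (20, 26, "LOC")]
# Hypothetical tool output: it found the person but mislabeled the place.
predicted = [(0, 5, "PERSON"), (20, 26, "ORG")]

precision, recall, f1 = score_entities(gold, predicted)
print(f"P={precision:.2f} R={recall:.2f} F1={f1:.2f}")  # P=0.50 R=0.50 F1=0.50
```

Real evaluations typically also report partial-match and per-label scores, but the core comparison between domains (news vs. fiction) rests on figures like these.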
Introduction: In this article, Nicolás Quiroga reflects on the fundamental place of the note-taking practice in the work of historians. The article also reviews some tools for classifying information (which do not substantially affect the note-taking activity) and suggests how the use of these tools can create new digital approaches for historians.
Introduction: The FAIR Data Principles (Findable, Accessible, Interoperable, Reusable) aim to make clear the need to improve the infrastructure supporting the reuse of scholarly data. The FAIR Data Principles emphasize the ability of machines to automatically find and use the data, in addition to supporting its reuse by individuals, key activities for Digital Humanities research. The post below summarizes how Europeana’s principles (Usable, Mutual, Reliable) align with the FAIR Data Principles, enhancing the findability, accessibility, interoperability, and reuse of digitised cultural heritage.
Introduction: Standards are best explained through real-life use cases. The Parthenos Standardization Survival Kit is a collection of research use case scenarios illustrating best practices in Digital Humanities and Heritage research. It is designed to support researchers in selecting and using the appropriate standards for their particular disciplines and workflows. The latest addition to the SSK is a scenario for creating a born-digital dictionary in TEI.
Introduction: The explore! project tests computer simulation and text mining on autobiographical texts, as well as the reusability of the approach in literary studies. To facilitate the application of the proposed method in broader contexts and to new research questions, the text analysis is performed by means of scientific workflows that allow for the documentation, automation, and modularization of the processing steps. By enabling the reuse of proven workflows, the project aims to enhance the efficiency of data analysis in similar projects and to further advance collaboration between computer scientists and digital humanists.
Introduction: This is a comprehensive account of a workshop on research data in the study of the past. It introduces a broad spectrum of aspects and questions related to the growing relevance of digital research data and methods for this discipline, as well as the methodological and conceptual consequences they entail, especially the need for a shared understanding of standards.
Introduction: This blog post describes how the National Library of Wales makes use of Wikidata for enriching their collections. It especially showcases new features for visualizing items on a map, including a clustering service and support for polygons and multipolygons. It also shows how polygons such as the shapes of buildings can be imported from OpenStreetMap into Wikidata, a great example of reusing already existing information.
Introduction: This article proposes establishing a collaboration between FactMiners and the Transkribus project that would help the Transkribus team evolve the “sustainable virtuous” ecosystem they described as a Transcription & Recognition Platform: a Social Machine for Job Creation & Skill Development in the 21st Century!