FactGrid – a database for historians

FactGrid is both a database and a wiki. The project is operated by the Gotha Research Centre together with the data lab of the University of Erfurt. It uses MediaWiki and Wikidata's "Wikibase" extension to collect data from historical research. With FactGrid you can build a knowledge graph, recording information as triple statements, and that knowledge graph can be queried with SPARQL. All data provided by FactGrid is published under a CC0 license.
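
For a first taste of what such queries look like, here is a minimal Python sketch that sends a SPARQL request to FactGrid's query service. The endpoint URL and the demo query are assumptions for illustration; check the project site for the live details.

```python
# Minimal sketch: querying a Wikibase SPARQL endpoint with Python.
# The endpoint URL below is an assumption, not confirmed documentation.
import requests

ENDPOINT = "https://database.factgrid.de/sparql"  # assumed endpoint

# Fetch a handful of items with English labels from the knowledge graph.
query = """
SELECT ?item ?itemLabel WHERE {
  ?item rdfs:label ?itemLabel .
  FILTER(LANG(?itemLabel) = "en")
}
LIMIT 5
"""

response = requests.get(
    ENDPOINT,
    params={"query": query, "format": "json"},
    headers={"User-Agent": "factgrid-demo/0.1"},
)
response.raise_for_status()

for binding in response.json()["results"]["bindings"]:
    print(binding["item"]["value"], "-", binding["itemLabel"]["value"])
```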

Linked Data from TEI (LIFT): A Teaching Tool for TEI to Linked Data Transformation

TEI is among the most widely used formats for producing digital scholarly editions in various literary fields. LIFT is a Python-based tool that programmatically extracts information from digital texts annotated in TEI, modelling the persons, places, events and relations annotated in them as a knowledge graph that reuses ontologies and controlled vocabularies from the Digital Humanities domain.
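
LIFT's actual interface may differ, but the underlying technique (extracting annotated entities from TEI and re-expressing them as RDF triples) can be sketched in a few lines of Python. The TEI snippet and the example.org ontology namespace below are placeholders, not LIFT's own vocabulary choices.

```python
# Not LIFT's actual API: a sketch of the general TEI-to-RDF technique
# using xml.etree for parsing and rdflib for the graph. The example.org
# namespace and the tiny TEI snippet are placeholders.
import xml.etree.ElementTree as ET
from rdflib import Graph, Literal, Namespace, RDF, RDFS

TEI_NS = {"tei": "http://www.tei-c.org/ns/1.0"}
EX = Namespace("http://example.org/edition/")  # placeholder ontology

tei_source = """<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <text><body><p>
    <persName ref="#goethe">Goethe</persName> visited
    <placeName ref="#weimar">Weimar</placeName>.
  </p></body></text>
</TEI>"""

root = ET.fromstring(tei_source)
g = Graph()

# Turn each annotated person and place into a typed, labelled node.
for element, rdf_class in (("persName", EX.Person), ("placeName", EX.Place)):
    for hit in root.iterfind(f".//tei:{element}", TEI_NS):
        subject = EX[hit.get("ref", "").lstrip("#")]
        g.add((subject, RDF.type, rdf_class))
        g.add((subject, RDFS.label, Literal(hit.text)))

print(g.serialize(format="turtle"))
```

Running the sketch prints a small Turtle graph with one person and one place node, which is the shape of output a SPARQL endpoint can then serve.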

Collaborative Digital Projects in the Undergraduate Humanities Classroom: Case Studies with Timeline JS

OpenMethods introduction by Marinella Testori (2022-05-11): https://openmethods.dariah.eu/2022/05/11/open-source-tool-allows-users-to-create-interactive-timelines-digital-humanities-at-a-state/

BERT for Humanists: a deep learning language model meets DH

Introduction: Awarded Best Long Paper at the 2019 NAACL (North American Chapter of the Association for Computational Linguistics) Conference, the contribution by Jacob Devlin et al. introduces "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding" (https://aclanthology.org/N19-1423/).

As the authors highlight in the abstract, BERT is a "new language representation model" and, in the past few years, it has become widespread in various NLP applications; one project exploiting it is CamemBERT (https://camembert-model.fr/), a BERT model for French.

In June 2021, a workshop organized by David Mimno, Melanie Walsh and Maria Antoniak (https://melaniewalsh.github.io/BERT-for-Humanists/workshop/) showed how to use BERT in digital humanities projects to deal with word similarity and classification, relying on the Python-based HuggingFace transformers library (https://melaniewalsh.github.io/BERT-for-Humanists/tutorials/). A further advantage of this training resource is that it was written with the target audience in mind: it provides a gentle introduction to the complexities of language models for scholars whose education and background lie outside Computer Science.
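
To give a flavour of the word-similarity exercises the tutorials cover, here is a minimal sketch (not the workshop's own code) that uses the HuggingFace transformers library to compare BERT's contextual vectors for the same word in two sentences. The model choice and example sentences are illustrative.

```python
# Minimal sketch: contextual word vectors with HuggingFace transformers.
# Model and sentences are illustrative, not the workshop's exact setup.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def word_vector(sentence: str, word: str) -> torch.Tensor:
    """Return the contextual embedding of `word`'s first occurrence."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
    return hidden[tokens.index(word)]

# The same surface form receives different vectors in different contexts.
v_river = word_vector("the boat reached the bank of the river", "bank")
v_money = word_vector("she deposited money at the bank", "bank")
print(torch.cosine_similarity(v_river, v_money, dim=0).item())
```

Because BERT's embeddings are contextual, the two occurrences of "bank" receive different vectors, and the printed similarity stays below 1.0.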

Along with the tutorials, the same blog includes introductions to BERT in general and to its specific usage in a Google Colab notebook, as well as a continually updated bibliography and a glossary of the main terms ('attention', 'fine-tune', 'GPU', 'label', 'task', 'transformers', 'token', 'type', 'vector').

Diseño de corpus literario para análisis cuantitativos [Designing a literary corpus for quantitative analysis]

Introduction: In this article, José Calvo Tello offers a methodological guide to data curation for building a literary corpus for quantitative analysis. The brief tutorial covers every stage of the creation and curation process, illustrated with practical cases from Hispanic literature: from digitization and metadata, through automatic processes for cleaning and mining the texts, to licensing, publishing and archiving/long-term preservation.
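
As an illustration of what one of those steps can look like in practice (this is not Calvo Tello's own pipeline), here is a minimal Python sketch that pairs plain-text files with a metadata table and applies a simple cleaning pass. The file layout and metadata fields are assumptions.

```python
# Minimal sketch of one curation step: joining texts with metadata and
# cleaning them. The corpus/ layout and CSV columns are assumptions.
import csv
import re
from pathlib import Path

CORPUS_DIR = Path("corpus")      # assumed: one .txt file per work
METADATA = Path("metadata.csv")  # assumed columns: filename, author, year

def clean(text: str) -> str:
    """Drop page-number-only lines left by OCR and normalize whitespace."""
    text = re.sub(r"^\s*\d+\s*$", "", text, flags=re.MULTILINE)
    return re.sub(r"\s+", " ", text).strip()

with METADATA.open(encoding="utf-8") as fh:
    for row in csv.DictReader(fh):
        raw = (CORPUS_DIR / row["filename"]).read_text(encoding="utf-8")
        print(row["author"], row["year"], len(clean(raw).split()), "words")
```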

Digital scholarship workflows

Introduction: In this post, you can find a thoughtful and encouraging selection and description of reading, writing and organizing tools. It guides you through a whole discovery-management-writing-publishing workflow, from the creation of annotated bibliographies in Zotero, through a useful Markdown syntax cheat sheet, to versioning, storage and backup strategies, and shows how everybody's research can profit from open digital methods even without sophisticated technological skills. What I particularly like in Tomislav Medak's approach is that all these tools, practices and tricks are filtered through and tested against his own everyday scholarly routine. It would make perfect sense to create a visualization of this inventory in a similar fashion to these workflows.

DH Research Software Engineers – For We Are Many

Introduction: This white paper is an outcome of a DH2019 workshop dedicated to fostering closer collaboration between technology-oriented DH researchers and the developers of tools that support Digital Humanities research. The paper briefly outlines the most pressing issues in this collaboration and addresses topics such as good practices to ease mutual understanding between scholars and developers; software development and academic careers and recognition; and sustainability and funding.