Introduction: GROBID is a well-known open source tool in the field of Digital Humanities, originally built to extract and parse bibliographic metadata from scholarly works. The acronym stands for GeneRation Of BIbliographic Data.
Shaped by use cases and adoption across a range of DH and non-DH settings, the tool has progressively evolved into a suite of technical features currently applied in various fields, such as journals, dictionaries and archives.
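To give a concrete impression of how GROBID is typically used: the tool runs as a service, so extraction amounts to posting a PDF to one of its REST endpoints. The sketch below is our illustration rather than code from the post; it assumes a GROBID service running locally on its documented default port, and "paper.pdf" is a stand-in file name.

```python
import requests

# Minimal sketch (ours, not from the post): GROBID runs as a service, so
# header extraction is a single multipart POST. Port 8070 and the "input"
# field name are the documented defaults; "paper.pdf" is a stand-in file.
GROBID_URL = "http://localhost:8070/api/processHeaderDocument"

with open("paper.pdf", "rb") as pdf:
    response = requests.post(GROBID_URL, files={"input": pdf})

response.raise_for_status()
print(response.text)  # bibliographic metadata as TEI XML
```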
[Click ‘Read more’ for the full post!]
Category: Tools
Introduction: In this blog post, Michael Schonhardt explores and evaluates a range of freely available, Open Source tools – Inkscape, Blender, Stellarium, SketchUp – that enable the digital 3D modelling of medieval scholarly objects. These diverse tools offer easily implementable solutions for both the analysis and the communication of results in object-related cultural studies and are especially suitable for projects with small budgets.
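As a taste of what scripted 3D modelling looks like in practice, the snippet below uses Blender's built-in Python API (bpy) to assemble a simple sphere-and-rings composition, loosely evocative of an armillary sphere. It is our illustrative sketch, not code from the post, and all dimensions are arbitrary.

```python
# Run inside Blender's scripting workspace; bpy is Blender's built-in API.
# Illustrative sketch only (not from the post): a sphere plus two rings,
# loosely evocative of an armillary sphere; all dimensions are arbitrary.
import bpy

bpy.ops.mesh.primitive_uv_sphere_add(radius=0.2, location=(0, 0, 0))
bpy.ops.mesh.primitive_torus_add(major_radius=1.0, minor_radius=0.02,
                                 location=(0, 0, 0))
bpy.ops.mesh.primitive_torus_add(major_radius=1.0, minor_radius=0.02,
                                 location=(0, 0, 0),
                                 rotation=(1.5708, 0, 0))  # tilt one ring 90 degrees

bpy.ops.wm.save_as_mainfile(filepath="scholarly_object.blend")
```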
Historical newspapers, already available in many digitized collections, may represent a significant source of information for the reconstruction of events and backgrounds, enabling historians to cast new light on facts and phenomena, as well as to advance new interpretations. A collaboration between partners in Lausanne, at the University of Zurich and at C2DH Luxembourg, the ‘impresso – Media Monitoring of the Past’ project aims to offer an advanced corpus-oriented answer to the increasing need to access and consult collections of historical digitized newspapers.
[…] Thanks to a suite of computational tools for data extraction, linking and exploration, impresso aims to overcome the traditional keyword-based approach by applying advanced techniques, from lexical processing to semantically deepened n-grams, from data modelling to interoperability.
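For readers new to n-grams, even a toy version makes clear what lies beyond keyword search. The sketch below is our illustration, not impresso's actual pipeline: it counts word bigrams in a stand-in snippet of OCRed newspaper text.

```python
from collections import Counter

# Toy sketch (ours, not impresso's pipeline): counting word bigrams in a
# stand-in snippet of OCRed newspaper text.
def ngrams(tokens, n):
    """Yield all contiguous n-token sequences."""
    return zip(*(tokens[i:] for i in range(n)))

page = "the federal council met in bern the federal council decided"
tokens = page.split()
bigram_counts = Counter(ngrams(tokens, 2))
print(bigram_counts.most_common(3))  # top recurring word pairs
```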
[Click ‘Read more’ for the full post!]
Introduction: The RIDE journal (the Review Journal of the Institute for Documentology and Scholarly Editing) aims to offer a solution to current misalignments between scholarly workflows and their evaluation, and provides a forum for the critical evaluation of the methodology of digital edition projects. This time, we have been cherry-picking from their latest issue (Issue 11), dedicated to the evaluation and critical improvement of tools and environments.
Ediarum is a toolbox developed for editors by the TELOTA initiative at the BBAW in Berlin to generate and annotate TEI-XML data in German. In his review, Andreas Mertgens touches upon issues regarding methodology and implementation, use cases, deployment and learning curve, Open Source status, sustainability and extensibility of the tool, user interaction and GUI, and of course offers a rich functional overview.
[Click ‘Read more’ for the full post!]
This short blog post by Laure Barbot, Frank Fischer, Yoan Moranville and Ivan Pozdniakov from 2019 sheds some light on the old question of which DH tools are actually used in research and which are especially popular.
Stanford CoreNLP aims to provide a complete Java-based set of tools for various aspects of language analysis, from annotation to dependency parsing, from lemmatization to coreference resolution. It thus provides a range of tools which can potentially be applied to languages other than English.
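For readers who want to try the suite without writing Java: CoreNLP ships with an HTTP server that can be queried from any language. The sketch below is our illustration rather than code from the post; it assumes a server started locally on the default port 9000.

```python
import json
import requests

# Minimal sketch (ours, not from the post), assuming a CoreNLP server was
# started locally, e.g.:
#   java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000
props = {"annotators": "tokenize,ssplit,pos,lemma,depparse",
         "outputFormat": "json"}
response = requests.post("http://localhost:9000",
                         params={"properties": json.dumps(props)},
                         data="Stanford CoreNLP parses this sentence.".encode("utf-8"))
doc = response.json()
for token in doc["sentences"][0]["tokens"]:
    print(token["word"], token["lemma"], token["pos"])
```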
Among the languages to which Stanford CoreNLP is mainly applied is Italian, for which the Tint pipeline has been developed, as described in the paper “Italy goes to Stanford: a collection of CoreNLP modules for Italian” by Alessio Palmero Aprosio and Giovanni Moretti.
On the Tint webpage the whole pipeline can be found and downloaded: it comprises tokenization and sentence splitting, morphological analysis and lemmatization, part-of-speech tagging, named-entity recognition and dependency parsing, with further wrappers under construction. [Click ‘Read more’ for the whole post.]
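Tint exposes a similar server-based interface for Italian. The sketch below is again our illustration; the port and endpoint are assumptions on our part, so check the Tint webpage for the exact invocation of your installed version.

```python
import requests

# Hedged sketch: Tint also ships a REST server returning CoreNLP-style
# JSON. The port (8012) and endpoint (/tint) are assumptions here; check
# the Tint webpage for the exact invocation of your installed version.
response = requests.get("http://localhost:8012/tint",
                        params={"text": "Il gatto dorme sul divano."})
doc = response.json()
for token in doc["sentences"][0]["tokens"]:
    print(token["word"], token["lemma"], token["pos"])
```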
Introduction: Ted Underwood tests a new language representation model called “Bidirectional Encoder Representations from Transformers” (BERT) and asks whether humanists should use it. Due to its high degree of difficulty and its limited success so far (e.g. in questions of genre detection), he concludes that this approach will be important in the future, but that it is not something humanists need to deal with at the moment. An important caveat worth reading.
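For orientation, this is roughly what "using BERT" involves in practice. The sketch below is not Underwood's experiment but a minimal illustration via the Hugging Face transformers library; the model name, label count and example text are our assumptions, and the untrained classification head hints at the fine-tuning effort Underwood describes.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Not Underwood's experiment: a minimal sketch of what "using BERT" for a
# classification task (e.g. genre detection) looks like via the Hugging
# Face transformers library. Model name, label count and the example text
# are our assumptions; the classification head is untrained.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

inputs = tokenizer("It was a dark and stormy night.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # probabilities are meaningless until fine-tuned
```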
Introduction: Digital humanists looking for tools to visualize and analyze texts can rely on ‘Voyant Tools’ (https://voyant-tools.org), a software package created by S. Sinclair and G. Rockwell. Online resources are available to learn how to use Voyant. In this post, we highlight two of them: “Using Voyant-Tools to Formulate Research Questions for Textual Data” by Filipa Calado (GC Digital Fellows) and the tutorial “Investigating texts with Voyant” by Miriam Posner.
Introduction: The Research Software Directory of the Netherlands eScience Center provides easy access to software, source code and its documentation. More importantly, it makes it easy to cite software, which is highly advisable when using software to derive research results. The Research Software Directory positions itself as a platform that eases scientific referencing and the reproducibility of software-based research: good peer praxis that is still underdeveloped in the humanities.
Introduction: Standards are best explained through real-life use cases. The PARTHENOS Standardization Survival Kit is a collection of research use case scenarios illustrating best practices in Digital Humanities and Heritage research. It is designed to support researchers in selecting and using the appropriate standards for their particular disciplines and workflows. The latest addition to the SSK is a scenario for creating a born-digital dictionary in TEI.
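To make the scenario tangible: a born-digital dictionary in TEI boils down to structured <entry> elements. The sketch below is our illustration rather than part of the SSK scenario; it builds one such entry programmatically with Python's lxml, where the element names follow the TEI Dictionaries module and the content is invented.

```python
from lxml import etree

# Minimal sketch (ours, not the SSK scenario): one TEI dictionary <entry>
# built with lxml. Element names follow the TEI Dictionaries module; the
# headword, part of speech and definition are an invented example.
TEI = "http://www.tei-c.org/ns/1.0"

entry = etree.Element(f"{{{TEI}}}entry", nsmap={None: TEI})
orth = etree.SubElement(etree.SubElement(entry, f"{{{TEI}}}form", type="lemma"),
                        f"{{{TEI}}}orth")
orth.text = "codex"
pos = etree.SubElement(etree.SubElement(entry, f"{{{TEI}}}gramGrp"),
                       f"{{{TEI}}}pos")
pos.text = "noun"
definition = etree.SubElement(etree.SubElement(entry, f"{{{TEI}}}sense"),
                              f"{{{TEI}}}def")
definition.text = "a bound manuscript book"

print(etree.tostring(entry, pretty_print=True).decode())
```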