Do humanists need BERT?

Introduction: Ted Underwood tests a new language representation model called “Bidirectional Encoder Representations from Transformers” (BERT) and asks whether humanists should use it. Given its high degree of difficulty and its limited success (e.g. in questions of genre detection), he concludes that this approach will be important in the future but is not something humanists need to engage with at the moment. An important caveat worth reading.
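
To give a sense of what working with BERT involves in practice, here is a minimal, hypothetical sketch using the Hugging Face transformers library; the genre-classifier model name is invented, and a real one would first have to be fine-tuned on labelled texts.

```python
# Minimal sketch of applying a BERT-style classifier with the Hugging Face
# "transformers" library. The model name below is hypothetical; a real genre
# classifier would first have to be fine-tuned on labelled texts.
from transformers import pipeline

classifier = pipeline("text-classification", model="some-user/bert-genre-classifier")
print(classifier("It was a dark and stormy night on the moors of Devon."))
```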

Evaluating named entity recognition tools for extracting social networks from novels

Introduction: Named Entity Recognition (NER) is used to identify textual elements that name things, such as persons and places. In this study, four different NER tools are evaluated on a corpus of modern and classic fantasy and science fiction novels. Since NER tools have mostly been created for the news domain, it is interesting to see how they perform in an entirely different domain. The article comes with a very detailed methodological part, and the accompanying dataset is also made available.
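
For readers unfamiliar with what NER output looks like, here is a minimal sketch using spaCy; the model choice is an assumption for illustration and not necessarily one of the four tools evaluated in the article.

```python
# Minimal NER sketch with spaCy (illustrative; the model choice is an
# assumption, not one of the tools evaluated in the article).
import spacy

nlp = spacy.load("en_core_web_sm")  # small English pipeline with an NER component
doc = nlp("Frodo left the Shire and travelled with Gandalf to Rivendell.")

for ent in doc.ents:
    print(ent.text, ent.label_)  # prints detected entities with their types
```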

Little package, big dependency

Introduction: The world of R consists of innumerable packages. Most of them have very low download rates because they are limited to specific functions within larger workflows. Based on a surprising experience with his small package clipr, Matthew Lincoln shares his thoughts on this reception phenomenon, especially in the digital humanities.

The Research Software Directory and how it promotes software citation

Introduction: The Research Software Directory of the Netherlands eScience Center provides easy access to software, source code and its documentation. More importantly, it makes it easy to cite software, which is highly advisable when software is used to derive research results. The Research Software Directory positions itself as a platform that eases scientific referencing and the reproducibility of software-based research, a good peer praxis that is still underdeveloped in the humanities.

From Hermeneutics to Data to Networks: Data Extraction and Network Visualization of Historical Sources

Introduction: This lesson by Marten Düring from the Programming Historian website gently introduces novices to the topic of network visualization of historical sources. As a case study, it covers not only the general advantages of network visualization for humanists but also a step-by-step explanation of the process, from the extraction of the data to the visualization (using the Palladio tool). The lesson has also been translated into Spanish and includes many useful references for further reading.
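
As a rough illustration of the extraction-to-visualization step (not Palladio itself, which is browser-based), the following sketch builds a small network with Python's networkx and exports it as the kind of source/target table Palladio accepts; the persons and relations are invented placeholders.

```python
# Hypothetical sketch: turn manually extracted relations from a historical
# source into a network and export it as tabular data for a visualization
# tool such as Palladio. All names and relations are invented examples.
import csv

import networkx as nx

relations = [
    ("Person A", "Person B"),
    ("Person A", "Person C"),
    ("Person B", "Person C"),
]

G = nx.Graph()
G.add_edges_from(relations)

# Write a simple source/target edge table.
with open("edges.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Source", "Target"])
    writer.writerows(G.edges())
```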

Towards Scientific Workflows and Computer Simulation as a Method in Digital Humanities

Introduction: The explore! project tests computer simulation and text mining on autobiographical texts, as well as the reusability of the approach in literary studies. To facilitate the application of the proposed method in a broader context and to new research questions, the text analysis is performed by means of scientific workflows that allow for the documentation, automation, and modularization of the processing steps. By enabling the reuse of proven workflows, the project aims to enhance the efficiency of data analysis in similar projects and to further advance collaboration between computer scientists and digital humanists.
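
The following is a minimal sketch of the general idea behind modular workflows, not the explore! project's actual tooling: each processing step is a small, documented function, and the pipeline is an explicit, reusable composition.

```python
# Minimal sketch of the workflow idea: each step is a small, documented,
# reusable function; the pipeline is an explicit composition of steps.
# This illustrates the general principle only, not explore!'s tooling.
from typing import Callable


def normalize(text: str) -> str:
    """Lowercase the text and strip surrounding whitespace."""
    return text.lower().strip()


def tokenize(text: str) -> list[str]:
    """Split the text into word tokens."""
    return text.split()


def run_pipeline(data, steps: list[Callable]):
    """Apply each step to the output of the previous one."""
    for step in steps:
        data = step(data)
    return data


tokens = run_pipeline("  An Autobiographical Text  ", [normalize, tokenize])
print(tokens)  # ['an', 'autobiographical', 'text']
```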

Not All Character N-grams Are Created Equal: A Study in Authorship Attribution

Introduction: Studying character n-grams is today a classical choice in authorship attribution. While there has been some discussion about the optimal length of these n-grams, we still have few clues as to which specific types of n-grams are the most helpful for efficiently identifying the author of a text. This paper partly fills that gap by showing that most of the information gained from studying character n-grams comes from affixes and punctuation.
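
As a generic illustration of character n-gram features in authorship attribution (not the paper's exact experimental setup), consider the following scikit-learn sketch; the texts and author labels are invented placeholders.

```python
# Generic character n-gram authorship attribution sketch (illustrative only;
# not the paper's experimental design). Texts and labels are placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "a sample text by the first author ...",
    "a sample text by the second author ...",
    "another text by the first author ...",
    "another text by the second author ...",
]
authors = ["one", "two", "one", "two"]

model = make_pipeline(
    # Character 3-grams within word boundaries, which captures affix-like
    # units; "char" instead of "char_wb" would also include cross-word n-grams.
    CountVectorizer(analyzer="char_wb", ngram_range=(3, 3)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, authors)
print(model.predict(["an unseen text by the first author ..."]))
```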

If These Crawls Could Talk: Studying and Documenting Web Archives Provenance

Introduction: With Web archives becoming an increasingly important resource for (humanities) researchers, it also becomes paramount to investigate and understand the ways in which such archives are built and how to make the processes involved transparent. Emily Maemura, Nicholas Worby, Ian Milligan, and Christoph Becker compare three use cases and suggest a framework for documenting Web archive provenance.