In the next Spotlights episode, we look behind the scenes of TaDiRAH with Dr. Luise Borek and Dr. Canan Hastik, who give us a rich introduction to its new version. We discuss the communities around TaDiRAH, the evolution of DH, open data culture, linking with Wikidata…and much more!
Category: Modeling
Modeling is the activity of creating an abstract representation of a complex phenomenon, usually in a machine-readable and possibly in an interactive way (i.e. it includes “simulation”). Models become machine-readable when modeling produces a schema that describes the elements and the structure of an object of inquiry in an explicit way. Modeling can also refer to the activity of transforming or manipulating a digital object so as to make it compatible with a previously constructed model or schema. Mapping, for instance, produces a spatial model. Workflow design is included as part of Modeling, using an object such as Process.
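To make this concrete, here is a minimal sketch – in Python, with invented field names – of a machine-readable schema for an object of inquiry: the model names the elements and structure explicitly, so that instances can be validated and queried.

```python
from dataclasses import dataclass, field
from typing import List

# A minimal, invented schema for an object of inquiry (here: a letter in a
# correspondence corpus). The model makes its elements and structure
# explicit, so that machines can validate, query, and exchange instances.
@dataclass
class Letter:
    sender: str
    recipient: str
    date: str                 # ISO 8601, e.g. "1827-02-15"
    places: List[str] = field(default_factory=list)
    transcript: str = ""

# An instance that conforms to the schema.
letter = Letter(
    sender="A. v. Humboldt",
    recipient="C. F. Gauss",
    date="1827-02-15",
    places=["Berlin"],
)
print(f"{letter.sender} -> {letter.recipient} ({letter.date})")
```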
Introduction: In this blog post, Michael Schonhardt explores and evaluates a range of freely available, Open Source tools – Inkscape, Blender, Stellarium, SketchUp – that enable the digital 3D modelling of medieval scholarly objects. These diverse tools offer readily implementable solutions for both the analysis and the communication of results in object-related cultural studies and are especially suitable for projects with small budgets.
Introduction: Historical newspapers, already available in many digitized collections, represent a significant source of information for the reconstruction of events and their backgrounds, enabling historians to cast new light on facts and phenomena and to advance new interpretations. A joint endeavour of teams in Lausanne, at the University of Zurich, and at C2DH Luxembourg, the ‘impresso – Media Monitoring of the Past’ project aims to offer an advanced, corpus-oriented answer to the growing need to access and consult collections of digitized historical newspapers.
[…] Thanks to a suite of computational tools for data extraction, linking, and exploration, impresso aims to move beyond the traditional keyword-based approach by applying advanced techniques, from lexical processing to semantically deepened n-grams, from data modelling to interoperability.
[Click ‘Read more’ for the full post!]
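To make the n-grams mentioned in the excerpt concrete, here is a toy sketch of plain n-gram counting; the “semantically deepened” n-grams of impresso’s actual pipeline go well beyond this.

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams (as tuples) over a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "the monitoring of the past is the monitoring of the press".split()
bigram_counts = Counter(ngrams(tokens, 2))
print(bigram_counts.most_common(3))
# [(('the', 'monitoring'), 2), (('monitoring', 'of'), 2), (('of', 'the'), 2)]
```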
Introduction: In our guidelines for nominating content, databases are explicitly excluded. However, this database is an exception – not because of the burning issue of COVID-19, but because of the exemplary variety of digital humanities methods with which its data can be processed. AVOBMAT makes it possible to process 51,000 articles with almost every conceivable approach (topic modeling, network analysis, an n-gram viewer, KWIC analyses, gender analyses, lexical diversity metrics, and so on) and is thus much more than a simple database – rather, it is a welcome stage for the Who is Who (or What is What?) of OpenMethods.
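For readers unfamiliar with KWIC (keyword-in-context), one of the analyses listed above, here is a minimal sketch of the basic technique – not AVOBMAT’s implementation.

```python
def kwic(tokens, keyword, window=4):
    """Print every occurrence of `keyword` with `window` tokens of context."""
    for i, tok in enumerate(tokens):
        if tok.lower() == keyword.lower():
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            print(f"{left:>28} | {tok} | {right}")

tokens = ("the plague reached the city in spring and the plague "
          "returned the following year").split()
kwic(tokens, "plague")
```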
Introduction: The RIDE journal (the Review Journal of the Institute for Documentology and Scholarly Editing) aims to offer a solution to current misalignments between scholarly workflows and their evaluation, and provides a forum for the critical evaluation of the methodology of digital edition projects. This time, we have been cherry-picking from their latest issue (Issue 11), dedicated to the evaluation and critical improvement of tools and environments.
Ediarum is a toolbox developed for editors by the TELOTA initiative at the BBAW in Berlin to generate and annotate TEI-XML data in German. In his review, Andreas Mertgens touches upon methodology and implementation, use cases, deployment and learning curve, Open Source, sustainability and extensibility of the tool, and user interaction and the GUI – and, of course, offers a rich functional overview.
[Click ‘Read more’ for the full post!]
Introduction: Hosted at the University of Lausanne, “A world of possibilities. Modal pathways over an extra-long period of time: the diachrony of modality in the Latin language” (WoPoss) is a project under development that takes a corpus-based approach to the study and reconstruction of the diachrony of modality in Latin.
Following specific annotation guidelines applied to a set of texts spanning the 3rd century BCE to the 7th century CE, the team led by Francesca Dell’Oro aims to analyze the patterns of modality in the Latin language through a close consideration of lexical markers.
Introduction by OpenMethods Editor (Christopher Nunn): Information visualizations are helpful in detecting patterns in large amounts of text and are often used to illustrate complex relationships. Not only can they show descriptive phenomena that could be revealed in other ways, albeit more slowly and laboriously, but they can also heuristically generate new knowledge. The authors of this article did just that. The focus here is, fortunately, on narratological approaches, which have so far hardly been combined with digital text analyses but are ideally suited to them. For eight German novellas, a variety of interactive visualizations were created, all of which show that combining digital methods with narratological interests can yield great returns for work in literary studies. After reading this article, it pays to think ahead in this field.
Introduction: The first steps into working with digital methods of text analysis are often made with beginner-friendly tools. The DARIAH-DE TopicsExplorer opens up the world of topic modeling with an easy-to-understand GUI, numerous operating options, and high-quality results. The forText team at the University of Hamburg has developed a tutorial (Lerneinheit) that guides users step by step, from installing the software to obtaining first results with a sample corpus. Following forText’s didactic concept, the tutorial also contains screenshots, videos, exercises, and explanations.
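For readers curious about what happens behind such a GUI, here is a minimal topic-modeling sketch on a toy corpus, using scikit-learn’s LDA as an assumed stand-in; the TopicsExplorer’s own pipeline may differ in preprocessing and modeling details.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Four tiny "documents"; real corpora would be full texts.
docs = [
    "ship sea storm sailor harbor",
    "sea harbor ship voyage wind",
    "king court castle crown law",
    "law crown king throne court",
]
vec = CountVectorizer()
dtm = vec.fit_transform(docs)                      # document-term matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)

# Show the top words per topic.
terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-4:][::-1]]
    print(f"topic {k}:", ", ".join(top))
```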
Introduction: Ted Underwood tests a new language representation model called BERT (“Bidirectional Encoder Representations from Transformers”) and asks whether humanists should use it. Due to its high degree of difficulty and its limited success (e.g. in questions of genre detection), he concludes that this approach will be important in the future but is not something humanists need to engage with at the moment. An important caveat worth reading.
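As a minimal sketch of what working with BERT can look like – assuming the Hugging Face transformers library, which is not necessarily Underwood’s setup – the following extracts one embedding vector per text, the kind of feature a downstream genre classifier could build on.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

text = "It was a dark and stormy night."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Average the token embeddings into one fixed-length vector per text;
# such vectors can feed a classifier (e.g. for genre experiments).
embedding = outputs.last_hidden_state.mean(dim=1)
print(embedding.shape)  # torch.Size([1, 768])
```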
Introduction: Standards are best explained through real-life use cases. The PARTHENOS Standardization Survival Kit (SSK) is a collection of research use case scenarios illustrating best practices in Digital Humanities and heritage research. It is designed to support researchers in selecting and using the appropriate standards for their particular disciplines and workflows. The latest addition to the SSK is a scenario for creating a born-digital dictionary in TEI.
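To give a flavour of what such a born-digital entry looks like, here is a minimal, hypothetical TEI dictionary entry built with Python’s lxml; the SSK scenario itself prescribes which TEI elements and attributes to use.

```python
from lxml import etree

TEI = "http://www.tei-c.org/ns/1.0"

# A single, invented dictionary entry: a headword plus one sense.
entry = etree.Element(f"{{{TEI}}}entry", nsmap={None: TEI})
form = etree.SubElement(entry, f"{{{TEI}}}form", type="lemma")
etree.SubElement(form, f"{{{TEI}}}orth").text = "modellare"
sense = etree.SubElement(entry, f"{{{TEI}}}sense")
etree.SubElement(sense, f"{{{TEI}}}def").text = "to model, to shape"

print(etree.tostring(entry, pretty_print=True).decode())
```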