TEI editions are among the tools most widely used by scholarly editors to produce digital editions across literary fields. LIFT is a Python-based tool that makes it possible to programmatically extract information from digital texts annotated in TEI, modelling the persons, places, events and relations annotated in the text as a Knowledge Graph that reuses ontologies and controlled vocabularies from the Digital Humanities domain.
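To give a flavour of the general technique, the sketch below parses a tiny TEI fragment and emits subject-predicate-object triples for the annotated persons and places. This is not LIFT's actual API; the element names follow the TEI guidelines (`persName`, `placeName`), but the predicate names and sample text are illustrative placeholders.

```python
# Minimal sketch: extract annotated entities from TEI and emit them as triples.
import xml.etree.ElementTree as ET

TEI_NS = "{http://www.tei-c.org/ns/1.0}"

SAMPLE = """<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <text><body><p>
    <persName ref="#petrarch">Petrarch</persName> wrote from
    <placeName ref="#avignon">Avignon</placeName>.
  </p></body></text>
</TEI>"""

def extract_triples(tei_xml):
    """Return (subject, predicate, object) triples for annotated entities."""
    root = ET.fromstring(tei_xml)
    triples = []
    # Map TEI element names to (placeholder) ontology classes.
    for tag, rdf_class in ((f"{TEI_NS}persName", "foaf:Person"),
                           (f"{TEI_NS}placeName", "geo:SpatialThing")):
        for el in root.iter(tag):
            subject = el.get("ref", "").lstrip("#") or el.text
            triples.append((subject, "rdf:type", rdf_class))
            triples.append((subject, "rdfs:label", el.text.strip()))
    return triples

triples = extract_triples(SAMPLE)
for t in triples:
    print(t)
```

In a real pipeline the triples would be serialized with an RDF library and the classes drawn from the ontologies the project reuses; the point here is only the TEI-annotation-to-graph mapping.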
Category: Linked open data
Introduction: Developed in the context of the EU H2020 Polifonia project, the investigation explores the potential of SPARQL Anything to extract musical features, at both the metadata and symbolic levels, from MusicXML files. The paper describes the procedure that was applied, starting with an overview of the application of ontologies to music and of the so-called ‘façade-based’ approach to knowledge graphs, which is at the core of the SPARQL Anything software. It then illustrates the steps involved (i.e., melody extraction, N-gram extraction, N-gram analysis and exploitation of the Music Notation Ontology). Finally, it offers some considerations on the results of the experiment in terms of query performance. In conclusion, the authors highlight how further studies in the field may cast an increasingly bright light on the application of semantic-oriented methods and techniques to computational musicology.
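The first two steps of that pipeline can be illustrated procedurally. SPARQL Anything does this declaratively over a façade of the XML; the sketch below shows the same idea with the standard library: pull an ordered pitch sequence (a "melody") out of MusicXML, then count pitch N-grams. The sample score is invented for the example.

```python
# Illustrative sketch: melody extraction and N-gram counting over MusicXML.
import xml.etree.ElementTree as ET
from collections import Counter

SAMPLE = """<score-partwise><part id="P1"><measure number="1">
  <note><pitch><step>C</step><octave>4</octave></pitch></note>
  <note><pitch><step>D</step><octave>4</octave></pitch></note>
  <note><pitch><step>E</step><octave>4</octave></pitch></note>
  <note><pitch><step>C</step><octave>4</octave></pitch></note>
  <note><pitch><step>D</step><octave>4</octave></pitch></note>
  <note><pitch><step>E</step><octave>4</octave></pitch></note>
</measure></part></score-partwise>"""

def melody(musicxml):
    """Extract the ordered pitch labels (step + octave) from a MusicXML string."""
    root = ET.fromstring(musicxml)
    return [p.findtext("step") + p.findtext("octave")
            for p in root.iter("pitch")]

def ngrams(seq, n):
    """All length-n windows over the melody, as tuples."""
    return [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]

pitches = melody(SAMPLE)              # ['C4', 'D4', 'E4', 'C4', 'D4', 'E4']
counts = Counter(ngrams(pitches, 3))
print(counts.most_common(1))          # the most frequent 3-gram
```

In the paper's setup the equivalent extraction is written as a SPARQL query against the MusicXML façade, and the resulting N-grams are modelled with the Music Notation Ontology rather than plain tuples.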
In this post, we reach back in time to showcase an older project and highlight its impact on data visualization in Digital Humanities as well as its good practices to make different layers of scholarship available for increased transparency and reusability.
Developed at Stanford with other research partners (‘Cultures of Knowledge’ at Oxford, the Groupe d’Alembert at CNRS, the KKCC ‘Circulation of Knowledge and Learned Practices in the 17th-century Dutch Republic’ project, and the DensityDesign Research Lab), the ‘Mapping the Republic of Letters’ project aimed at digitizing and visualizing the intellectual community of the sixteenth to eighteenth centuries known as the ‘Republic of Letters’ (an overview of the concept can be found in Bots and Waquet, 1997), in order to get a better sense of its shape, size and associated intellectual network, with its inherent complexities and boundaries.
Below we highlight the different, interrelated layers of making project outputs available and reusable over the long term (well before FAIR data became a widespread policy imperative!): methodological reflections, interactive visualizations, the associated data and its data model schema. All of these layers are published in a trusted repository and interlinked with each other via their Persistent Identifiers.
In the next episode, we look behind the scenes of two ontologies, NeMO and the Scholarly Ontology (SO), with Panos Constantopoulos and Vayianos Pertsas, who tell us the story behind these ontologies and explain how they can be used to ease or upcycle your daily work as a researcher. We discuss the value of knowledge graphs, how NeMO and SO connect with the emerging DH ontology landscape and beyond, why Open Access is a precondition for populating them, the Greek DH landscape… and much more!
Introduction: The DraCor ecosystem encourages various approaches to browsing and consulting the data collected in the corpora, such as those detailed in the Tools section: the Shiny DraCor app (https://shiny.dracor.org/), along with the SPARQL query and Easy Linavis interfaces (https://dracor.org/sparql and https://ezlinavis.dracor.org/ respectively). The project thus aims at creating a suitable digital environment for the development of an innovative way of approaching literary corpora, potentially open to collaborations and interactions with other initiatives thanks to its ontology- and Linked Open Data-based nature.
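The network-analysis side of this ecosystem (the Linavis interfaces) rests on a simple idea that can be sketched in a few lines: characters who appear in the same scene are linked, and a degree count hints at their structural importance. The scene data below is invented for illustration; the real tools derive it from the TEI-encoded plays in the corpora.

```python
# Toy character co-occurrence network: same scene => an edge between speakers.
from itertools import combinations
from collections import defaultdict

# Hypothetical scene-by-scene speaker lists.
scenes = [
    {"Hamlet", "Horatio"},
    {"Hamlet", "Gertrude", "Claudius"},
    {"Horatio", "Hamlet"},
]

# Build the undirected edge set (duplicates across scenes collapse).
edges = set()
for scene in scenes:
    for a, b in combinations(sorted(scene), 2):
        edges.add((a, b))

# Degree = number of distinct co-occurrence partners.
degree = defaultdict(int)
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

print(sorted(degree.items(), key=lambda kv: -kv[1]))
```

Weighted edges (how often a pair co-occurs) and standard centrality measures are the natural next steps, which is what the dedicated interfaces provide out of the box.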
OpenMethods Spotlights showcase the people and epistemic reflections behind Digital Humanities tools and methods. Here you can find brief interviews with the creator(s) of the blogs or tools that are highlighted on OpenMethods, to humanize and contextualize them. In the first episode, Alíz Horváth talks with Hilde de Weerdt at Leiden University about MARKUS, a tool that offers a variety of functionalities for the markup, analysis, export, linking, and visualization of texts in multiple languages, with a special focus on Chinese and now Korean as well.
East Asian studies are still largely underrepresented in digital humanities. Part of the reason for this is the relative lack of tools and methods that can be used smoothly with non-Latin scripts. MARKUS, developed by Brent Ho within the framework of the Communication and Empire: Chinese Empires in Comparative Perspective project led by Hilde de Weerdt at Leiden University, is a comprehensive tool that helps mitigate this issue. Selected as a runner-up in the category “Best tool or suite of tools” in the DH2016 awards, MARKUS offers a variety of functionalities for the markup, analysis, export, linking, and visualization of texts in multiple languages, with a special focus on Chinese and now Korean as well.
Introduction: Standardized metadata, meaningfully linked using semantic web technologies, are a prerequisite for cross-disciplinary Digital Humanities research as well as for FAIR data management. In this article from the Open Access journal o-bib, members of the project „GND for Cultural Data“ (GND4C) describe how the Gemeinsame Normdatei (GND; engl. Integrated Authority File), a widely accepted vocabulary for description and information retrieval in the library world, is maintained by the German National Library, and how it supports semantic interoperability and reuse of data. The article also explores how the GND can be utilized and advanced collaboratively, integrating the perspectives of its multidisciplinary stakeholders, including the Digital Humanities. For background reading, the training resources „Controlled Vocabularies and SKOS“ (https://campus.dariah.eu/resource/controlled-vocabularies-and-skos) and „Formal Ontologies“ (https://campus.dariah.eu/resource/formal-ontologies-a-complete-novice-s-guide) are of interest.
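What an authority file like the GND buys you in practice can be shown with a toy SKOS-style vocabulary: each concept has one preferred label, any number of alternative labels, and broader links, so records tagged with different spellings resolve to a single identifier. The concept URIs and labels below are invented, not real GND records.

```python
# Toy SKOS-style controlled vocabulary: labels resolve to stable concept URIs.
vocabulary = {
    "ex:c1": {
        "prefLabel": "Graphic novel",
        "altLabel": ["Graphic novels", "Comic novel"],
        "broader": ["ex:c2"],
    },
    "ex:c2": {
        "prefLabel": "Comics",
        "altLabel": ["Comic strip"],
        "broader": [],
    },
}

def resolve(label):
    """Map any known label (preferred or alternative) to its concept URI."""
    for uri, concept in vocabulary.items():
        if label == concept["prefLabel"] or label in concept["altLabel"]:
            return uri
    return None

# Two differently-labelled records land on the same concept:
print(resolve("Graphic novel"), resolve("Comic novel"))
```

In RDF terms these are `skos:prefLabel`, `skos:altLabel` and `skos:broader` properties on `skos:Concept` resources; the SKOS training resource linked above covers the full model.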
Introduction: Linked Data and Linked Open Data are attracting increasing interest and application in many fields. A recent experiment, conducted in 2018 at Furman University, illustrates and discusses some of the pedagogical challenges posed by Linked Open Data applied to research in the historical domain.
“Linked Open Data to navigate the Past: using Peripleo in class” by Chiara Palladino describes the use of the Peripleo search engine to reconstruct the past of four archaeologically relevant cities. Many databases, comprising various types of information, were consulted, and the results, as highlighted in Palladino’s contribution, show both the advantages and the limitations of a Linked Open Data-oriented approach to historical investigation.
Introduction: The FAIR Data Principles (Findable, Accessible, Interoperable, Reusable) aim to make clear the need to improve the infrastructure for the reuse of scholarly data. The FAIR Data Principles emphasize the ability of machines to automatically find and use the data, in addition to supporting its reuse by individuals — key activities for Digital Humanities research. The post below summarizes how Europeana’s principles (Usable, Mutual, Reliable) align with the FAIR Data Principles, enhancing the findability, accessibility, interoperability, and reuse of digitised cultural heritage.