Introduction: This post introduces two tools developed by the Max Planck Institute for the History of Science, LoGaRT and RISE, with a focus on Asia and Eurasia. […] The concept of LoGaRT – treating local gazetteers as “databases” in their own right – is an innovative and pertinent way to articulate the essence of the platform: it enables multi-level analysis, from close reading of the sources (using, for example, the carousel mode) to a large-scale, “bird’s eye view” of the materials across geographical and temporal boundaries. Local gazetteers are predominantly textual sources, and this characteristic of the collection is reflected in LoGaRT’s capabilities: search (using Chinese characters), data collection and analysis, tagging, and dataset comparison. That said, LoGaRT also offers integrated visualization tools and extends its collection and tagging features to the images used in a number of gazetteers. The opportunity to smoothly intertwine these visual and textual collections with Chinese historical maps (see CHMap) is an added, and much welcome, advantage of the tool, which helps to develop sophisticated and multifaceted analyses.
[Click ‘Read more’ for the full post!]
Category: Collaboration
Collaboration covers any research activity carried out jointly by several researchers, possibly in different places and at different times. Research-oriented collaboration is enabled, in particular, through comprehensive Digital Research Environments, but it can also happen around more specific activities, such as communication or the sharing of resources.
Introduction: In this resource, Caterina Agostini, PhD in Italian from Rutgers University and Project Manager at The Center for Digital Humanities at Princeton, shares two handouts from workshops she organized and co-taught on the International Image Interoperability Framework (IIIF). They provide a gentle introduction to IIIF and a clear overview of its features (displaying, editing, annotating, sharing, and comparing images according to universal standards), along with examples and resources. The handouts should interest anyone involved in the design and teaching of Open Educational Resources on IIIF (a short, illustrative code sketch follows this entry).
[Click ‘Read more’ for the full post!]
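For readers who want a hands-on feel for the “universal standards” the handouts describe, here is a minimal Python sketch of reading a IIIF Presentation (2.x) manifest. It is an illustrative assumption, not material from the workshops: the manifest URL is a placeholder, and the snippet only depends on the publicly documented manifest structure (sequences containing canvases).

# Minimal sketch: reading a IIIF Presentation 2.x manifest with Python.
# The manifest URL is a placeholder – substitute any institution's published
# IIIF manifest. Requires the `requests` package.
import requests

MANIFEST_URL = "https://example.org/iiif/manuscript-123/manifest.json"  # hypothetical

manifest = requests.get(MANIFEST_URL, timeout=30).json()

# Presentation API 2.x manifests nest canvases inside sequences; each canvas
# describes one view (a page, a folio, etc.) of the digitized object.
for sequence in manifest.get("sequences", []):
    for canvas in sequence.get("canvases", []):
        print(canvas.get("label"), canvas.get("@id"))

Because IIIF-compliant repositories publish manifests in this same shape, the identical snippet can list the views of objects held by entirely different institutions, which is precisely what makes cross-collection display, comparison, and annotation possible.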
In the next episode, we look behind the scenes of two ontologies, NeMO and the Scholarly Ontology (SO), with Panos Constantopoulos and Vayianos Pertsas, who tell us the story behind these ontologies and explain how they can be used to ease or upcycle your daily work as a researcher. We discuss the value of knowledge graphs, how NeMO and SO connect with the emerging DH ontology landscape and beyond, why Open Access is a precondition for populating them, the Greek DH landscape …and much more!
Introduction: The DraCor ecosystem encourages various approaches to browsing and consulting the data collected in the corpora, like those detailed in the Tools section: the Shiny DraCor app (https://shiny.dracor.org/), along with the SPARQL query and Easy Linavis interfaces (https://dracor.org/sparql and https://ezlinavis.dracor.org/, respectively); a short query sketch follows this entry. The project thus aims at creating a suitable digital environment for developing an innovative approach to literary corpora, one that is potentially open to collaborations and interactions with other initiatives thanks to its ontology- and Linked Open Data-based nature.
[Click ‘Read more’ for the full post!]
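As an illustration of the Linked Open Data side of the ecosystem, the following Python sketch sends a generic exploratory query to the SPARQL endpoint mentioned above, using the standard SPARQL 1.1 protocol. The query is deliberately schema-agnostic, since the post does not describe DraCor’s graph vocabulary, and the endpoint’s exact configuration may differ from what is assumed here.

# Minimal sketch: querying the DraCor SPARQL endpoint named in the post.
# The triple pattern is generic on purpose; swap in DraCor-specific terms
# once you know the vocabulary. Requires the `requests` package.
import requests

ENDPOINT = "https://dracor.org/sparql"
QUERY = """
SELECT ?s ?p ?o
WHERE { ?s ?p ?o }
LIMIT 10
"""

response = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)

# Standard SPARQL JSON results: one binding per solution row.
for binding in response.json()["results"]["bindings"]:
    print(binding["s"]["value"], binding["p"]["value"], binding["o"]["value"])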
Introduction: One of the major challenges of digital data workflows in the Arts and Humanities is that resources that belong together – in extreme cases, like this particular one, even parts of dismembered manuscripts – are hosted and embedded in different geographical and institutional silos. Combining IIIF with a MySQL database, Fragmentarium provides a user-friendly yet standardized, open workspace for the virtual reconstruction of medieval manuscript fragments. Lisa Fagin Davis’s blog post gives contextualized insights into the potential of Fragmentarium and how, as she writes, “technology has caught up with our dreams”.
Introduction: What are the essential data literacy skills in the (Digital) Humanities? How can good data management practices be translated to the humanities disciplines, and how can more and more humanists be engaged in such conversations? Ulrike Wuttke’s reflections on the “Vermittlung von Data Literacy in den Geisteswissenschaften” (Teaching Data Literacy in the Humanities) barcamp at the DHd 2020 conference not only make us heartily nostalgic for scholarly meetings happening face to face, but also give in-depth and contextualized insights into the questions above. The post comes with rich documentation (including links to the barcamp’s metapad, tweets, photos, and follow-up posts) and also serves as a guide for organizers of future barcamps.
Introduction: Sustainability questions, such as how to maintain digital project outputs after the funding period, or how to keep the aging code and infrastructure our research depends on up to date, are among the major challenges DH projects face today. This post gives us a sneak peek into the solutions and working practices of the Center for Digital Humanities at Princeton. In their approach to building capacity for sustaining DH projects and preserving access to data and software, they view projects as collaborative, process-based scholarship. Their focus is therefore on implementing project management workflows and documentation tools that can be applied flexibly to projects of different scopes and sizes and that also allow for further refinement in due course. By sharing these resources together with their real-life use cases in DH projects, they aim to benefit other scholarly communities and to sustain a broader conversation about these tricky issues.
Introduction: This article assesses the issue of personalisation in internet research, raising important questions about how we should interpret users’ choices and how to account for the potential influence of platform design on the research workflow.
Introduction: This article proposes establishing a collaboration between FactMiners and the Transkribus project that would help the Transkribus team evolve the “sustainable virtuous” ecosystem they describe as a Transcription & Recognition Platform – a Social Machine for Job Creation & Skill Development in the 21st Century!
Introduction: This is a well-structured account of a seminar session on data management held in Munich. It introduces many of the topics that humanists have to deal with during the research process.