FactGrid is both a database and a wiki. The project is operated by the Gotha Research Centre and the data lab of the University of Erfurt. It uses MediaWiki and Wikidata’s “Wikibase” extension to collect data from historical research. With FactGrid you can build a knowledge graph, expressing information as triple statements, and query that graph with SPARQL. All data provided by FactGrid is released under a CC0 license.
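As a minimal sketch of what such a SPARQL query could look like when issued from Python with the requests library: the endpoint URL below is an assumption to be checked against FactGrid’s documentation, and the query simply lists a few items with their English labels.

```python
# Minimal sketch: query a Wikibase SPARQL endpoint from Python.
# The endpoint URL is an assumption; adapt the query to FactGrid's data model.
import requests

ENDPOINT = "https://database.factgrid.de/sparql"  # assumed endpoint URL

query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?item ?label WHERE {
  ?item rdfs:label ?label .
  FILTER(LANG(?label) = "en")
}
LIMIT 5
"""

response = requests.get(
    ENDPOINT,
    params={"query": query, "format": "json"},
    headers={"User-Agent": "openmethods-example/0.1"},
)
response.raise_for_status()

# Standard SPARQL JSON results: one binding per result row
for row in response.json()["results"]["bindings"]:
    print(row["item"]["value"], "-", row["label"]["value"])
```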
TEI editions are among the tools most widely used by scholarly editors to produce digital editions in various literary fields. LIFT is a Python-based tool that makes it possible to programmatically extract information from digital texts annotated in TEI, modelling the annotated persons, places, events, and relations as a knowledge graph that reuses ontologies and controlled vocabularies from the Digital Humanities domain.
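The following is a generic sketch of this TEI-to-knowledge-graph idea, not LIFT’s actual interface: it reads <persName> annotations from a TEI file with lxml and models each person as a node in an RDF graph with rdflib; the ontology and base URI choices are placeholders.

```python
# Generic sketch (not LIFT's own API): extract <persName> elements from a TEI
# file and model them as RDF triples.
from lxml import etree
from rdflib import Graph, Literal, Namespace, RDF, RDFS

TEI_NS = {"tei": "http://www.tei-c.org/ns/1.0"}
CRM = Namespace("http://www.cidoc-crm.org/cidoc-crm/")  # example ontology choice
EX = Namespace("https://example.org/edition/")          # placeholder base URI

def tei_persons_to_graph(tei_path: str) -> Graph:
    tree = etree.parse(tei_path)
    graph = Graph()
    for i, pers in enumerate(tree.findall(".//tei:persName", TEI_NS)):
        # use the @key attribute as an identifier if present, else a counter
        person = EX[f"person/{pers.get('key', i)}"]
        graph.add((person, RDF.type, CRM.E21_Person))
        graph.add((person, RDFS.label, Literal("".join(pers.itertext()).strip())))
    return graph

# g = tei_persons_to_graph("edition.xml")
# print(g.serialize(format="turtle"))
```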
Introduction: OpenRefine, the freely accessible successor to Google Refine, is an ideal tool for cleaning up data series and thus obtaining more sustainable results. Entries can be browsed in alphabetical order or sorted by frequency, so that typing errors or slightly diverging variants can easily be found and adjusted. With the help of the software, for example, I discovered two such discrepancies in my Augustinian Correspondence Database, which I can now correct with one click in the program. It showed me that I had noted “As a reference to Jerome’s letter it’s not counted” five times and “As a reference to Jerome’s letter, it’s not counted” three times. Consequently, if I searched the database for this expression, I would not see all the results. A second discrepancy was between the entry “continuing reference (marked by Nam)” and the entry “continuing reference (marked by nam)”. Thanks to OpenRefine, such inconsistencies can be avoided in the future.
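OpenRefine also offers key-collision clustering (for example, fingerprint keying) to surface such variants automatically. Here is a plain-Python illustration of that idea, not OpenRefine’s own code, showing how the two discrepancies mentioned above collapse onto the same key:

```python
# Plain-Python illustration of key-collision ("fingerprint") clustering:
# values that normalise to the same key are likely variants of one another.
import re
from collections import defaultdict

def fingerprint(value: str) -> str:
    # lowercase, strip punctuation, then sort the unique tokens
    tokens = re.sub(r"[^\w\s]", "", value.lower()).split()
    return " ".join(sorted(set(tokens)))

entries = [
    "As a reference to Jerome's letter it's not counted",
    "As a reference to Jerome's letter, it's not counted",
    "continuing reference (marked by Nam)",
    "continuing reference (marked by nam)",
]

clusters = defaultdict(list)
for entry in entries:
    clusters[fingerprint(entry)].append(entry)

for variants in clusters.values():
    if len(set(variants)) > 1:
        print("Possible variants of the same value:", set(variants))
```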
The tutorial by Miriam Posner is a useful introduction for getting started with the software. However, the first step, the installation, is already out of date: while version 3.1 was still the latest when the tutorial was published, the current release is 3.5.2. Under Windows, you can now choose between a version that requires Java and a version with embedded OpenJDK Java, which I found very pleasing.
If needed, there are links at the end of the tutorial to other introductions that go into more depth.
Introduction: Thibault Clérice reports on the success of automatically detecting word boundaries in scripta continua (typically late classical and early medieval Latin). This will not be easy reading for many a philologist and classicist, but it is well worth trying to bridge the gap. In addition to explaining and evaluating his approach, Thibault Clérice also releases Boudams, the software used for his research.
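To make the task concrete, here is a toy illustration (this is not Boudams itself) of how word segmentation can be framed as per-character boundary labelling, turning a segmented training example into an unsegmented string plus labels:

```python
# Toy framing of word segmentation: label each character of the unsegmented
# string with 1 if a word boundary follows it, else 0.
def boundary_labels(segmented: str):
    chars, labels = [], []
    tokens = segmented.split()
    for i, tok in enumerate(tokens):
        for j, ch in enumerate(tok):
            chars.append(ch)
            labels.append(1 if j == len(tok) - 1 and i < len(tokens) - 1 else 0)
    return "".join(chars), labels

text, y = boundary_labels("in principio erat verbum")
print(text)  # "inprincipioeratverbum"
print(y)     # 1 marks a character that is followed by a word boundary
```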
Introduction: Given in French by Mathieu Jacomy, who is also known for his work on Gephi, this seminar presentation gives a substantial introduction to Hyphe, an open-source web crawler designed by a team at the Sciences Po Medialab in Paris. Devised specifically for researchers, Hyphe helps them collect and curate a corpus of web pages through an easy-to-handle interface.
OpenMethods introduction to: Prensa digitalizada: herramientas y métodos digitales para una investigación a escala (“Digitised press: digital tools and methods for research at scale”), Gimena Del Rio, 2019-06-28. OpenMethods post: https://openmethods.dariah.eu/2019/06/28/prensa-digitalizada-herramientas-y-metodos-digitales-para-una-investigacion-a-escala/ Original article: http://revistas.uned.es/index.php/RHD/article/view/22527
Introduction: The Research Software Directory of the Netherlands eScience Center provides easy access to software, its source code and its documentation. More importantly, it makes it easy to cite software, which is highly advisable when software is used to derive research results. The Research Software Directory positions itself as a platform that eases scientific referencing and the reproducibility of software-based research, a good peer practice that is still underdeveloped in the humanities.
Introduction: With Web archives becoming an increasingly important resource for (humanities) researchers, it also becomes paramount to investigate and understand how such archives are built and how to make the processes involved transparent. Emily Maemura, Nicholas Worby, Ian Milligan, and Christoph Becker compare three use cases and suggest a framework for documenting Web archive provenance.
Introduction: This blog post describes how the National Library of Wales makes use of Wikidata to enrich its collections. It especially showcases new features for visualizing items on a map, including a clustering service and support for polygons and multipolygons. It also shows how polygons, such as the footprints of buildings, can be imported from OpenStreetMap into Wikidata, a great example of reusing existing information.
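As a hedged, generic illustration (not the library’s own queries), a Wikidata Query Service request can return coordinate locations (P625) for items in a given collection and be rendered with the built-in map view; the collection QID below is a placeholder to replace.

```python
# Generic sketch: build a link to the Wikidata Query Service with a query that
# the service can render as a map (#defaultView:Map). The QID is a placeholder.
from urllib.parse import quote

query = """
#defaultView:Map
SELECT ?item ?itemLabel ?coord WHERE {
  ?item wdt:P195 wd:Q0000000 ;   # collection (placeholder QID, replace)
        wdt:P625 ?coord .        # coordinate location
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en,cy". }
}
LIMIT 500
"""

print("https://query.wikidata.org/#" + quote(query))
```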
Introduction: This blog post not only presents a technique for measuring poetic meter and using it to plot distances between poets, but also provides insight into the theoretical and empirical process leading to those results.
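As a generic sketch of the kind of comparison involved, not the post’s actual pipeline: each poet can be represented as a frequency distribution over the stress patterns of their lines, and poets can then be compared by a distance between those distributions.

```python
# Generic sketch: compare poets by cosine distance between their distributions
# of line-level stress patterns (e.g. "0101010101" for iambic pentameter).
from collections import Counter
from math import sqrt

def meter_profile(scanned_lines):
    counts = Counter(scanned_lines)
    total = sum(counts.values())
    return {pattern: n / total for pattern, n in counts.items()}

def cosine_distance(p, q):
    keys = set(p) | set(q)
    dot = sum(p.get(k, 0) * q.get(k, 0) for k in keys)
    norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return 1 - dot / norm if norm else 1.0

poet_a = meter_profile(["0101010101", "0101010101", "0101010100"])
poet_b = meter_profile(["1010101010", "0101010101"])
print(cosine_distance(poet_a, poet_b))
```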