If These Crawls Could Talk: Studying and Documenting Web Archives Provenance

Introduction: With Web archives becoming an increasingly important resource for (humanities) researchers, it is paramount to investigate and understand how such archives are built and how to make the processes involved transparent. Emily Maemura, Nicholas Worby, Ian Milligan, and Christoph Becker report on a comparison of three use cases and suggest a framework for documenting Web archive provenance.

Old Periodicals, a New Datatype and Spiderfied Query Results in Wikidata

Introduction: This blog post describes how the National Library of Wales makes use of Wikidata to enrich its collections. It showcases new features for visualizing items on a map, including a clustering service and support for polygons and multipolygons. It also shows how polygons such as building outlines can be imported from OpenStreetMap into Wikidata, a great example of re-using existing information.
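
To give a concrete flavour of the kind of map-based query the post discusses, here is a minimal sketch (not taken from the blog post) that asks the Wikidata Query Service for periodicals whose place of publication has coordinates. The class and property choices (periodical, place of publication, coordinate location) are illustrative and may not match the National Library of Wales' actual data modelling. In the Query Service web interface, the `#defaultView:Map` comment renders the results on the clustered map view described in the post.

```python
import requests

# Illustrative SPARQL: periodicals (Q1002697) with a geocoded place of
# publication (P291 -> P625). In the query.wikidata.org UI, the
# #defaultView:Map comment switches the result display to the map view.
SPARQL = """
#defaultView:Map
SELECT ?item ?itemLabel ?coord WHERE {
  ?item wdt:P31 wd:Q1002697 ;     # instance of: periodical
        wdt:P291 ?place .         # place of publication
  ?place wdt:P625 ?coord .        # coordinate location of that place
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 100
"""

resp = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": SPARQL, "format": "json"},
    headers={"User-Agent": "example-map-query/0.1"},  # polite identification
)
for row in resp.json()["results"]["bindings"]:
    print(row["itemLabel"]["value"], row["coord"]["value"])
```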

Attributing Authorship in the Noisy Digitized Correspondence of Jacob and Wilhelm Grimm | Digital Humanities

Introduction: Apart from its buoyant conclusion that authorship attribution methods are rather robust to the noise (transcription errors) introduced by optical character recognition and handwritten text recognition, this article also offers a comprehensive read on applying sophisticated computational techniques for testing and validation in a data curation process.
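
For readers who want to picture what "robust to noise" means in practice, the following hypothetical sketch (not the authors' pipeline) trains a simple character-n-gram classifier on a toy set of letters and then attributes a transcription into which artificial OCR-style errors have been injected. The corpus, noise model, and classifier are stand-ins for illustration only.

```python
import random

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: (letter text, author) pairs; real experiments would use
# the full digitized correspondence.
letters = [
    ("Lieber Bruder, ich sende dir die Abschriften der Maerchen.", "Jacob"),
    ("Die Arbeit am Woerterbuch schreitet nur langsam voran.", "Jacob"),
    ("Ich habe gestern die Kupferstiche fuer die Ausgabe erhalten.", "Wilhelm"),
    ("Die neue Ausgabe der Sagen wird bald erscheinen koennen.", "Wilhelm"),
]
texts, authors = zip(*letters)

# Character 3-5-grams are a common, relatively noise-tolerant feature choice.
model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(3, 5)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, authors)

def add_noise(text, rate=0.05):
    """Crudely simulate OCR/HTR errors by random character substitution."""
    chars = list(text)
    for i in range(len(chars)):
        if random.random() < rate:
            chars[i] = random.choice("abcdefghijklmnopqrstuvwxyz ")
    return "".join(chars)

# Attribute a noisy transcription and compare with the clean version.
noisy = add_noise("Lieber Bruder, die Maerchen sind nun abgeschrieben.")
print(model.predict([noisy]))
```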

Creating Web APIs with Python and Flask | Programming Historian

Introduction: This very complete tutorial by Patrick Smyth shows digital humanists, and anyone interested in applying digital technologies to their projects, how to make data more accessible to users through APIs (Application Programming Interfaces). After explaining the basics of APIs and databases, the tutorial builds an example API and puts it into practice. The API is written in Python 3 using the Flask web framework.
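
As a taste of what such an API looks like, here is a minimal Flask sketch in the same spirit as the tutorial; the route names and sample data are illustrative rather than copied from it.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Tiny in-memory stand-in dataset; the tutorial works with a catalogue
# backed by a database.
books = [
    {"id": 1, "title": "A Fire Upon the Deep", "author": "Vernor Vinge"},
    {"id": 2, "title": "The Left Hand of Darkness", "author": "Ursula K. Le Guin"},
]

@app.route("/api/v1/books/all", methods=["GET"])
def all_books():
    # Return the whole catalogue as JSON.
    return jsonify(books)

@app.route("/api/v1/books", methods=["GET"])
def book_by_id():
    # Filter by the ?id= query parameter, e.g. /api/v1/books?id=1
    book_id = request.args.get("id", type=int)
    if book_id is None:
        return jsonify({"error": "No id provided."}), 400
    results = [b for b in books if b["id"] == book_id]
    return jsonify(results)

if __name__ == "__main__":
    app.run(debug=True)
```

Running the script starts a local development server; requesting http://127.0.0.1:5000/api/v1/books?id=1 then returns the matching record as JSON.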