Introduction: What are the essential data literacy skills in the (Digital) Humanities? How can good data management practices be translated to humanities disciplines, and how can more and more humanists be engaged in such conversations? Ulrike Wuttke’s reflections on the “Vermittlung von Data Literacy in den Geisteswissenschaften“ (“Teaching Data Literacy in the Humanities”) barcamp at the DHd 2020 conference not only make us heartily nostalgic about scholarly meetings happening face to face, but also give in-depth and contextualized insights into the questions above. The post comes with rich documentation (including links to the barcamp’s metapad, tweets, photos, and follow-up posts) and also serves as a guide for future barcamp organizers.
Introduction: In this blog post, Michael Schonhardt explores and evaluates a range of freely available, open source tools – Inkscape, Blender, Stellarium, SketchUp – that enable the digital 3D modelling of medieval scholarly objects. These diverse tools offer easily implementable solutions for both the analysis and the communication of results in object-related cultural studies, and are especially suitable for projects with small budgets.
Introduction: How to sustain digital project outputs after their funding period is a recurrent topic on OpenMethods. In this post, Arianna Ciula introduces King’s Digital Lab’s solution, a workflow built around their CKAN (Comprehensive Knowledge Archive Network) instance, and uncovers the many questions involved not only in maintaining a variety of legacy resources from long-running projects, but also in opening them up for data reuse, verification, and integration beyond siloed resources.
Introduction: As online teaching has become the default globally, the thoughtful use of online technologies plays an even more critical role in our everyday lives. In this post, Christopher Nunn guides you through publishing your lectures as MP3 podcasts with the help of the open source tool Audacity. The tutorial was originally published as a guest post on Mareike Schuhmacher’s blog, Lebe lieber literarisch.
Introduction: The RIDE journal (the Review Journal of the Institute for Documentology and Scholarly Editing) aims to offer a solution to current misalignments between scholarly workflows and their evaluation, and provides a forum for the critical evaluation of the methodology of digital edition projects. This time, we have been cherry-picking from their latest issue (Issue 11), dedicated to the evaluation and critical improvement of tools and environments.
Ediarum is a toolbox developed for editors by the TELOTA initiative at the BBAW in Berlin to generate and annotate TEI-XML data in German. In his review, Andreas Mertgens touches upon issues regarding methodology and implementation: use cases, deployment and learning curve, open-source status, sustainability and extensibility of the tool, user interaction and the GUI, and of course he offers a rich functional overview.
Introduction: In this post, you can find a thoughtful and encouraging selection and description of reading, writing, and organizing tools. It guides you through a whole discovery–management–writing–publishing workflow: from the creation of annotated bibliographies in Zotero, through a useful Markdown syntax cheat sheet, to versioning, storage, and backup strategies, and shows how everybody’s research can profit from open digital methods, even without sophisticated technological skills. What I particularly like in Tomislav Medak’s approach is that all these tools, practices, and tricks are filtered through and tested against his own everyday scholarly routine. It would make perfect sense to create a visualization from this inventory in a similar fashion to these workflows.
Introduction: This white paper is an outcome of a DH2019 workshop dedicated to fostering closer collaboration between technology-oriented DH researchers and developers of tools that support Digital Humanities research. The paper briefly outlines the most pressing issues in their collaboration and addresses topics such as good practices to ease mutual understanding between scholars and developers; software development, academic careers, and recognition; and sustainability and funding.
Introduction: Sustainability questions – such as how to maintain digital project outputs after the funding period, or how to keep the aging code and infrastructure that our research depends on up to date – are among the major challenges DH projects face today. This post gives us a sneak peek into the solutions and working practices of the Center for Digital Humanities at Princeton. In their approach to building capacity for sustaining DH projects and preserving access to data and software, they view projects as collaborative and process-based scholarship. Their focus is therefore on implementing project management workflows and documentation tools that can be flexibly applied to projects of different scopes and sizes, and that also allow for further refinement in due course. By sharing these resources together with their real-life use cases in DH projects, they aim to benefit other scholarly communities and sustain a broader conversation about these tricky issues.
Introduction: Standards are best explained through real-life use cases. The PARTHENOS Standardization Survival Kit (SSK) is a collection of research use case scenarios illustrating best practices in Digital Humanities and Heritage research. It is designed to support researchers in selecting and using the standards appropriate to their particular disciplines and workflows. The latest addition to the SSK is a scenario for creating a born-digital dictionary in TEI.
Introduction: The explore! project tests computer simulation and text mining on autobiographical texts, as well as the reusability of the approach in literary studies. To facilitate the application of the proposed method in broader contexts and to new research questions, the text analysis is performed by means of scientific workflows that allow for the documentation, automation, and modularization of the processing steps. By enabling the reuse of proven workflows, the project aims to enhance the efficiency of data analysis in similar projects and to further advance collaboration between computer scientists and digital humanists.