Introduction: This post introduces two tools developed by the Max Planck Institute for the History of Science, LoGaRT and RISE, with a focus on Asia and Eurasia. […] The concept of LoGaRT – treating local gazetteers as “databases” by themselves – is an innovative and pertinent way to articulate the essence of the platform: providing opportunities for multi-level analysis, from close reading of the sources (using, for example, the carousel mode) to a large-scale, “bird’s eye view” of the materials across geographical and temporal boundaries. Local gazetteers are predominantly textual sources, and this characteristic of the collection is reflected in LoGaRT’s capabilities: its key features include data search (using Chinese characters), collection and analysis, tagging, and dataset comparison. That said, LoGaRT also offers integrated visualization tools and extends its collection and tagging features to the images used in a number of gazetteers. The opportunity to smoothly intertwine these visual and textual collections with Chinese historical maps (see CHMap) is an added, and much welcome, advantage of the tool, which helps to develop sophisticated and multifaceted analyses.
Category: Analysis
This general research goal refers to the activity of extracting any kind of information from open or closed, structured or unstructured collections of data, of discovering recurring phenomena, units, elements, patterns, groupings, and the like. This can refer to structural, formal or semantic aspects of data. Analysis also includes methods used to visualize results. Methods and techniques related to this goal may be considered to follow Capture and Enrichment; however, Enrichment depends upon assumptions, research questions and results related to Analysis.
OpenMethods introduction to: Collaborative Digital Projects in the Undergraduate Humanities Classroom: Case Studies with Timeline JS (Marinella Testori, blog post, 2022-05-11): https://openmethods.dariah.eu/2022/05/11/open-source-tool-allows-users-to-create-interactive-timelines-digital-humanities-at-a-state/
Introduction: If you are looking for solutions to translate narratological concepts into annotation guidelines to tag or mark up your texts for both qualitative and quantitative analysis, then Edward Kearns’s paper “Annotation Guidelines for narrative levels, time features, and subjective narration styles in fiction” is for you! The tag set is designed to be used in XML, but it can be flexibly adapted to other working environments too, including, for instance, CATMA. The use of the tags is illustrated on a corpus of modernist fiction.
The guidelines have been published in a special issue of the Journal of Cultural Analytics (vol. 6, issue 4) entirely devoted to the Systematic Analysis of Narrative levels Through Annotation (SANTA) project, which serves as the broader intellectual context for the guidelines. All articles in the special issue are openly peer reviewed, open access, and available in both PDF and XML formats.
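To give a flavour of what such mark-up can look like in practice, here is a minimal, hypothetical sketch parsed with Python’s standard library (the tag and attribute names are invented for illustration and are not Kearns’s actual tag set):

```python
import xml.etree.ElementTree as ET

# Hypothetical tags loosely inspired by the idea of marking narrative
# levels and subjective narration inline; not the SANTA tag set itself.
sample = """<text>
  <level n="1">She remembered the summer
    <level n="2">when the house still stood,
      <subjective narrator="character">or so she believed,</subjective>
    </level>
  still vivid.</level>
</text>"""

root = ET.fromstring(sample)

# A simple quantitative pass over the annotations: count each tag.
for tag in ("level", "subjective"):
    print(tag, sum(1 for _ in root.iter(tag)))
```

Because the tags live in ordinary XML, the same file can feed both close reading (in an editor or in CATMA) and quantitative tallies like the one above.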
Introduction: This blog post by Lucy Havens presents a sentiment analysis of over 2,000 Times music reviews using freely available tools: defoe for building the corpus of reviews, VADER for sentiment analysis, and Jupyter Notebooks to provide rich documentation and to connect the different components of the analysis. The description of the workflow comes with tool and method criticism, including an outlook on how the analysis could be refined to yield richer results.
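For readers unfamiliar with VADER, the scoring step at the heart of such a workflow is compact; a minimal sketch (the review sentence is invented, and the defoe corpus-building step is not shown):

```python
# pip install vaderSentiment
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

# An invented sentence standing in for a Times music review.
review = "The orchestra played with remarkable warmth, though the tempo dragged."

# 'compound' is VADER's normalised overall score in [-1, 1];
# 'neg', 'neu' and 'pos' give the proportion of each sentiment.
print(analyzer.polarity_scores(review))
```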
Visualizando libros difundidos y censurados durante la Guerra Fría: 1956-1971. El caso Alfred Reisch [Visualising books distributed and censored during the Cold War, 1956-1971: the Alfred Reisch case]
Introduction: This article explores the potential of data-driven methods to visualise and interpret the impact of Western efforts to influence Cold War dynamics through a covert book distribution programme. Based on a documentary corpus connected to the 2013 book by Alfred Reisch, which documented CIA efforts to disseminate books in the Soviet Bloc between 1956 and 1971, the authors use the Tableau Public platform to re-assess information science methods for researching historical events. Their analysis suggests that the books distributed tended not to have an overtly political slant, but rather a broader universalist outlook. While the article skirts around some of the limitations of visualization (highlighted elsewhere by Drucker and others), it offers a solid introduction to the benefits of a data-driven approach for a general audience.
Introduction: In this paper, Ehrlicher et al. follow a quantitative approach to unveil possible structural parallelisms between 13 comedies and 10 autos sacramentales written by Calderón de la Barca. The comedies are analyzed within a comparative framework, setting them against the precepts of the Spanish comedia nueva and the French comédie. The authors employ the DramaAnalysis tool and statistics for their examination, focusing on indicators such as word frequency per subgenre, average number of characters, character variation, and discourse distribution; the autos sacramentales are evaluated through the same indicators. Regarding the comedies, the results show that Calderón: a) plays with the units of space and time depending on creative and dramatic needs, b) does not follow the French comédie conventions of character intervention or linkage, but c) does abide by its concept of structural symmetry. As for the autos sacramentales, the findings show that these have a length and character variation similar to the comedies, with one notable difference: in them, Calderón uses character co-presence to reinforce the message conveyed. Considering all this, the authors confirm that Calderón’s comedies depart from classical notions of theatre, both Aristotelian and French. With respect to the autos sacramentales, they believe further evaluation is needed to verify the ideas put forward and to identify other structural patterns.
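As a rough illustration of one of these indicators, a Python sketch of discourse distribution, i.e. each character’s share of the spoken words (toy data; the study itself uses the DramaAnalysis package, an R library with its own corpus loaders):

```python
from collections import Counter

# Toy stand-in for a parsed play: (character, utterance) pairs.
utterances = [
    ("Segismundo", "¡Ay mísero de mí, ay infelice!"),
    ("Rosaura", "Hipógrifo violento que corriste parejas con el viento"),
    ("Segismundo", "Sueña el rico en su riqueza"),
]

words = Counter()
for character, line in utterances:
    words[character] += len(line.split())

total = sum(words.values())
for character, count in words.most_common():
    print(f"{character}: {count} words ({count / total:.0%} of the discourse)")
```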
Introduction: Digital Literary Studies has long engaged with the challenges of representing ambiguity, contradiction and polyvocal readings of literary texts. This book chapter describes a web-based tool called CATMA which promises a “low-threshold” approach to digitally encoded text interpretation. CATMA has a long trajectory based on a ‘standoff’ approach to markup, somewhat provocatively described by its creators as “undogmatic”, which stands in contrast to more established systems for text representation in digital scholarly editing and publishing such as XML markup, or the Text Encoding Initiative (TEI). Standoff markup involves assigning numbers to each character of a text and then using those numbers as identifiers to store interpretation externally. This approach allows for “multiple, over-lapping and even taxonomically contradictory annotations by one or more users” and avoids some of the rigidity which other approaches sometimes imply. An editor working with CATMA is able to create multiple independent annotation cycles, and even to specify which interpretation model was used for each. The tool also allows for an impressive array of analysis and visualization possibilities.
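A minimal sketch of the standoff idea (not CATMA’s actual data model): the text itself is never altered, and annotations live in a separate structure keyed by character offsets, which is what allows overlapping and even contradictory readings to coexist:

```python
text = "Call me Ishmael. Some years ago..."

# Each annotation is stored outside the text as
# (start offset, end offset, tag, annotator). Overlap is unproblematic
# because nothing is ever inserted into the text itself.
annotations = [
    (0, 15, "imperative", "reader_A"),
    (8, 15, "character-name", "reader_B"),
    (0, 34, "first-person-narration", "reader_B"),
]

for start, end, tag, annotator in annotations:
    print(f"{annotator} tagged {text[start:end]!r} as {tag}")
```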
Recent iterations of CATMA have developed approaches which aim to bridge the gap between ‘close’ and ‘distant’ reading by providing scalable digital annotation and interpretation involving “semantic zooming” (which is compared to the kind of experience you get from an interactive map). The latest version also brings greater automation (currently in German only) to grammatical tense capture, temporal signals and part-of-speech annotation, which offer potentially significant effort savings and a wider range of markup review options. Greater attention is also paid to different kinds of interpretation activities through the three CATMA annotation modes of ‘highlight’, ‘comment’ and ‘annotate’, and to overall workflow considerations. The latest version of the tool offers finely grained access options mapping to common editorial roles and workflows.
I would have welcomed greater reflection in the book chapter on sustainability – how an editor can port their work to other digital research environments, for use with other tools. While CATMA does allow for export to other systems (such as TEI), quite how effective this is (how well its interpretation structures bind to other digitally mediated representation systems) is not clear.
What is most impressive about CATMA, and the work of its creator – the forTEXT research group – more generally, is how firmly embedded the thinking behind the tool is in humanities (and in particular literary) scholarship and theory. The group’s long-standing and deeply reflective engagement with the concerns of literary studies is well captured in this well-crafted and highly engaging book chapter.
Introduction: Among the most recent, currently ongoing projects exploiting distant reading techniques is the European Literary Text Collection (ELTeC), one of the main elements of the Distant Reading for European Literary History project (COST Action CA16204, https://www.distant-reading.net/). Thanks to the contribution of four Working Groups (dealing, respectively, with Scholarly Resources; Methods and Tools; Literary Theory and History; and Dissemination: https://www.distant-reading.net/working-groups/), the project aims to provide a collection of at least 2,500 novels written in ten European languages, together with a range of distant reading computational tools and methodological strategies for approaching them from various perspectives (textual, stylistic, topical, et similia). A full description of the objectives of the Action and of ELTeC can be found in the Memorandum of Understanding for the implementation of the COST Action “Distant Reading for European Literary History” (DISTANT-READING) CA16204, available at https://e-services.cost.eu/files/domain_files/CA/Action_CA16204/mou/CA16204-e.pdf
Introduction: NLP models and the tasks they perform are becoming an integral part of our daily realities (everyday or research). A central concern of NLP research is that for many of their users, these models still largely operate as black boxes, with limited insight into why a model makes certain predictions, how its usage is skewed towards certain content types, or what social and cultural biases underlie it. The open-source Language Interpretability Tool (LIT) aims to change this for the better and brings transparency to the visualization and understanding of NLP models. The pre-print describing the tool comes with rich documentation (including case studies of different kinds) and gives us an honest SWOT analysis of the tool.
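For orientation, a sketch of the launch pattern shown in the tool’s documentation; `MyModel` and `MyData` are hypothetical placeholders for wrappers that would need to implement LIT’s model and dataset APIs:

```python
# pip install lit-nlp -- a sketch of the documented startup pattern;
# MyModel and MyData are hypothetical wrappers implementing
# lit_nlp.api.model.Model and lit_nlp.api.dataset.Dataset.
from lit_nlp import dev_server
from lit_nlp import server_flags

from my_project import MyData, MyModel  # hypothetical module

models = {"sentiment-demo": MyModel()}
datasets = {"reviews": MyData()}

# Serves the interactive UI (saliency maps, counterfactual generation,
# embedding projections, etc.) on a local port.
lit_demo = dev_server.Server(models, datasets, **server_flags.get_flags())
lit_demo.serve()
```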
Introduction: Standing for ‘Architecture of Knowledge’, ArCo is an open set of resources developed and managed by Italian institutions: the MiBAC (Ministry for the Italian Cultural Heritage), the ICCD (Institute of the General Catalogue and Documentation) within it, and the CNR (Italian National Research Council). Built through the application of the eXtreme Design (XD) methodology, ArCo consists of an ontology network comprising seven modules (arco, core, catalogue, location, denotative description, context description, and cultural event) and a set of LOD data comprising a huge number of linked entities referring to Italian national cultural resources, properties and events. Under constant refinement, ArCo represents an example of a “robust Semantic Web resource” (Carriero et al., 11) in the field of cultural heritage, alongside projects such as Google Arts & Culture (https://artsandculture.google.com/) or the Smithsonian American Art Museum (https://americanart.si.edu/about/lod).
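To illustrate how such a LOD resource can be consumed, a minimal SPARQL sketch in Python using the SPARQLWrapper client (the endpoint URL is an assumption to verify against ArCo’s current documentation):

```python
# pip install sparqlwrapper
from SPARQLWrapper import SPARQLWrapper, JSON

# Endpoint URL is an assumption -- check ArCo's documentation.
sparql = SPARQLWrapper("https://dati.beniculturali.it/sparql")
sparql.setQuery("""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?entity ?label WHERE {
        ?entity rdfs:label ?label .
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["entity"]["value"], "->", binding["label"]["value"])
```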