Cultural Ontologies: the ArCo Knowledge Graph.

Introduction: Standing for ‘Architecture of Knowledge’, ArCo is an open set of resources developed and managed by Italian institutions: the MiBAC (Ministry for the Italian Cultural Heritage) and, within it, the ICCD (Institute of the General Catalogue and Documentation), together with the CNR (Italian National Research Council). Built through the application of eXtreme Design (XD), ArCo consists of an ontology network comprising seven modules (arco, core, catalogue, location, denotative description, context description, and cultural event) and a set of LOD data comprising a vast number of linked entities referring to Italian national cultural resources, properties and events. Under constant refinement, ArCo represents an example of a “robust Semantic Web resource” (Carriero et al., 11) in the field of cultural heritage, along with other projects such as Google Arts & Culture (https://artsandculture.google.com/) or the Smithsonian American Art Museum (https://americanart.si.edu/about/lod).
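
To give a concrete idea of how the graph can be explored, here is a minimal Python sketch that queries ArCo's LOD for a few cultural properties. The SPARQL endpoint address and the arco module namespace are assumptions drawn from the project's documentation as we recall it, so check the ArCo pages for the current details.

```python
# A minimal sketch of querying the ArCo LOD with SPARQLWrapper.
# Assumption: the public endpoint lives at https://dati.beniculturali.it/sparql
# and the arco module uses the https://w3id.org/arco/ontology/arco/ namespace.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("https://dati.beniculturali.it/sparql")
endpoint.setReturnFormat(JSON)

# List a handful of cultural properties with their labels.
endpoint.setQuery("""
    PREFIX arco: <https://w3id.org/arco/ontology/arco/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?entity ?label WHERE {
        ?entity a arco:CulturalProperty ;
                rdfs:label ?label .
    } LIMIT 10
""")

for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["entity"]["value"], "-", row["label"]["value"])
```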

[Click ‘Read more’ for the full post!]

When history meets technology. impresso: an innovative corpus-oriented perspective.

Historical newspapers, already available in many digitized collections, represent a significant source of information for the reconstruction of events and their backgrounds, enabling historians to cast new light on facts and phenomena, as well as to advance new interpretations. A collaboration between teams in Lausanne, at the University of Zurich, and at C2DH Luxembourg, the ‘impresso – Media Monitoring of the Past’ project aims to offer an advanced corpus-oriented answer to the growing need to access and consult collections of historical digitized newspapers.
[…] Thanks to a suite of computational tools for data extraction, linking and exploration, impresso aims to move beyond the traditional keyword-based approach through the application of advanced techniques, from lexical processing to semantically deepened n-grams, from data modelling to interoperability.
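
impresso's pipeline is of course far more sophisticated, but a toy sketch can make the contrast with plain keyword search tangible: n-grams surface recurring multi-word units that a single-term count misses. The snippet below is purely illustrative and uses an invented sentence.

```python
# Illustrative contrast between keyword counting and n-gram extraction.
from collections import Counter

def ngrams(tokens, n):
    """Yield successive n-token windows over a token list."""
    return zip(*(tokens[i:] for i in range(n)))

text = "the exposition opened in paris and the exposition drew crowds"
tokens = text.split()

# Keyword approach: a single hit count per term.
print(tokens.count("exposition"))      # 2

# N-gram approach: recurring multi-word units surface on their own.
bigrams = Counter(ngrams(tokens, 2))
print(bigrams.most_common(2))          # [(('the', 'exposition'), 2), ...]
```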
[Click ‘Read more’ for the full post!]

Research COVID-19 with AVOBMAT

Introduction: In our guidelines for nominating content, databases are explicitly excluded. However, this database is an exception, not because of the burning issue of COVID-19, but because of the exemplary variety of digital humanities methods with which its data can be processed. AVOBMAT makes it possible to process 51,000 articles with almost every conceivable approach (topic modeling, network analysis, an n-gram viewer, KWIC analyses, gender analyses, lexical diversity metrics, and so on) and is thus much more than a simple database – rather, it is a welcome stage for the Who is Who (or What is What?) of OpenMethods.
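
AVOBMAT runs these analyses in the browser, but one of the listed methods, KWIC (keyword-in-context), is simple enough to sketch in a few lines of Python; the tool's own implementation will of course differ, and the example sentence is invented.

```python
# A bare-bones KWIC (keyword-in-context) view.
def kwic(text, keyword, window=3):
    """Print each occurrence of `keyword` with `window` words of context."""
    tokens = text.split()
    for i, token in enumerate(tokens):
        if token.lower() == keyword.lower():
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            print(f"{left:>30} | {token} | {right}")

kwic("The virus spread quickly and the virus mutated again", "virus")
```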

Met-Hodos: (Re)considering the road of research and analysis

Introduction: This blog, curated by Andreas W. Müller from Halle University, provides insight into qualitative data analysis (QDA) techniques for conducting research in the field of Digital Humanities. The field is currently dominated by quantitative research methods and still lacks digital analysis derived from qualitative approaches. The author argues that QDA is not a method but a set of techniques that can be used with different analysis methods, for instance Content Analysis or Discourse Analysis. He also outlines how QDA combines qualitative data with qualitative analysis, both elements being fundamental.
[Click ‘Read more’ for the full post!]

RAWGraphs: A Visualization Platform to Create Open Outputs

The paper illustrates the features of an innovative tool in the field of data visualization: the framework RAWGraphs, available in open access at https://rawgraphs.io/. The framework connects data coming from various applications (from Microsoft Excel to Google Spreadsheets) to visualizations in several layouts.

As detailed in the video guide available in the ‘Learning’ section (https://rawgraphs.io/learning), it is possible to load one's own data through a simple copy-and-paste, and then select a chart-based layout among those provided: contour plot, beeswarm plot, hexagonal binning, scatterplot, treemap, bump chart, Gantt chart, multiple pie charts, alluvial diagram and bar chart. The platform also allows data to be stacked or unstacked, converting between wide and narrow formats.
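
RAWGraphs performs this reshaping in the browser; for readers who prefer a script, the same wide-to-narrow step looks roughly like this in pandas. The table, its column names and its figures are invented for illustration.

```python
# Wide-to-narrow ("unstacking") reshaping with pandas, on a made-up table.
import pandas as pd

wide = pd.DataFrame({
    "museum": ["Uffizi", "Louvre"],
    "2019": [4400, 9600],   # visitors in thousands (made-up figures)
    "2020": [1200, 2700],
})

# melt() turns one column per year into one row per (museum, year) pair.
narrow = wide.melt(id_vars="museum", var_name="year", value_name="visitors")
print(narrow)
```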

RAWGraphs, ideal for those working in the field of design but not only for them, is kept open source thanks to an Indiegogo crowdfunding campaign (https://rawgraphs.io/blog).
[Click ‘Read more’ for the full post!]

Navegación de corpus a través de anotaciones lingüísticas automáticas obtenidas por Procesamiento del Lenguaje Natural: de anecdótico a ecdótico

Introduction: In this article, Spanish scholars Pablo Ruiz Fabo and Helena Bermúdez Sabel present two case studies on the application of Natural Language Processing (NLP) technologies, entity linking, and Computational Linguistics methods to create corpus navigation interfaces. The authors also focus on how these technologies for automatic text analysis allow us to enrich scholarly digital editions, and they include interesting points of view on analogue and digital editions and their relation to ecdotic practice.
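
As a rough stand-in for the kind of automatic annotation such interfaces build on, the sketch below tags named entities in a Spanish sentence with spaCy. It assumes the es_core_news_sm model is installed; the authors' actual tooling differs, and the sentence is our own example.

```python
# Named entity recognition as a building block for entity linking.
# Assumes: python -m spacy download es_core_news_sm
import spacy

nlp = spacy.load("es_core_news_sm")
doc = nlp("Benito Pérez Galdós publicó Fortunata y Jacinta en Madrid.")

# Each entity span could then be linked to a knowledge base entry.
for ent in doc.ents:
    print(ent.text, ent.label_)
```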

Diseño de corpus literario para análisis cuantitativos

Introduction: In this article, José Calvo Tello offers a methodological guide to data curation for creating literary corpora for quantitative analysis. This brief tutorial covers all stages of the curation and creation process and guides the reader through practical cases from Hispanic literature. The author deals with every step in the creation of a literary corpus for quantitative analysis: from digitization, metadata, and automatic processes for cleaning and mining the texts, to licenses, publishing and archiving/long-term preservation.
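
As one illustration of the “automatic processes for cleaning” stage, the following sketch strips residual HTML markup and normalises whitespace across a batch of digitized texts. The folder names are hypothetical, and real pipelines typically add further normalisation steps.

```python
# Batch-clean digitized texts before quantitative analysis.
import re
from pathlib import Path

def clean(raw):
    text = re.sub(r"<[^>]+>", " ", raw)   # drop leftover HTML tags
    text = re.sub(r"\s+", " ", text)      # collapse runs of whitespace
    return text.strip()

out_dir = Path("corpus_clean")            # hypothetical output folder
out_dir.mkdir(exist_ok=True)

for path in Path("corpus_raw").glob("*.txt"):   # hypothetical input folder
    cleaned = clean(path.read_text(encoding="utf-8"))
    (out_dir / path.name).write_text(cleaned, encoding="utf-8")
```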