The Language Interpretability Tool: Extensible, Interactive Visualizations and Analysis for NLP Models

OpenMethods post (2021-04-29): https://openmethods.dariah.eu/2021/04/29/2008-05122-the-language-interpretability-tool-extensible-interactive-visualizations-and-analysis-for-nlp-models/

Introduction by OpenMethods Editor (Erzsébet Tóth-Czifra): NLP models and the tasks they perform are becoming an integral part of our daily realities, both everyday and in research. A central concern of NLP research is that for many of their users these models still largely operate as black boxes, offering little insight into why a model makes certain predictions, how its usage is skewed towards certain content types, or what social and cultural biases underlie it. The open-source Language Interpretability Tool aims to change this for the better and brings transparency to the visualization and understanding of NLP models. The pre-print describing the tool comes with rich documentation and description of the tool (including case studies of different kinds) and gives us an honest SWOT analysis of it.

An ideal workflow would be seamless and interactive: users should see the data, what the model does with it, and why, so they can quickly test hypotheses and build understanding. With this in mind, we introduce the Language Interpretability Tool (LIT), a toolkit and browser-based user interface (UI) for NLP model understanding. LIT supports local explanations—including salience maps, attention, and rich visualizations of model predictions—as well as aggregate analysis—including metrics, embedding spaces, and flexible slicing—and allows users to seamlessly hop between them to test local hypotheses and validate them over a dataset. LIT provides first-class support for counterfactual generation: new data points can be added on the fly, and their effect on the model visualized immediately. Side-by-side comparison allows for two models, or two datapoints, to be visualized simultaneously.
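To make the workflow described above concrete, here is a minimal sketch of how a model and dataset can be wired into LIT's browser UI, following the wrapper pattern in the LIT documentation. The class names `SSTData` and `SentimentModel` and the stand-in classifier are illustrative assumptions, not part of the library; only the `lit_nlp` imports and interfaces are from the real package.

```python
# Minimal sketch: serving a sentiment classifier in LIT.
# `SSTData`, `SentimentModel`, and the dummy classifier are
# hypothetical examples; the lit_nlp API calls are real.
from lit_nlp import dev_server
from lit_nlp import server_flags
from lit_nlp.api import dataset as lit_dataset
from lit_nlp.api import model as lit_model
from lit_nlp.api import types as lit_types

LABELS = ["0", "1"]  # negative / positive


class SSTData(lit_dataset.Dataset):
  """Wraps a few labeled sentences so LIT can display and slice them."""

  def __init__(self):
    self._examples = [
        {"sentence": "A thoroughly engaging film.", "label": "1"},
        {"sentence": "Dull and far too long.", "label": "0"},
    ]

  def spec(self):
    # Declares the fields LIT should expect in each example.
    return {
        "sentence": lit_types.TextSegment(),
        "label": lit_types.CategoryLabel(vocab=LABELS),
    }


class SentimentModel(lit_model.Model):
  """Wraps an arbitrary classifier behind LIT's Model interface."""

  def __init__(self, classifier):
    # Any callable mapping a list of strings to per-class probabilities.
    self._classifier = classifier

  def input_spec(self):
    return {"sentence": lit_types.TextSegment()}

  def output_spec(self):
    # MulticlassPreds links predictions to the gold "label" field,
    # which lets LIT compute metrics and confusion matrices.
    return {"probas": lit_types.MulticlassPreds(parent="label", vocab=LABELS)}

  def predict_minibatch(self, inputs):
    texts = [ex["sentence"] for ex in inputs]
    for probs in self._classifier(texts):
      yield {"probas": probs}


def main():
  # Stand-in classifier; replace with a real model's predict function.
  dummy = lambda texts: [[0.5, 0.5] for _ in texts]
  models = {"sentiment": SentimentModel(dummy)}
  datasets = {"sst_sample": SSTData()}
  # Serves the browser UI. Counterfactuals added in the UI are routed
  # back through predict_minibatch, so their effect appears immediately.
  lit_demo = dev_server.Server(models, datasets, **server_flags.get_flags())
  lit_demo.serve()


if __name__ == "__main__":
  main()
```

Running the script starts a local web server hosting the LIT UI, where the salience, metrics, slicing, and counterfactual workflows quoted above can be explored interactively.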

Original source: https://arxiv.org/abs/2008.05122v1

Original date of publication: 12 August 2020

Internet archive link: https://web.archive.org/web/20210122104622/https://arxiv.org/abs/2008.05122v1