Global reconstruction of language models with linguistic rules – Explainable AI for online consumer reviews

Bibliographic Details
Published in: Electronic Markets, 2022-12, Vol. 32 (4), p. 2123-2138
Main Authors: Binder, Markus; Heinrich, Bernd; Hopf, Marcus; Schiller, Alexander
Format: Article
Language:English
Description
Summary: Analyzing textual data by means of AI models has been recognized as highly relevant in information systems research and practice, since a vast amount of data on eCommerce platforms, review portals, or social media is given in textual form. Here, language models such as BERT, which are deep learning AI models, constitute a breakthrough and achieve leading-edge results in many text analytics applications such as sentiment analysis of online consumer reviews. However, these language models are “black boxes”: it is unclear how they arrive at their predictions. Yet, applications of language models, for instance in eCommerce, require checks and justifications by means of global reconstruction of their predictions, since the decisions based thereon can have large impacts or are even mandated by regulations such as the GDPR. To this end, we propose a novel XAI approach for global reconstruction of language model predictions for token-level classifications (e.g., aspect term detection) by means of linguistic rules based on NLP building blocks (e.g., part-of-speech). The approach is analyzed on different datasets of online consumer reviews and NLP tasks. Since our approach allows for different setups, we are further the first to analyze the trade-off between comprehensibility and fidelity of global reconstructions of language model predictions. With respect to this trade-off, we find that our approach indeed allows for balanced setups for global reconstructions of BERT’s predictions. Thus, our approach paves the way for a thorough understanding of language model predictions in text analytics. In practice, our approach can assist businesses in their decision-making and supports compliance with regulatory requirements.
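The core idea of the abstract can be illustrated with a minimal sketch (not the authors' implementation): a linguistic rule over part-of-speech tags reconstructs a model's token-level predictions, and fidelity measures how often the rule agrees with the model. The tagged sentence, model predictions, and the NOUN-based rule below are all hypothetical, chosen only for illustration.

```python
# Hedged sketch of rule-based global reconstruction for aspect term detection.
# All data here is illustrative; the actual approach in the article learns
# rules from NLP building blocks across whole datasets.

# A POS-tagged review sentence (tags assumed for illustration).
tokens = [("The", "DET"), ("battery", "NOUN"), ("life", "NOUN"),
          ("is", "VERB"), ("great", "ADJ"), (".", "PUNCT")]

# Hypothetical token-level predictions of a language model
# (1 = aspect term, 0 = not an aspect term).
model_preds = [0, 1, 1, 0, 0, 0]

def rule_predict(tagged):
    """Illustrative linguistic rule: a token is an aspect term iff it is a NOUN."""
    return [1 if tag == "NOUN" else 0 for _, tag in tagged]

rule_preds = rule_predict(tokens)

# Fidelity: fraction of tokens where the rule reproduces the model's prediction.
fidelity = sum(r == m for r, m in zip(rule_preds, model_preds)) / len(tokens)
print(rule_preds)  # [0, 1, 1, 0, 0, 0]
print(fidelity)    # 1.0
```

A richer rule (e.g., adding conditions on neighboring tokens' tags) would typically raise fidelity at the cost of comprehensibility, which is the trade-off the article analyzes.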
ISSN: 1019-6781; 1422-8890
DOI: 10.1007/s12525-022-00612-5