Stop ordering machine learning algorithms by their explainability! A user-centered investigation of performance and explainability
Published in: International Journal of Information Management, 2023-04, Vol. 69, p. 102538, Article 102538
Format: Article
Language: English
Summary: Machine learning algorithms enable advanced decision making in contemporary intelligent systems. Research indicates that there is a tradeoff between model performance and explainability: machine learning models with higher performance are often based on more complex algorithms and therefore lack explainability, and vice versa. However, there is little to no empirical evidence for this tradeoff from an end-user perspective. We aim to provide such evidence by conducting two user experiments. Using two distinct datasets, we first measure the tradeoff for five common classes of machine learning algorithms. Second, we examine end-user perceptions of explainable artificial intelligence (XAI) augmentations aimed at increasing understanding of the decision logic of high-performing complex models. Our results diverge from the widespread assumption of a tradeoff curve and indicate that, in the end user's perception, the tradeoff between model performance and explainability is much less gradual. This stands in stark contrast to assumed inherent model interpretability. Further, we found the tradeoff to be situational, depending for example on data complexity. Results of our second experiment show that while XAI augmentations can be used to increase explainability, the type of explanation plays an essential role in end-user perception.
Highlights:
- Theoretical algorithm interpretability does not entail perceived explainability.
- The tradeoff can be characterized by a group structure rather than a curve.
- Tree-based machine learning algorithms achieve the best explainability results.
- While performance distance increases for complex datasets, explainability distance decreases.
- Local XAI augmentations requiring low cognitive effort fare better with end users.
ISSN: 0268-4012, 1873-4707
DOI: 10.1016/j.ijinfomgt.2022.102538