The Need for the Human-Centred Explanation for ML-based Clinical Decision Support Systems

Bibliographic Details
Main Authors: Jia, Yan; McDermid, John; Hughes, Nathan; Sujan, Mark; Lawton, Tom; Habli, Ibrahim
Format: Conference Proceeding
Language: English
Summary: Machine learning has shown great promise in a variety of applications, but the deployment of these systems is hindered by the "opaque" nature of machine learning algorithms. This has led to the development of explainable AI methods, which aim to provide insights into complex algorithms through explanations that are comprehensible to humans. However, many of the explanations currently available are technically focused and reflect what machine learning researchers believe constitutes a good explanation, rather than what users actually want. This paper highlights the need to develop human-centred explanations for machine learning-based clinical decision support systems, since the users of these systems are clinicians, who typically have limited knowledge of machine learning techniques. The authors define the requirements for human-centred explanations, then briefly discuss the current state of available explainable AI methods, and finally analyse the gaps between human-centred explanations and current explainable AI methods. A clinical use case is presented to demonstrate the vision for human-centred explanations.
ISSN: 2575-2634
DOI: 10.1109/ICHI57859.2023.00064