CSSE - An agnostic method of counterfactual, selected, and social explanations for classification models
Published in: | Expert Systems with Applications, 2023-10, Vol. 228, p. 120373, Article 120373 |
Main Authors: | , , |
Format: | Article |
Language: | English |
Subjects: | |
Summary: | In some contexts, achieving high predictive capability may be sufficient for a machine learning model. However, in many scenarios, it is necessary to understand the model's decisions to increase confidence in the predictions and to direct the actions taken based on them. It is therefore essential to provide interpretable models. Several authors have pointed out the need to improve current interpretability methods so that they provide adequate explanations, especially for non-specialists in machine learning. The solution is to expand studies beyond computational issues to better understand how people receive explanations. Based on the literature, we identified three aspects that explanations should satisfy: they should be contrastive, selected, and social. The counterfactual approach, contrastive in nature, informs the user how the model's decision can be altered through minimal changes to the input features. Given this, we introduce the Agnostic Method of Counterfactual, Selected, and Social Explanations (CSSE), which generates local explanations for classification models using a genetic algorithm. As contributions, we highlight that CSSE offers counterfactual explanations for learning models, presents explanations with diversity and without prolixity, and allows the user to restrict the features that appear in the explanation (actionability), in addition to other parameterization options through which users can communicate their preferences. A particular novelty of our work is that users can adjust the relative importance they give to sparsity (the minimum number of changes) or similarity (minimizing the distance). Furthermore, we indicate other possibilities for the actionability functionality, inherently used to lock immutable features, allowing users to block features according to their interests or expertise. These resources can help users obtain explanations more targeted to their objectives and advance interpretability further, considering both computational and social aspects in generating explanations. The experiments showed that CSSE presents relevant results compared to some existing approaches. The work also includes a case study, in which we applied CSSE to predict the academic performance of children and adolescents with ADHD. Thus, the proposed method advances interpretability by offering explanations aimed at the end user, which can generate greater acceptance, confidence, and understanding of the models' decisions. |
ISSN: | 0957-4174; 1873-6793 |
DOI: | 10.1016/j.eswa.2023.120373 |
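
The abstract above describes a genetic-algorithm search for counterfactuals with a user-weighted trade-off between sparsity and similarity and an actionability mask for immutable features. The sketch below is not the authors' CSSE implementation, which is not reproduced in this record; it is a minimal illustration of that general recipe, where the stand-in `predict` function, the `alpha` weight, the `mutable` mask, and all helper names are hypothetical choices for this example.

```python
# A minimal sketch of a GA-based counterfactual search, NOT the authors'
# CSSE code: it only illustrates the ingredients named in the abstract
# (validity, a user-set sparsity/similarity trade-off, and an
# actionability mask that freezes immutable features).
import numpy as np

rng = np.random.default_rng(0)

def predict(X):
    # Stand-in black-box classifier: class 1 when a fixed linear score > 0.
    return (X @ np.array([1.0, -2.0, 0.5, 1.5]) > 0).astype(int)

def fitness(pop, x, target, alpha):
    valid = (predict(pop) == target).astype(float)      # does the class flip?
    dist = np.abs(pop - x).mean(axis=1)                 # similarity term
    sparsity = (np.abs(pop - x) > 1e-6).mean(axis=1)    # share of changed features
    # alpha is the user-set weight between sparsity and similarity.
    return valid - (alpha * sparsity + (1.0 - alpha) * dist)

def counterfactual(x, target, alpha=0.5, mutable=None, pop_size=80, gens=100):
    d = x.size
    mutable = np.ones(d, bool) if mutable is None else mutable
    pop = x + rng.normal(0.0, 0.5, (pop_size, d)) * mutable  # frozen features stay at x
    for _ in range(gens):
        f = fitness(pop, x, target, alpha)
        parents = pop[np.argsort(f)[-pop_size // 2:]]        # truncation selection
        mask = rng.integers(0, 2, parents.shape).astype(bool)
        kids = np.where(mask, parents, parents[::-1])        # uniform crossover
        kids += rng.normal(0.0, 0.1, kids.shape) * mutable   # mutation respects the mask
        pop = np.vstack([parents, kids])
    best = pop[np.argmax(fitness(pop, x, target, alpha))]
    return best if predict(best[None])[0] == target else None

x = np.array([0.2, 0.4, -0.1, 0.3])           # factual instance (class 0 here)
locked = np.array([True, True, False, True])  # actionability: lock feature 2
print(counterfactual(x, target=1, alpha=0.7, mutable=locked))
```

A full implementation in the spirit of the abstract would additionally return several distinct high-fitness survivors rather than a single best individual, to provide the diversity and non-prolixity properties the method claims.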