
An Explainable and Actionable Mistrust Scoring Framework for Model Monitoring

Bibliographic Details
Published in: IEEE Transactions on Artificial Intelligence, 2024-04, Vol. 5 (4), p. 1473-1485
Main Authors: Bhaskhar, Nandita; Rubin, Daniel L.; Lee-Messer, Christopher
Format: Article
Language: English
Description
Summary: Continuous monitoring of trained ML models to determine when their predictions should and should not be trusted is essential for their safe deployment. Such a framework ought to be high-performing, explainable, post hoc, and actionable. We propose TRUST-LAPSE, a "mistrust" scoring framework for continuous model monitoring. We assess the trustworthiness of each input sample's model prediction using a sequence of latent-space embeddings. Specifically, 1) our latent-space mistrust score estimates mistrust using distance metrics (Mahalanobis distance) and similarity metrics (cosine similarity) in the latent space, and 2) our sequential mistrust score determines deviations in correlations over the sequence of past input representations via a nonparametric, sliding-window-based algorithm for actionable continuous monitoring. We evaluate TRUST-LAPSE on two downstream tasks: 1) distributionally shifted input detection and 2) data drift detection. We evaluate across diverse domains (audio and vision) using public datasets and further benchmark our approach on challenging, real-world electroencephalogram (EEG) datasets for seizure detection. Our latent-space mistrust scores achieve state-of-the-art results, with AUROCs of 84.1 (vision), 73.9 (audio), and 77.1 (clinical EEGs), outperforming baselines by over 10 points. We expose critical failures in popular baselines that remain insensitive to input semantic content, rendering them unfit for real-world model monitoring. We show that our sequential mistrust scores achieve high drift detection rates; over 90% of the streams show
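
The summary describes two scoring components. Below is a minimal, illustrative Python sketch of how scores of this kind could be computed over pre-computed latent embeddings. It is not the authors' TRUST-LAPSE implementation: the function names, the additive combination of Mahalanobis distance with cosine dissimilarity, and the windowed correlation heuristic are hypothetical stand-ins for the paper's exact formulation.

# Illustrative sketch only -- not the authors' TRUST-LAPSE code.
# Assumes latent embeddings are already extracted; all names are hypothetical.
import numpy as np

def fit_latent_stats(train_embeddings, train_labels):
    """Per-class means and a shared inverse covariance from training embeddings."""
    classes = np.unique(train_labels)
    means = {c: train_embeddings[train_labels == c].mean(axis=0) for c in classes}
    centered = np.vstack([train_embeddings[train_labels == c] - means[c] for c in classes])
    cov = np.cov(centered, rowvar=False) + 1e-6 * np.eye(train_embeddings.shape[1])
    return means, np.linalg.inv(cov)

def latent_mistrust(z, means, cov_inv):
    """Latent-space mistrust for one embedding z: Mahalanobis distance to the
    closest class mean plus (one minus) cosine similarity to that mean."""
    scores = []
    for mu in means.values():
        d = z - mu
        maha = float(np.sqrt(d @ cov_inv @ d))
        cos = float(z @ mu / (np.linalg.norm(z) * np.linalg.norm(mu) + 1e-12))
        scores.append(maha + (1.0 - cos))
    return min(scores)  # higher value => less trust in the prediction

def sequential_mistrust(embedding_stream, window=50):
    """Nonparametric sliding-window check: score how much the current window's
    mean embedding decorrelates from the previous window's."""
    flags = []
    for t in range(2 * window, len(embedding_stream)):
        prev = embedding_stream[t - 2 * window: t - window].mean(axis=0)
        curr = embedding_stream[t - window: t].mean(axis=0)
        corr = np.corrcoef(prev, curr)[0, 1]
        flags.append(1.0 - corr)  # higher => larger deviation from recent history
    return np.array(flags)

In a monitoring loop, a per-sample score like latent_mistrust would flag individual untrustworthy predictions, while a stream-level score like sequential_mistrust would flag drift over the sequence of recent inputs.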
ISSN: 2691-4581
DOI: 10.1109/TAI.2023.3272876