
From Clustering to Cluster Explanations via Neural Networks


Bibliographic Details
Published in: arXiv.org 2021-12
Main Authors: Kauffmann, Jacob; Esders, Malte; Ruff, Lukas; Montavon, Grégoire; Samek, Wojciech; Müller, Klaus-Robert
Format: Article
Language: English
Summary: A recent trend in machine learning has been to enrich learned models with the ability to explain their own predictions. The emerging field of Explainable AI (XAI) has so far mainly focused on supervised learning, in particular deep neural network classifiers. In many practical problems, however, label information is not given and the goal is instead to discover the underlying structure of the data, for example its clusters. While powerful methods exist for extracting the cluster structure in data, they typically do not answer the question of why a certain data point has been assigned to a given cluster. We propose a new framework that can, for the first time, explain cluster assignments in terms of input features in an efficient and reliable manner. It is based on the novel insight that clustering models can be rewritten as neural networks, or 'neuralized'. Cluster predictions of the obtained networks can then be quickly and accurately attributed to the input features. Several showcases demonstrate the ability of our method to assess the quality of learned clusters and to extract novel insights from the analyzed data and representations.
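
For intuition on the 'neuralization' idea mentioned in the summary, the sketch below shows one way a standard k-means assignment could be rewritten as a two-layer network (linear units followed by min-pooling). The construction and the helper names (neuralize_kmeans, cluster_evidence) are illustrative assumptions, not code or notation taken from the paper itself.

```python
import numpy as np

# Hedged sketch: one way a k-means model could be "neuralized". For cluster c
# with centroids mu_1..mu_K, the difference of squared distances
#   ||x - mu_k||^2 - ||x - mu_c||^2 = 2*(mu_c - mu_k)^T x + (||mu_k||^2 - ||mu_c||^2)
# is an affine (linear) unit in x, so the assignment score
#   f_c(x) = min_{k != c} ( ||x - mu_k||^2 - ||x - mu_c||^2 )
# is a linear layer followed by min-pooling, and f_c(x) > 0 exactly when
# x is closest to centroid c.

def neuralize_kmeans(centroids):
    """Build per-cluster linear layers (W, b) from k-means centroids."""
    K, _ = centroids.shape
    layers = []
    for c in range(K):
        others = [k for k in range(K) if k != c]
        W = 2.0 * (centroids[c] - centroids[others])   # shape (K-1, d)
        b = np.sum(centroids[others] ** 2, axis=1) - np.sum(centroids[c] ** 2)
        layers.append((W, b))
    return layers

def cluster_evidence(x, layers):
    """Forward pass of the neuralized model: linear units + min-pooling."""
    return np.array([np.min(W @ x + b) for W, b in layers])

# Usage: the argmax of the evidence scores reproduces the ordinary
# nearest-centroid assignment, but now as the output of a network whose
# prediction can be attributed back to the input features.
mu = np.array([[0.0, 0.0], [4.0, 0.0]])   # two centroids in 2-D
x = np.array([1.0, 0.5])
scores = cluster_evidence(x, neuralize_kmeans(mu))
assert scores.argmax() == np.argmin(np.sum((mu - x) ** 2, axis=1))
```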
ISSN: 2331-8422
DOI: 10.48550/arxiv.1906.07633