Cloud-VAE: Variational autoencoder with concepts embedded

Bibliographic Details
Published in: Pattern Recognition 2023-08, Vol. 140, p. 109530, Article 109530
Main Authors: Liu, Yue; Liu, Zitu; Li, Shuang; Yu, Zhenyao; Guo, Yike; Liu, Qun; Wang, Guoyin
Format: Article
Language: English
Description
Summary:
•The initial concepts in the latent space are described by a prior distribution obtained with the proposed cloud model-based clustering algorithm.
•The variational lower bound of Cloud-VAE is derived to guide the training process and reconstruct the concepts of the latent space, so that a mutual mapping between the latent space and the concept space is established.
•A reparameterization trick based on the forward cloud transformation algorithm is designed to constrain the representation range of the latent space by increasing the randomness of the latent variables (see the sketch after this summary).
•Experimental results on six benchmark datasets show that Cloud-VAE has good clustering and reconstruction performance; compared with the deep clustering methods VaDE and GMVAE, it improves NMI by 22.9% and 19.9%, respectively.
•Cloud-VAE can explicitly explain the aggregation process of the model, and additional interpretable latent representations are found beyond the existing ones.

Variational Autoencoder (VAE) has been widely and successfully used to learn coherent latent representations of data. However, the lack of interpretability of the latent space constructed by the VAE under its prior distribution remains an open problem. This paper proposes a VAE with understandable concept embedding, named Cloud-VAE, which constructs an interpretable latent space by disentangling the latent variables and modeling their uncertainty with the cloud model. Firstly, a cloud model-based clustering algorithm casts the initial constraint on the latent space as a prior distribution over concepts, which is embedded into the latent space of the VAE to disentangle the latent variables. Secondly, a reparameterization trick based on the forward cloud transformation algorithm is designed to estimate the latent-space concepts by increasing the randomness of the latent variables. Furthermore, the variational lower bound of Cloud-VAE is derived to guide the training process and construct the concepts of the latent space, realizing the mutual mapping between the latent space and the concept space. Finally, experimental results on six benchmark datasets show that Cloud-VAE achieves good clustering and reconstruction performance, explicitly explains the aggregation process of the model, and discovers more interpretable disentangled representations.
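The reparameterization trick based on the forward cloud transformation mentioned in the summary can be illustrated with a short sketch. The following PyTorch snippet is a minimal illustration, not the authors' implementation: it assumes the classical normal cloud model with numerical characteristics Ex (expectation), En (entropy) and He (hyper-entropy), and the helper name forward_cloud_reparameterize is hypothetical. It reparameterizes the two-stage cloud sampling (En' ~ N(En, He^2), then z ~ N(Ex, En'^2)) with two standard normal noises so that gradients can flow back to the cloud parameters; the exact formulation used in Cloud-VAE may differ.

import torch

def forward_cloud_reparameterize(ex: torch.Tensor,
                                 en: torch.Tensor,
                                 he: torch.Tensor) -> torch.Tensor:
    # Classical forward cloud transformation: En' ~ N(En, He^2), z ~ N(Ex, En'^2).
    # Reparameterized form so gradients reach Ex, En and He:
    #     z = Ex + (En + He * eps1) * eps2,  with eps1, eps2 ~ N(0, I)
    eps1 = torch.randn_like(en)   # noise for the second-order (hyper-entropy) uncertainty
    eps2 = torch.randn_like(ex)   # noise for the cloud drop itself
    en_prime = en + he * eps1     # perturbed entropy En'
    return ex + en_prime * eps2   # latent sample ("cloud drop") z

# Toy usage: one concept with a 10-dimensional latent representation.
ex = torch.zeros(10, requires_grad=True)
en = torch.ones(10, requires_grad=True)
he = torch.full((10,), 0.1, requires_grad=True)
z = forward_cloud_reparameterize(ex, en, he)
z.sum().backward()  # gradients are available in ex.grad, en.grad, he.grad

Increasing He adds randomness to the latent variables, which is how the summary describes the constraint on the representation range of the latent space.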
ISSN: 0031-3203, 1873-5142
DOI: 10.1016/j.patcog.2023.109530