Maximum likelihood discriminant feature spaces

Bibliographic Details
Main Authors: Saon, G., Padmanabhan, M., Gopinath, R., Chen, S.
Format: Conference Proceeding
Language: English
Description
Summary: Linear discriminant analysis (LDA) is known to be inappropriate for the case of classes with unequal sample covariances. There has been interest in generalizing LDA to heteroscedastic discriminant analysis (HDA) by removing the equal within-class covariance constraint. This paper presents a new approach to HDA by defining an objective function which maximizes the class discrimination in the projected subspace while ignoring the rejected dimensions. Moreover, we investigate the link between discrimination and the likelihood of the projected samples, and show that HDA can be viewed as a constrained maximum likelihood (ML) projection for a full-covariance Gaussian model, the constraint being given by the maximization of the projected between-class scatter volume. It is shown that, under diagonal-covariance Gaussian modeling constraints, applying a diagonalizing linear transformation (MLLT) to the HDA space results in increased classification accuracy, even though HDA alone actually degrades recognition performance. Experiments performed on the Switchboard and Voicemail databases show a 10%-13% relative improvement in word error rate over standard cepstral processing.
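As a hedged illustration of the kind of objective the summary describes (this sketch is not part of the record, and the notation is assumed rather than quoted from the paper): for J classes with sample counts N_j, within-class covariances \Sigma_j, between-class scatter B, and a p x n projection matrix \theta, an HDA-style objective can be written as

    H(\theta) = \sum_{j=1}^{J} N_j \, \log \frac{\lvert \theta B \theta^{\top} \rvert}{\lvert \theta \Sigma_j \theta^{\top} \rvert}

Maximizing H(\theta) rewards large projected between-class scatter relative to each class's own projected covariance, dropping the equal within-class covariance assumption that standard LDA makes; the rejected (n - p) dimensions do not appear in the objective, consistent with the summary's "ignoring the rejected dimensions."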
ISSN: 1520-6149, 2379-190X
DOI: 10.1109/ICASSP.2000.859163