
Maximum likelihood discriminant feature spaces

Linear discriminant analysis (LDA) is known to be inappropriate for the case of classes with unequal sample covariances. There has been an interest in generalizing LDA to heteroscedastic discriminant analysis (HDA) by removing the equal within-class covariance constraint. This paper presents a new approach to HDA by defining an objective function which maximizes the class discrimination in the projected subspace while ignoring the rejected dimensions. Moreover, we investigate the link between discrimination and the likelihood of the projected samples and show that HDA can be viewed as a constrained ML projection for a full covariance Gaussian model, the constraint being given by the maximization of the projected between-class scatter volume. It is shown that, under diagonal covariance Gaussian modeling constraints, applying a diagonalizing linear transformation (MLLT) to the HDA space results in increased classification accuracy even though HDA alone actually degrades the recognition performance. Experiments performed on the Switchboard and Voicemail databases show a 10%-13% relative improvement in the word error rate over standard cepstral processing.
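As background for the abstract's starting point, the following is a minimal sketch (not the paper's implementation) of classical LDA on synthetic data: it pools the within-class scatter across classes, which is exactly the equal-covariance assumption that HDA removes. The data, dimensions, and helper name are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two synthetic 3-D classes with unequal covariances (the heteroscedastic case)
X1 = rng.multivariate_normal([0, 0, 0], np.diag([1.0, 1.0, 5.0]), size=200)
X2 = rng.multivariate_normal([2, 0, 0], np.diag([5.0, 1.0, 1.0]), size=200)

def lda_projection(classes, p):
    """Top-p LDA directions: eigenvectors of Sw^{-1} Sb (illustrative helper)."""
    mu = np.mean(np.vstack(classes), axis=0)   # global mean
    d = classes[0].shape[1]
    Sw = np.zeros((d, d))                      # pooled within-class scatter
    Sb = np.zeros((d, d))                      # between-class scatter
    for X in classes:
        m = X.mean(axis=0)
        Sw += (X - m).T @ (X - m)
        Sb += len(X) * np.outer(m - mu, m - mu)
    # Pooling Sw across classes assumes equal class covariances --
    # the constraint HDA drops
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(evals.real)[::-1]
    return evecs.real[:, order[:p]]

W = lda_projection([X1, X2], p=1)   # project 3-D features to 1-D
print(W.shape)                      # (3, 1)
```

Under unequal covariances like these, the pooled-scatter direction need not be the most discriminative one, which motivates the heteroscedastic objective the paper proposes.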


Bibliographic Details
Main Authors: Saon, G., Padmanabhan, M., Gopinath, R., Chen, S.
Format: Conference Proceeding
Language:English
Subjects: Acoustic scattering; Cepstral analysis; Covariance matrix; Degradation; Error analysis; Linear discriminant analysis; Performance analysis; Speech recognition; Voice mail
container_title 2000 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings (Cat. No.00CH37100)
container_volume 2
container_start_page II1129
container_end_page II1132
creator Saon, G.
Padmanabhan, M.
Gopinath, R.
Chen, S.
description Linear discriminant analysis (LDA) is known to be inappropriate for the case of classes with unequal sample covariances. There has been an interest in generalizing LDA to heteroscedastic discriminant analysis (HDA) by removing the equal within-class covariance constraint. This paper presents a new approach to HDA by defining an objective function which maximizes the class discrimination in the projected subspace while ignoring the rejected dimensions. Moreover, we investigate the link between discrimination and the likelihood of the projected samples and show that HDA can be viewed as a constrained ML projection for a full covariance Gaussian model, the constraint being given by the maximization of the projected between-class scatter volume. It is shown that, under diagonal covariance Gaussian modeling constraints, applying a diagonalizing linear transformation (MLLT) to the HDA space results in increased classification accuracy even though HDA alone actually degrades the recognition performance. Experiments performed on the Switchboard and Voicemail databases show a 10%-13% relative improvement in the word error rate over standard cepstral processing.
doi_str_mv 10.1109/ICASSP.2000.859163
format conference_proceeding
identifier ISSN: 1520-6149; EISSN: 2379-190X; ISBN: 9780780362932; ISBN: 0780362934
ispartof 2000 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings (Cat. No.00CH37100), 2000, Vol.2, p.II1129-II1132 vol.2
issn 1520-6149
2379-190X
language eng
source IEEE Electronic Library (IEL) Conference Proceedings
subjects Acoustic scattering
Cepstral analysis
Covariance matrix
Degradation
Error analysis
Linear discriminant analysis
Performance analysis
Speech recognition
Voice mail
title Maximum likelihood discriminant feature spaces