
Unsupervised, smooth training of feed-forward neural networks for mismatch compensation


Bibliographic Details
Main Authors: Surendran, A.C., Chin-Hui Lee, Rahim, M.
Format: Conference Proceeding
Language: English
Online Access: Request full text
container_end_page 489
container_start_page 482
creator Surendran, A.C.
Chin-Hui Lee
Rahim, M.
description We present a maximum likelihood technique for training feedforward neural networks. The proposed technique is completely unsupervised, eliminating the need for target values for each input; thus stereo databases are no longer required for learning nonlinear distortions under adverse conditions in speech recognition applications. We show that this technique is guaranteed to converge smoothly to a local maximum, and that it provides a more meaningful metric for speech recognition than the traditional mean square error. We apply the technique to model compensation, reducing the mismatch between training and testing in speech recognition applications, and show that this data-driven technique can be used under a wide variety of conditions without prior knowledge of the mismatch.
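The abstract describes the technique only at a high level. As a rough, hedged illustration of the core idea — training a feedforward network with a likelihood objective instead of mean square error, with no per-input targets — the toy sketch below fits a small network so that its transformed features score well under a fixed Gaussian "clean" model. The single-Gaussian model, the network shape, the distortion, and all names here are illustrative assumptions, not the paper's actual algorithm (which works with the recognizer's acoustic models).

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed stand-in for a clean-speech density model: one diagonal Gaussian.
mu = np.zeros(2)
var = np.ones(2)

def log_likelihood(y):
    # Average diagonal-Gaussian log-likelihood of a batch of feature vectors.
    return np.mean(-0.5 * np.sum(np.log(2 * np.pi * var)
                                 + (y - mu) ** 2 / var, axis=1))

# "Distorted" observations: clean samples through an unknown shift and scale.
# No clean/noisy pairs (stereo data) are used anywhere below.
x = rng.normal(size=(500, 2)) * 2.0 + 3.0

# One-hidden-layer feedforward net: y = W2 @ tanh(W1 @ x + b1) + b2.
W1 = rng.normal(scale=0.1, size=(8, 2)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.1, size=(2, 8)); b2 = np.zeros(2)

lr = 0.01
for _ in range(300):
    h = np.tanh(x @ W1.T + b1)      # hidden activations
    y = h @ W2.T + b2               # compensated features
    # Gradient of the mean log-likelihood w.r.t. y — no target values needed,
    # only the fixed density model (this is the "unsupervised" part).
    gy = -(y - mu) / var / len(x)
    # Backpropagate and take a gradient-ascent step on the likelihood.
    gW2 = gy.T @ h; gb2 = gy.sum(0)
    gz = (gy @ W2) * (1 - h ** 2)
    gW1 = gz.T @ x; gb1 = gz.sum(0)
    W1 += lr * gW1; b1 += lr * gb1
    W2 += lr * gW2; b2 += lr * gb2
```

After training, the compensated features score markedly higher under the clean model than the raw distorted features do, which is the sense in which likelihood, rather than squared error against unavailable targets, drives the adaptation.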
doi_str_mv 10.1109/ASRU.1997.659127
format conference_proceeding
fulltext fulltext_linktorsrc
identifier ISBN: 0780336984; ISBN: 9780780336988
ispartof 1997 IEEE Workshop on Automatic Speech Recognition and Understanding Proceedings, 1997, p.482-489
language eng
recordid cdi_ieee_primary_659127
source IEEE Electronic Library (IEL) Conference Proceedings
subjects Artificial neural networks
Convergence
Equations
Feedforward neural networks
Feedforward systems
Neural networks
Speech recognition
Testing
Training data
title Unsupervised, smooth training of feed-forward neural networks for mismatch compensation
url http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-14T11%3A47%3A08IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-ieee_6IE&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=proceeding&rft.atitle=Unsupervised,%20smooth%20training%20of%20feed-forward%20neural%20networks%20for%20mismatch%20compensation&rft.btitle=1997%20IEEE%20Workshop%20on%20Automatic%20Speech%20Recognition%20and%20Understanding%20Proceedings&rft.au=Surendran,%20A.C.&rft.date=1997&rft.spage=482&rft.epage=489&rft.pages=482-489&rft.isbn=0780336984&rft.isbn_list=9780780336988&rft_id=info:doi/10.1109/ASRU.1997.659127&rft_dat=%3Cieee_6IE%3E659127%3C/ieee_6IE%3E%3Cgrp_id%3Ecdi_FETCH-LOGICAL-i104t-43d080eb010e76990957d38391105ccd7e97ff347e8358d6c46ea223bac273623%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_id=info:pmid/&rft_ieee_id=659127&rfr_iscdi=true