
Knowledge Distillation for Improved Accuracy in Spoken Question Answering

Bibliographic Details
Main Authors: You, Chenyu; Chen, Nuo; Zou, Yuexian
Format: Conference Proceeding
Language: English
Subjects:
Online Access: Request full text
cited_by
cites
container_end_page 7797
container_issue
container_start_page 7793
container_title
container_volume
creator You, Chenyu; Chen, Nuo; Zou, Yuexian
description Spoken question answering (SQA) is a challenging task that requires a machine to fully understand complex spoken documents. Automatic speech recognition (ASR) plays a significant role in the development of QA systems. However, recent work shows that ASR systems generate highly noisy transcripts, which critically limit machine comprehension on the SQA task. To address this issue, we present a novel distillation framework. Specifically, we devise a training strategy to perform knowledge distillation (KD) from spoken documents and their written counterparts. Our work aims to distill rich knowledge from the language model into the student model, improving its performance by reducing the misalignment between automatic and manual transcripts. Experiments demonstrate that our approach outperforms several state-of-the-art language models on the Spoken-SQuAD dataset.
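
The description above outlines knowledge distillation from manual (written) transcripts into a student model that reads noisy ASR transcripts, but this record carries no implementation details. The following is a minimal, hypothetical sketch of a standard KD objective of that kind (a soft term between teacher and student answer-span distributions plus the usual hard-label span loss); the temperature, loss weighting, tensor shapes, and function names are illustrative assumptions, not taken from the paper.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, gold_labels,
                      temperature=2.0, alpha=0.5):
    # Hypothetical helper: combines the hard span loss with a soft
    # distillation term, in the spirit of Hinton et al. (2015).
    # Hard loss: cross-entropy against the annotated answer-span positions.
    hard = F.cross_entropy(student_logits, gold_labels)
    # Soft loss: KL divergence between temperature-scaled teacher and
    # student distributions, scaled by T^2 to keep gradients comparable.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    return alpha * hard + (1.0 - alpha) * soft

# Toy usage: start-position logits for a batch of 4 questions over a
# 128-token passage; the student would read the ASR transcript, the
# teacher the manual transcript.
student_start = torch.randn(4, 128, requires_grad=True)
teacher_start = torch.randn(4, 128)
gold_start = torch.randint(0, 128, (4,))
loss = distillation_loss(student_start, teacher_start, gold_start)
loss.backward()
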
doi_str_mv 10.1109/ICASSP39728.2021.9414999
format conference_proceeding
fulltext fulltext_linktorsrc
identifier EISSN: 2379-190X
ispartof ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2021, p.7793-7797
issn 2379-190X
language eng
recordid cdi_ieee_primary_9414999
source IEEE Xplore All Conference Series
subjects Conferences
Knowledge discovery
knowledge distillation
Manuals
Natural language processing
question answering
Signal processing
spoken question answering
Syntactics
Training
title Knowledge Distillation for Improved Accuracy in Spoken Question Answering