
Multi-modality Associative Bridging through Memory: Speech Sound Recollected from Face Video

In this paper, we introduce a novel audio-visual multi-modal bridging framework that can utilize both audio and visual information, even with uni-modal inputs. We exploit a memory network that stores source (i.e., visual) and target (i.e., audio) modal representations, where the source modal representation is what we are given and the target modal representations are what we want to obtain from the memory network. We then construct an associative bridge between the source and target memories that considers the inter-relationship between the two. By learning this inter-relationship through the associative bridge, the proposed framework can obtain the target modal representations inside the memory network even with source modal input only, providing rich information for downstream tasks. We apply the proposed framework to two tasks: lip reading and speech reconstruction from silent video. Through the proposed associative bridge and modality-specific memories, the knowledge of each task is enriched with the recalled audio context, achieving state-of-the-art performance. We also verify that the associative bridge properly relates the source and target memories.


Bibliographic Details
Main Authors: Kim, Minsu; Hong, Joanna; Park, Se Jin; Man Ro, Yong
Format: Conference Proceeding
Language: English
Subjects:
Online Access:Request full text
container_end_page 306
container_start_page 296
creator Kim, Minsu
Hong, Joanna
Park, Se Jin
Man Ro, Yong
description In this paper, we introduce a novel audio-visual multi-modal bridging framework that can utilize both audio and visual information, even with uni-modal inputs. We exploit a memory network that stores source (i.e., visual) and target (i.e., audio) modal representations, where the source modal representation is what we are given and the target modal representations are what we want to obtain from the memory network. We then construct an associative bridge between the source and target memories that considers the inter-relationship between the two. By learning this inter-relationship through the associative bridge, the proposed framework can obtain the target modal representations inside the memory network even with source modal input only, providing rich information for downstream tasks. We apply the proposed framework to two tasks: lip reading and speech reconstruction from silent video. Through the proposed associative bridge and modality-specific memories, the knowledge of each task is enriched with the recalled audio context, achieving state-of-the-art performance. We also verify that the associative bridge properly relates the source and target memories.
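The mechanism the abstract describes, addressing a source (visual) memory with a visual query and reusing the resulting address weights to read out a target (audio) memory, can be illustrated with a minimal sketch. This is not the authors' code: the class name, slot count, feature size, and dot-product addressing below are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's implementation) of associative
# bridging: a visual query addresses the source memory, and the SAME address
# weights read the target memory, recalling audio-like representations from
# video-only input.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BridgedMemory(nn.Module):
    def __init__(self, slots=112, dim=512):  # sizes are illustrative
        super().__init__()
        self.visual_mem = nn.Parameter(torch.randn(slots, dim))  # source memory
        self.audio_mem = nn.Parameter(torch.randn(slots, dim))   # target memory

    def forward(self, visual_feat):
        # Address the source memory with the visual feature (dot-product attention).
        addr = F.softmax(visual_feat @ self.visual_mem.t(), dim=-1)  # (B, slots)
        # Associative bridge: reuse the addressing weights on the audio memory.
        recalled_audio = addr @ self.audio_mem                       # (B, dim)
        reconstructed_visual = addr @ self.visual_mem                # (B, dim)
        return recalled_audio, reconstructed_visual

# Usage: during training, recalled_audio would be pulled toward paired real
# audio features; at inference, only the video stream is required.
mem = BridgedMemory()
video_feat = torch.randn(4, 512)
audio_like, _ = mem(video_feat)
print(audio_like.shape)  # torch.Size([4, 512])
```

The design point mirrored here is that both modality-specific memories are read with a single set of address weights, which is what lets video-only input retrieve audio-side representations at inference time.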
doi_str_mv 10.1109/ICCV48922.2021.00036
format conference_proceeding
fulltext fulltext_linktorsrc
identifier EISSN: 2380-7504; EISBN: 9781665428125, 1665428120; DOI: 10.1109/ICCV48922.2021.00036; CODEN: IEEPAD
ispartof 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 2021, p.296-306
issn 2380-7504
language eng
recordid cdi_ieee_primary_9710720
source IEEE Xplore All Conference Series
subjects Bridges
Computer vision
Faces
Lips
Task analysis
Vision + language
Vision + other modalities
Visualization
title Multi-modality Associative Bridging through Memory: Speech Sound Recollected from Face Video