
An Embodied Multi-Sensor Fusion Approach to Visual Motion Estimation Using Unsupervised Deep Networks

Bibliographic Details
Published in: Sensors (Basel, Switzerland), 2018-05, Vol. 18 (5), p. 1427
Main Authors: Shamwell, E Jared; Nothwang, William D; Perlis, Donald
Format: Article
Language: English
Subjects: deep learning; Deformation; Hypotheses; Motion simulation; Multisensor fusion; Odometers; optical flow; Pyramids; sensor fusion; Sensors; State estimation
Online Access:Get full text
DOI: 10.3390/s18051427
ISSN: 1424-8220
EISSN: 1424-8220
PMID: 29734687
Publisher: MDPI AG (Switzerland)
Source: Open Access: PubMed Central; Publicly Available Content Database

Description:
Aimed at improving size, weight, and power (SWaP)-constrained robotic vision-aided state estimation, we describe our unsupervised, deep convolutional-deconvolutional sensor fusion network, Multi-Hypothesis DeepEfference (MHDE). MHDE learns to intelligently combine noisy, heterogeneous sensor data to predict several probable hypotheses for the dense, pixel-level correspondence between a source image and an unseen target image. We show how our multi-hypothesis formulation provides increased robustness against dynamic, heteroscedastic sensor and motion noise, computing hypothesis image mappings and predictions at 76–357 Hz depending on the number of hypotheses being generated. MHDE fuses noisy, heterogeneous sensory inputs through two parallel, inter-connected architectural pathways and multiple (1–20 in this work) hypothesis-generating sub-pathways to produce global correspondence estimates between a source and a target image. We evaluated MHDE on the KITTI Odometry dataset, benchmarked it against the vision-only DeepMatching and Deformable Spatial Pyramids algorithms, and demonstrated a significant runtime decrease and a performance increase relative to the next-best performing method.
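
The record's description does not include the training objective, so the following is a minimal, hypothetical PyTorch sketch of the two mechanisms the abstract names: warping a source image toward an unseen target with a dense, pixel-level correspondence field, and keeping only the best of several hypothesis fields. The function names and the winner-take-all choice of loss are illustrative assumptions, not the authors' published implementation.

```python
# Sketch (not the authors' code) of (1) dense-correspondence warping and
# (2) scoring several hypothesis fields and keeping the winner. The
# winner-take-all loss is an assumption; the record gives no objective.
import torch
import torch.nn.functional as F

def warp_with_correspondence(source, flow):
    """Bilinearly sample `source` (B,C,H,W) at locations displaced by
    `flow` (B,2,H,W), yielding one hypothesis prediction of the target."""
    b, _, h, w = source.shape
    # Base pixel grid in the normalized [-1, 1] coordinates grid_sample expects.
    ys, xs = torch.meshgrid(
        torch.linspace(-1.0, 1.0, h, device=source.device),
        torch.linspace(-1.0, 1.0, w, device=source.device),
        indexing="ij",
    )
    base = torch.stack((xs, ys), dim=-1).expand(b, h, w, 2)
    # Convert pixel-unit flow into normalized offsets and displace the grid.
    offsets = torch.stack(
        (flow[:, 0] * 2.0 / max(w - 1, 1), flow[:, 1] * 2.0 / max(h - 1, 1)),
        dim=-1,
    )
    return F.grid_sample(source, base + offsets, align_corners=True)

def winner_take_all_loss(source, target, hypothesis_flows):
    """`hypothesis_flows`: list of (B,2,H,W) fields, one per sub-pathway
    (1-20 in the paper). Only the best hypothesis per sample is penalized."""
    errors = torch.stack(
        [(warp_with_correspondence(source, f) - target).abs().mean(dim=(1, 2, 3))
         for f in hypothesis_flows],
        dim=1,
    )                            # (B, n_hypotheses) photometric error each
    best, _ = errors.min(dim=1)  # keep the winning hypothesis per sample
    return best.mean()
```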
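
For the architecture itself, the description gives only the coarse shape: two parallel, inter-connected pathways fusing heterogeneous inputs, feeding 1-20 hypothesis-generating deconvolutional sub-pathways. The skeleton below is a hedged structural sketch under that reading; the class name, layer widths, ReLU activations, and the 6-DoF motion input are assumptions made for illustration.

```python
# Hedged structural sketch of the conv-deconv, two-pathway fusion network
# the abstract outlines. Only the overall shape (parallel pathways fused,
# then 1-20 deconv sub-pathways, each emitting a dense 2-channel
# correspondence field) follows the record; all specifics are assumed.
import torch
import torch.nn as nn

class MultiHypothesisFusionNet(nn.Module):
    def __init__(self, n_hypotheses: int = 8):
        super().__init__()
        # Pathway 1: convolutional encoder for the source image.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Pathway 2: encoder for a noisy heterogeneous sensor reading
        # (here an assumed 6-DoF motion estimate), broadcast spatially.
        self.motion_encoder = nn.Sequential(
            nn.Linear(6, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU(),
        )
        # Hypothesis-generating deconvolutional sub-pathways: each decodes
        # the fused code into one dense (dx, dy) correspondence field.
        self.hypothesis_decoders = nn.ModuleList([
            nn.Sequential(
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(64, 2, 4, stride=2, padding=1),
            )
            for _ in range(n_hypotheses)
        ])

    def forward(self, image, motion):
        feat = self.image_encoder(image)    # (B, 64, H/4, W/4)
        code = self.motion_encoder(motion)  # (B, 64)
        code = code[:, :, None, None].expand(-1, -1, *feat.shape[2:])
        fused = torch.cat((feat, code), dim=1)  # fuse the two pathways
        return [dec(fused) for dec in self.hypothesis_decoders]
```

Each field returned by forward has the (B, 2, H, W) shape the warping sketch above consumes, so under these assumptions the two pieces compose into an end-to-end unsupervised training loop.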