Vid2Param: Modelling of Dynamics Parameters from Video
Videos provide a rich source of information, but it is generally hard to extract dynamical parameters of interest. Inferring those parameters from a video stream would be beneficial for physical reasoning. Robots performing tasks in dynamic environments would benefit greatly from understanding the underlying environment motion, in order to make future predictions and to synthesize effective control policies that use this inductive bias.
Published in: | arXiv.org 2020-08 |
---|---|
Main Authors: | Asenov, Martin; Burke, Michael; Angelov, Daniel; Davchev, Todor; Subr, Kartic; Ramamoorthy, Subramanian |
Format: | Article |
Language: | English |
Subjects: | Computer simulation; Parameter estimation; Robotics; System identification |
Online Access: | Get full text |
container_title | arXiv.org |
---|---|
creator | Asenov, Martin ; Burke, Michael ; Angelov, Daniel ; Davchev, Todor ; Subr, Kartic ; Ramamoorthy, Subramanian |
description | Videos provide a rich source of information, but it is generally hard to extract dynamical parameters of interest. Inferring those parameters from a video stream would be beneficial for physical reasoning. Robots performing tasks in dynamic environments would benefit greatly from understanding the underlying environment motion, in order to make future predictions and to synthesize effective control policies that use this inductive bias. Online physical reasoning is therefore a fundamental requirement for robust autonomous agents. When the dynamics involves multiple modes (due to contacts or interactions between objects) and sensing must proceed directly from a rich sensory stream such as video, then traditional methods for system identification may not be well suited. We propose an approach wherein fast parameter estimation can be achieved directly from video. We integrate a physically based dynamics model with a recurrent variational autoencoder, by introducing an additional loss to enforce desired constraints. The model, which we call Vid2Param, can be trained entirely in simulation, in an end-to-end manner with domain randomization, to perform online system identification, and make probabilistic forward predictions of parameters of interest. This enables the resulting model to encode parameters such as position, velocity, restitution, air drag and other physical properties of the system. We illustrate the utility of this in physical experiments wherein a PR2 robot with a velocity constrained arm must intercept an unknown bouncing ball with partly occluded vision, by estimating the physical parameters of this ball directly from the video trace after the ball is released. |
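The abstract describes online system identification of a bouncing ball (position, velocity, restitution, air drag) from an observed trajectory. As a hedged illustration of that underlying problem — not the authors' recurrent-VAE method — the sketch below simulates a 1-D bouncing ball with linear drag and recovers the restitution coefficient by matching simulated trajectories to an observed one via a simple grid search; all function names and parameter values are illustrative assumptions.

```python
import numpy as np

def simulate_ball(restitution, drag, y0=1.0, v0=0.0, g=9.81,
                  dt=0.01, steps=300):
    """Simulate 1-D vertical ball motion with linear air drag and a
    restitution-scaled velocity flip at each floor contact."""
    y, v = y0, v0
    trajectory = []
    for _ in range(steps):
        # Euler integration; drag force opposes the current velocity.
        v += (-g - drag * v) * dt
        y += v * dt
        if y < 0.0:  # floor contact: clamp height, reflect and damp velocity
            y = 0.0
            v = -restitution * v
        trajectory.append(y)
    return np.array(trajectory)

def estimate_restitution(observed, drag, grid=None):
    """Recover restitution by minimizing the mean squared error between
    simulated and observed height trajectories over a parameter grid."""
    if grid is None:
        grid = np.linspace(0.5, 0.95, 46)  # step of 0.01
    errors = [np.mean((simulate_ball(e, drag) - observed) ** 2)
              for e in grid]
    return grid[int(np.argmin(errors))]
```

A grid search like this is a crude stand-in for the paper's learned probabilistic estimator, but it makes the multi-modal nature of the dynamics concrete: the contact events switch the system between free-fall and bounce modes, which is why the authors argue traditional system identification struggles on raw video.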
format | article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2020-08 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2258496506 |
source | Publicly Available Content Database |
subjects | Computer simulation ; Identification ; Identification methods ; Mathematical models ; Object recognition ; On-line systems ; Parameter estimation ; Parameter identification ; Physical properties ; Robotics ; Robots ; System identification |
title | Vid2Param: Modelling of Dynamics Parameters from Video |
url | http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-07T05%3A37%3A01IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Vid2Param:%20Modelling%20of%20Dynamics%20Parameters%20from%20Video&rft.jtitle=arXiv.org&rft.au=Asenov,%20Martin&rft.date=2020-08-28&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2258496506%3C/proquest%3E%3Cgrp_id%3Ecdi_FETCH-proquest_journals_22584965063%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_pqid=2258496506&rft_id=info:pmid/&rfr_iscdi=true |