Mutual Suppression Network for Video Prediction using Disentangled Features
Video prediction has been considered a difficult problem because videos contain not only high-dimensional spatial information but also complex temporal information. Video prediction can be performed by finding features in recent frames, and using them to generate approximations to upcoming frame...
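As a rough illustration of the pipeline summarized above (and detailed in the description field below), the following is a minimal, hedged sketch in PyTorch: separate encoders extract spatial (appearance) and motion features from recent frames, and a decoder combines them to approximate the upcoming frame. The module names (`SpatialEncoder`, `MotionEncoder`, `Decoder`), layer sizes, and the 64x64 input resolution are illustrative assumptions, not the architecture or code released by the authors.

```python
# Hedged sketch only: layer sizes, names, and 64x64 inputs are assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # stride-2 convolution halves the spatial resolution
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 4, stride=2, padding=1), nn.ReLU())

def deconv_block(in_ch, out_ch):
    # stride-2 transposed convolution doubles the spatial resolution
    return nn.Sequential(nn.ConvTranspose2d(in_ch, out_ch, 4, stride=2, padding=1), nn.ReLU())

class SpatialEncoder(nn.Module):
    """Encodes a single frame into appearance features (what the scene looks like)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(3, 32), conv_block(32, 64))

    def forward(self, frame):
        return self.net(frame)

class MotionEncoder(nn.Module):
    """Encodes a frame difference into motion features (how the scene is changing)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(3, 32), conv_block(32, 64))

    def forward(self, prev_frame, curr_frame):
        return self.net(curr_frame - prev_frame)

class Decoder(nn.Module):
    """Combines spatial and motion features and decodes an estimate of the next frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            deconv_block(128, 64), deconv_block(64, 32),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, spatial_feat, motion_feat):
        return self.net(torch.cat([spatial_feat, motion_feat], dim=1))

if __name__ == "__main__":
    prev, curr = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
    spatial = SpatialEncoder()(curr)          # (2, 64, 16, 16)
    motion = MotionEncoder()(prev, curr)      # (2, 64, 16, 16)
    next_frame = Decoder()(spatial, motion)   # (2, 3, 64, 64)
    print(next_frame.shape)
```

Running the `__main__` block only checks tensor shapes; in MSnet the two feature streams are additionally trained so that each suppresses the other's information, as described in the record below.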
Published in: | arXiv.org 2019-07 |
---|---|
Main Authors: | Lee, Jungbeom; Lee, Jangho; Lee, Sungmin; Yoon, Sungroh |
Format: | Article |
Language: | English |
Subjects: | Feature extraction; Optical flow (image analysis); Pixels; Representations; Source code |
Online Access: | Get full text |
container_title | arXiv.org |
---|---|
creator | Lee, Jungbeom; Lee, Jangho; Lee, Sungmin; Yoon, Sungroh |
description | Video prediction has been considered a difficult problem because videos contain not only high-dimensional spatial information but also complex temporal information. Video prediction can be performed by finding features in recent frames, and using them to generate approximations to upcoming frames. We approach this problem by disentangling spatial and temporal features in videos. We introduce a mutual suppression network (MSnet), trained in an adversarial manner, which produces spatial features that are free of motion information and motion features that carry no spatial information. MSnet then uses a motion-guided connection within an encoder-decoder-based architecture to transform spatial features from a previous frame to the time of an upcoming frame. We show how MSnet can be used for video prediction with disentangled representations. We also carry out experiments to assess how effectively our method disentangles features. MSnet obtains better results than other recent video prediction methods even though it has simpler encoders. (An illustrative sketch of the mutual-suppression objective follows the record fields below.) |
format | article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2019-07 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2072028705 |
source | Publicly Available Content Database |
subjects | Feature extraction; Optical flow (image analysis); Pixels; Representations; Source code |
title | Mutual Suppression Network for Video Prediction using Disentangled Features |
url | http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-07T07%3A39%3A58IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Mutual%20Suppression%20Network%20for%20Video%20Prediction%20using%20Disentangled%20Features&rft.jtitle=arXiv.org&rft.au=Lee,%20Jungbeom&rft.date=2019-07-14&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2072028705%3C/proquest%3E%3Cgrp_id%3Ecdi_FETCH-proquest_journals_20720287053%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_pqid=2072028705&rft_id=info:pmid/&rfr_iscdi=true |
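The description field above states that the two encoders are trained in an adversarial manner so that each feature stream is stripped of the other's information. The sketch below shows one plausible way to render that mutual-suppression signal in PyTorch: an auxiliary probe tries to recover motion (here, a plain frame difference) from the spatial features, and the spatial encoder is updated to make the probe fail. The probe architecture, the alternating update scheme, and the frame-difference motion target are assumptions for illustration, not the authors' procedure; the symmetric term that suppresses appearance information in the motion features would be analogous.

```python
# Hedged sketch of a mutual-suppression (adversarial) update; all names,
# architectures, and the frame-difference motion target are assumptions.
import torch
import torch.nn as nn

class MotionProbe(nn.Module):
    """Tries to predict the frame difference from spatial features alone."""
    def __init__(self, feat_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(feat_ch, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def forward(self, spatial_feat):
        return self.net(spatial_feat)

def suppression_step(spatial_encoder, probe, enc_opt, probe_opt, prev, curr):
    """One alternating update: train the probe to succeed, the encoder to make it fail."""
    mse = nn.MSELoss()
    motion_target = curr - prev

    # 1) Probe update: recover motion from (detached) spatial features.
    probe_loss = mse(probe(spatial_encoder(curr).detach()), motion_target)
    probe_opt.zero_grad()
    probe_loss.backward()
    probe_opt.step()

    # 2) Encoder update: increase the probe's error, i.e. suppress motion
    #    information in the spatial features.
    enc_loss = -mse(probe(spatial_encoder(curr)), motion_target)
    enc_opt.zero_grad()
    enc_loss.backward()
    enc_opt.step()
    return probe_loss.item(), enc_loss.item()

if __name__ == "__main__":
    # Tiny stand-in spatial encoder (illustrative only).
    spatial_encoder = nn.Sequential(
        nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
    )
    probe = MotionProbe(feat_ch=64)
    enc_opt = torch.optim.Adam(spatial_encoder.parameters(), lr=1e-4)
    probe_opt = torch.optim.Adam(probe.parameters(), lr=1e-4)
    prev, curr = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
    print(suppression_step(spatial_encoder, probe, enc_opt, probe_opt, prev, curr))
```

A practical variant would bound the encoder's adversarial term (for example with a gradient-reversal layer or a confusion loss) rather than maximizing an unbounded MSE; the minimal form above is kept only for readability.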