
Adaptively Meshed Video Stabilization

Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, 2021-09, Vol. 31 (9), p. 3504-3517
Main Authors: Zhao, Minda; Ling, Qiang
Format: Article
Language: English
Abstract: Video stabilization is essential for improving the visual quality of shaky videos. Current video stabilization methods usually take feature trajectories in the background to estimate one global transformation matrix or several transformation matrices based on a fixed mesh, and warp shaky frames into their stabilized views. However, these methods may not model the shaky camera motion well in complicated scenes, such as scenes containing large foreground objects or strong parallax, and may result in notable visual artifacts in the stabilized videos. To resolve the above issues, this paper proposes an adaptively meshed method to stabilize a shaky video based on all of its feature trajectories and an adaptive blocking strategy. More specifically, we first extract the feature trajectories of the shaky video and then generate a triangle mesh according to the distribution of the feature trajectories in each frame. Then, the transformations between shaky frames and their stabilized views over all triangular grids of the mesh are calculated to stabilize the shaky video. Since more feature trajectories can usually be extracted from all of the regions, including both the background and foreground regions, a finer mesh will be obtained and provided for camera motion estimation and frame warping. We estimate the mesh-based transformations of each frame by solving a two-stage optimization problem. Moreover, foreground and background feature trajectories are no longer distinguished and both contribute to the estimation of the camera motion in the proposed optimization problem, yielding better estimation performance than previous works, particularly in challenging videos with large foreground objects or strong parallax. To further enhance the robustness of our method, we propose two adaptive weighting mechanisms to improve its spatial and temporal adaptability. Experimental results demonstrate the effectiveness of our method in producing visually pleasing stabilization effects in various challenging videos.
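A core step the abstract describes is computing the transformation between a shaky frame and its stabilized view over each triangular grid cell of the mesh. The paper itself solves a two-stage optimization; as a much simpler illustrative sketch of the underlying idea, the snippet below fits a single affine warp to matched feature locations by least squares, the kind of per-cell estimate such a pipeline builds on. All function names and the synthetic data are hypothetical, not from the paper.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src -> dst.

    src, dst: (N, 2) arrays of matched feature locations (N >= 3).
    Returns (A, t) such that dst ~= src @ A.T + t.
    """
    n = src.shape[0]
    # Design matrix for the 6 affine parameters: [x, y, 1] per point.
    X = np.hstack([src, np.ones((n, 1))])          # (N, 3)
    # Solve X @ P = dst for P (3x2), one column per output coordinate.
    P, *_ = np.linalg.lstsq(X, dst, rcond=None)
    A = P[:2].T                                    # (2, 2) linear part
    t = P[2]                                       # (2,) translation
    return A, t

# Synthetic check: recover a known warp from noiseless correspondences,
# standing in for the features inside one triangle of the adaptive mesh.
rng = np.random.default_rng(0)
src = rng.uniform(0, 100, size=(10, 2))
A_true = np.array([[1.02, 0.05], [-0.04, 0.99]])
t_true = np.array([3.0, -2.0])
dst = src @ A_true.T + t_true
A_est, t_est = fit_affine(src, dst)
print(np.allclose(A_est, A_true), np.allclose(t_est, t_true))  # True True
```

In the paper's setting one such transformation is estimated per triangle, with the triangles generated adaptively from the feature-trajectory distribution, so densely tracked regions get a finer mesh and a more flexible warp.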
DOI: 10.1109/TCSVT.2020.3040753
Publisher: IEEE, New York
CODEN: ITCTEM
ISSN: 1051-8215
EISSN: 1558-2205
Source: IEEE Electronic Library (IEL) Journals
Subjects: Cameras; Feature extraction; feature trajectories; Finite element method; Frames (data processing); Mesh generation; Motion simulation; Optimization; Parallax; Stabilization; Three-dimensional displays; Trajectory; Transformations; Transmission line matrix methods; triangle mesh; Two dimensional displays; Video; Video stabilization