
Very Low-Resolution Moving Vehicle Detection in Satellite Videos

This paper proposes a practical end-to-end neural network framework for detecting tiny moving vehicles in satellite videos with low imaging quality. Instability factors such as illumination changes, motion blur, and low contrast against the cluttered background make it difficult to distinguish true objects from noise and other point-shaped distractors. Moving vehicle detection in satellite videos can be carried out with background subtraction or frame differencing, but these methods tend to produce many false alarms and miss many true targets. Appearance-based detection is a possible alternative, yet it is poorly suited here because classifier models have weak discriminative power for top-view vehicles at such low resolution. This article addresses these issues by integrating motion information from adjacent frames to facilitate the extraction of semantic features, and by incorporating a Transformer to refine the features for keypoint estimation and scale prediction. The proposed model identifies actual moving targets well and suppresses interference from stationary targets and background. Experiments and evaluations on satellite videos show that the proposed approach accurately locates targets with weak feature attributes and improves detection performance in complex scenarios.
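
For readers who want a concrete picture of the pipeline the abstract describes, the following is a minimal, hypothetical PyTorch sketch. It is not the authors' code: the module layout, channel sizes, the frame-difference motion cue, and all names (e.g. TinyMovingVehicleDetector) are assumptions based solely on the abstract's description of fusing adjacent-frame motion information with appearance features, refining them with a Transformer, and predicting keypoints plus scales.

```python
# Hypothetical sketch (not the authors' released code): fuse motion cues from adjacent
# frames with appearance features, refine with a Transformer encoder, and predict a
# keypoint (center) heatmap plus a per-location scale map for tiny moving vehicles.
import torch
import torch.nn as nn


class TinyMovingVehicleDetector(nn.Module):
    def __init__(self, channels: int = 64, num_heads: int = 4, num_layers: int = 2):
        super().__init__()
        # Appearance branch: current frame (3 channels), downsampled once.
        self.appearance = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Motion branch: absolute differences against the two adjacent frames (6 channels).
        self.motion = nn.Sequential(
            nn.Conv2d(6, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Transformer encoder refines the fused feature map (tokens = spatial positions).
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=2 * channels, nhead=num_heads, batch_first=True
        )
        self.refine = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        # Prediction heads: keypoint heatmap and a (width, height) scale map.
        self.heatmap_head = nn.Conv2d(2 * channels, 1, 1)
        self.scale_head = nn.Conv2d(2 * channels, 2, 1)

    def forward(self, prev_frame, curr_frame, next_frame):
        # Temporal differences emphasize movers and suppress stationary distractors.
        diffs = torch.cat(
            [(curr_frame - prev_frame).abs(), (next_frame - curr_frame).abs()], dim=1
        )
        feat = torch.cat([self.appearance(curr_frame), self.motion(diffs)], dim=1)
        b, c, h, w = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)                     # (B, H*W, C)
        refined = self.refine(tokens).transpose(1, 2).reshape(b, c, h, w)
        return torch.sigmoid(self.heatmap_head(refined)), self.scale_head(refined)


if __name__ == "__main__":
    model = TinyMovingVehicleDetector()
    prev_f, curr_f, next_f = (torch.randn(1, 3, 64, 64) for _ in range(3))
    heatmap, scales = model(prev_f, curr_f, next_f)
    print(heatmap.shape, scales.shape)  # (1, 1, 32, 32) and (1, 2, 32, 32)
```

In this reading, vehicle centers would be recovered as local maxima of the heatmap and box sizes from the scale map at those locations, in the spirit of keypoint-based detectors; the paper itself should be consulted for the actual architecture and training losses.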

Bibliographic Details
Published in: IEEE Transactions on Geoscience and Remote Sensing, 2022-01, Vol. 60, p. 1-1
Main Authors: Pi, Zhaoliang, Jiao, Licheng, Liu, Fang, Liu, Xu, Li, Lingling, Hou, Biao, Yang, Shuyuan
Format: Article
Language: English
Subjects: Background noise; Detection; end-to-end neural network framework; False alarms; Feature extraction; integrate motion information; Interference; low-resolution; Motion stability; Moving targets; moving vehicle; Neural networks; Object recognition; Production methods; Resolution; satellite video; Satellites; Semantics; Subtraction; Target detection; Transformer; Transformers; Vehicle detection; Vehicles; Video; Videos
DOI: 10.1109/TGRS.2022.3179502
ISSN: 0196-2892
EISSN: 1558-0644