
Meta-Knowledge Learning and Domain Adaptation for Unseen Background Subtraction

Bibliographic Details
Published in: IEEE Transactions on Image Processing, 2021, Vol. 30, pp. 9058-9068
Main Authors: Zhang, Jin; Zhang, Xi; Zhang, Yanyan; Duan, Yexin; Li, Yang; Pan, Zhisong
Format: Article
Language: English
Description:
Background subtraction is a classic video processing task that pervades numerous visual applications such as video surveillance and traffic monitoring. Given the diversity and variability of real application scenes, an ideal background subtraction model should be robust across scenarios. Even though deep-learning approaches have demonstrated unprecedented improvements, they often fail to generalize to unseen scenarios and are therefore less suitable for extensive deployment. In this work, we propose to tackle cross-scene background subtraction via a two-phase framework that combines meta-knowledge learning and domain adaptation. Specifically, observing that meta-knowledge (i.e., scene-independent common knowledge) is the cornerstone of generalization to unseen scenes, we draw on traditional frame differencing algorithms and design a deep difference network (DDN) that encodes meta-knowledge, especially temporal-change knowledge, from diverse cross-scene data (the source domain) without intermittent foreground motion patterns. In addition, we explore a self-training domain adaptation strategy based on iterative evolution. With iteratively updated pseudo-labels, the DDN is continuously fine-tuned and evolves progressively toward unseen scenes (the target domain) in an unsupervised fashion. Our framework can be deployed on unseen scenes without relying on their annotations. As evidenced by our experiments on the CDnet2014 dataset, it brings a significant improvement to background subtraction. Our method has a favorable processing speed (70 fps) and outperforms the best unsupervised algorithm and the top supervised algorithm designed for unseen scenes by 9% and 3%, respectively.
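
The abstract names two algorithmic ingredients concrete enough to sketch: the frame-differencing cue that the deep difference network (DDN) builds on, and the iterative self-training loop that adapts the network to an unseen scene with pseudo-labels. The minimal NumPy sketch below illustrates both; the `model.predict`/`model.fine_tune` interface, the `threshold` and `confidence` values, and the per-pixel labeling rule are illustrative assumptions, not the paper's actual architecture or hyper-parameters.

```python
import numpy as np

def frame_difference_mask(prev_frame, curr_frame, threshold=25.0):
    """Classic frame differencing: mark pixels whose intensity changes
    between consecutive frames as foreground (1) vs. background (0).
    This is the scene-independent temporal-change cue the DDN encodes."""
    diff = np.abs(curr_frame.astype(np.float32) - prev_frame.astype(np.float32))
    if diff.ndim == 3:                 # color input: average over channels
        diff = diff.mean(axis=2)
    return (diff > threshold).astype(np.uint8)

def self_training_round(model, target_frames, confidence=0.9):
    """One round of self-training on an unseen (target) scene: confident
    predictions become pseudo-labels, uncertain pixels are ignored, and
    the model is fine-tuned on the result.  `predict` and `fine_tune`
    are a hypothetical interface, not the paper's API."""
    pseudo_labels = []
    for frame in target_frames:
        prob = model.predict(frame)                      # per-pixel foreground probability
        label = np.full(prob.shape, -1, dtype=np.int8)   # -1 = ignored in the loss
        label[prob >= confidence] = 1                    # confident foreground
        label[prob <= 1.0 - confidence] = 0              # confident background
        pseudo_labels.append(label)
    model.fine_tune(target_frames, pseudo_labels)
    return model

def adapt_to_unseen_scene(model, target_frames, num_rounds=3):
    """Iterative evolution: pseudo-labels are regenerated with the
    updated model each round, so labels and model improve together."""
    for _ in range(num_rounds):
        model = self_training_round(model, target_frames)
    return model
```

The -1 "ignore" label mirrors the common self-training practice of excluding low-confidence pixels from the fine-tuning loss, which is what lets the pseudo-labels improve across rounds instead of reinforcing early mistakes.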
DOI: 10.1109/TIP.2021.3122102
ISSN: 1057-7149
EISSN: 1941-0042
PMID: 34714746
Source: IEEE Xplore (Online service)
Subjects:
Adaptation
Adaptation models
Algorithms
Annotations
Background subtraction
deep difference network
Deep learning
domain adaptation
Domains
frame differencing algorithm
Heuristic algorithms
Image color analysis
Image processing
Inference algorithms
Iterative methods
Knowledge
Machine learning
self-training
Semantics
Subtraction
Task analysis
Traffic surveillance
Training data
unseen scene
Video
Visual tasks