Adaptive Learning for Target Tracking and True Linking Discovering Across Multiple Non-Overlapping Cameras
To track targets across networked cameras with disjoint views, one of the major problems is learning the spatio-temporal relationship and the appearance relationship, where the appearance relationship is usually modeled as a brightness transfer function. Traditional methods that learn these relationships from hand-labeled correspondences or through a batch-learning procedure are applicable only when the environment remains unchanged. However, in many situations, such as lighting changes, the environment varies considerably and these methods fail. In this paper, we propose an unsupervised method that learns adaptively and can be applied to long-term monitoring. Furthermore, we propose a method that avoids weak links and discovers the true valid links among the entry/exit zones of the cameras from the correspondences. Experimental results demonstrate that our method outperforms existing methods in learning both the spatio-temporal and the appearance relationships, and achieves high tracking accuracy in both indoor and outdoor environments.
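As background for the abstract above, the appearance relationship between a pair of cameras is often captured as a brightness transfer function (BTF): a lookup that maps brightness values observed in one camera to the values expected in the other. The sketch below is only a generic illustration of one common way to estimate such a mapping, by matching the normalized cumulative brightness histograms of corresponding observations; it is not the adaptive, unsupervised procedure proposed in this paper, and the function and variable names (estimate_btf, crop_cam_a, crop_cam_b) are hypothetical.

```python
import numpy as np

def estimate_btf(pixels_a, pixels_b, levels=256):
    """Generic brightness-transfer-function estimate between two cameras.

    pixels_a, pixels_b: integer brightness samples (0..levels-1) of the
    same object observed in camera A and camera B. Returns a lookup table
    btf such that btf[v] is the brightness expected in camera B when
    camera A observes v. This is plain cumulative-histogram matching,
    shown for illustration only; it is not the paper's adaptive method.
    """
    hist_a, _ = np.histogram(pixels_a, bins=levels, range=(0, levels))
    hist_b, _ = np.histogram(pixels_b, bins=levels, range=(0, levels))
    cdf_a = np.cumsum(hist_a) / max(hist_a.sum(), 1)  # normalized CDF, camera A
    cdf_b = np.cumsum(hist_b) / max(hist_b.sum(), 1)  # normalized CDF, camera B

    # For each brightness level in camera A, pick the level in camera B
    # whose cumulative frequency first reaches the same value.
    btf = np.searchsorted(cdf_b, cdf_a, side="left")
    return np.clip(btf, 0, levels - 1)

# Hypothetical usage with two grayscale crops of the same tracked person:
# btf = estimate_btf(crop_cam_a.ravel(), crop_cam_b.ravel())
# mapped_crop = btf[crop_cam_a]  # crop_cam_a re-rendered in camera B's brightness
```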
Published in: | IEEE transactions on multimedia 2011-08, Vol.13 (4), p.625-638 |
---|---|
Main Authors: | CHEN, Kuan-Wen; LAI, Chih-Chuan; LEE, Pei-Jyun; CHEN, Chu-Song; HUNG, Yi-Ping |
Format: | Article |
Language: | English |
Subjects: | Applied sciences; Artificial intelligence; Brightness; Brightness transfer function; camera network; Cameras; Computer science, control theory, systems; Exact sciences and technology; Learning; Lighting; Links; Mathematical models; Methods; Monitoring; Multimedia; non-overlapping cameras; Pattern recognition, digital image processing, computational geometry; spatio-temporal relationship; Target tracking; Topology; Tracking; Transfer functions; visual surveillance; visual tracking |
container_end_page | 638
---|---|
container_issue | 4 |
container_start_page | 625 |
container_title | IEEE transactions on multimedia |
container_volume | 13 |
creator | CHEN, Kuan-Wen; LAI, Chih-Chuan; LEE, Pei-Jyun; CHEN, Chu-Song; HUNG, Yi-Ping |
description | To track targets across networked cameras with disjoint views, one of the major problems is learning the spatio-temporal relationship and the appearance relationship, where the appearance relationship is usually modeled as a brightness transfer function. Traditional methods that learn these relationships from hand-labeled correspondences or through a batch-learning procedure are applicable only when the environment remains unchanged. However, in many situations, such as lighting changes, the environment varies considerably and these methods fail. In this paper, we propose an unsupervised method that learns adaptively and can be applied to long-term monitoring. Furthermore, we propose a method that avoids weak links and discovers the true valid links among the entry/exit zones of the cameras from the correspondences. Experimental results demonstrate that our method outperforms existing methods in learning both the spatio-temporal and the appearance relationships, and achieves high tracking accuracy in both indoor and outdoor environments. |
doi_str_mv | 10.1109/TMM.2011.2131639 |
format | article |
identifier | ISSN: 1520-9210 |
ispartof | IEEE transactions on multimedia, 2011-08, Vol.13 (4), p.625-638 |
issn | 1520-9210; 1941-0077 (EISSN) |
language | eng |
recordid | cdi_proquest_journals_878270047 |
source | IEEE Xplore (Online service) |
subjects | Applied sciences; Artificial intelligence; Brightness; Brightness transfer function; camera network; Cameras; Computer science, control theory, systems; Exact sciences and technology; Learning; Lighting; Links; Mathematical models; Methods; Monitoring; Multimedia; non-overlapping cameras; Pattern recognition, digital image processing, computational geometry; spatio-temporal relationship; Target tracking; Topology; Tracking; Transfer functions; visual surveillance; visual tracking |
title | Adaptive Learning for Target Tracking and True Linking Discovering Across Multiple Non-Overlapping Cameras |