NLOOK: a computational attention model for robot vision
Computational models of visual attention, originally proposed as cognitive models of human attention, are now used as front-ends to robotic vision systems such as automatic object recognition and landmark detection. However, these applications have requirements different from those of the original models. In particular, a robotic vision system must be relatively insensitive to 2D similarity transforms of the image (in-plane translations, rotations, reflections, and scalings), and it should select fixation points in scale as well as in position. This paper proposes a new visual attention model, called NLOOK. The model is validated through several experiments, which show that it is less sensitive to 2D similarity transforms than two other well-known, publicly available visual attention models, NVT and SAFE. Moreover, NLOOK selects more accurate fixations than the other models, and it can also select the scale of each fixation. The proposed model is thus a good tool for robot vision systems.
| Published in: | Journal of the Brazilian Computer Society, 2009-09, Vol. 15 (3), p. 3-17 |
|---|---|
| Main Authors: | Heinen, Milton Roberto; Engel, Paulo Martins |
| Format: | Article |
| Language: | English |
| Subjects: | Computer Science; Computer Science, Information Systems; Computer System Implementation; Data Structures; Operating Systems; Simulation and Modeling |
| DOI: | 10.1007/BF03194502 |
| ISSN: | 0104-6500; EISSN: 1678-4804 |
| Publisher: | Springer-Verlag, London |
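For readers unfamiliar with this family of models, the sketch below illustrates the center-surround saliency pipeline that Itti-style attention models such as NVT build on, and which NLOOK refines. It is a minimal illustration, not the authors' implementation: the intensity-only channel, the pyramid depths, the inhibition radius, and the file name `scene.png` are all assumptions made here for demonstration, and NLOOK's distinguishing contributions (reduced sensitivity to 2D similarity transforms and per-fixation scale selection) are not reproduced.

```python
import numpy as np
import cv2  # assumes OpenCV is installed (pip install opencv-python)


def saliency_map(image_bgr, levels=6):
    """Crude intensity-only saliency via center-surround differences on a
    Gaussian pyramid -- the general scheme of Itti-style models like NVT.
    Real models add color and orientation channels; NLOOK additionally
    selects a scale for each fixation, which this sketch omits."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    pyramid = [gray]
    for _ in range(levels):
        pyramid.append(cv2.pyrDown(pyramid[-1]))

    h, w = gray.shape
    saliency = np.zeros((h, w), np.float32)
    # "Center" = a fine scale, "surround" = a coarser scale; their absolute
    # difference highlights regions that stand out from their context.
    for c in (2, 3):
        for delta in (2, 3):
            s = c + delta
            if s >= len(pyramid):
                continue
            center = cv2.resize(pyramid[c], (w, h))
            surround = cv2.resize(pyramid[s], (w, h))
            saliency += np.abs(center - surround)

    peak = saliency.max()
    return saliency / peak if peak > 0 else saliency


def next_fixation(saliency, radius=32):
    """Winner-take-all fixation selection with inhibition of return:
    pick the saliency maximum, then suppress a disc around it (in place)
    so the next call attends somewhere else."""
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    cv2.circle(saliency, (int(x), int(y)), radius, 0.0, thickness=-1)
    return int(x), int(y)


if __name__ == "__main__":
    img = cv2.imread("scene.png")  # hypothetical test image
    if img is not None:
        sal = saliency_map(img)
        print("first fixation:", next_fixation(sal))
        print("second fixation:", next_fixation(sal))
```

Calling `next_fixation` repeatedly on the same map yields a fixation sequence, mimicking the winner-take-all plus inhibition-of-return scanpath that the attention models compared in the article produce.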