Toward Deep Universal Sketch Perceptual Grouper
Human free-hand sketches provide useful data for studying human perceptual grouping, where grouping principles such as the Gestalt laws of grouping are naturally in play during both the perception and sketching stages. In this paper, we make the first attempt to develop a universal sketch perceptual grouper, that is, a grouper that can be applied to sketches of any category created with any drawing style and ability, to group constituent strokes/segments into semantically meaningful object parts. The first obstacle to achieving this goal is the lack of large-scale datasets with grouping annotation. To overcome this, we contribute the largest sketch perceptual grouping dataset to date, consisting of 20,000 unique sketches evenly distributed over 25 object categories. Furthermore, we propose a novel deep perceptual grouping model learned with both generative and discriminative losses. The generative loss improves the generalization ability of the model, while the discriminative loss guarantees both local and global grouping consistency. Extensive experiments demonstrate that the proposed grouper significantly outperforms the state-of-the-art competitors. In addition, we show that our grouper is useful for a number of sketch analysis tasks, including sketch semantic segmentation, synthesis, and fine-grained sketch-based image retrieval.
Published in: | IEEE Transactions on Image Processing, 2019-07, Vol. 28 (7), p. 3219-3231 |
---|---|
Main Authors: | Ke Li; Kaiyue Pang; Yi-Zhe Song; Tao Xiang; Timothy M. Hospedales; Honggang Zhang |
Format: | Article |
Language: | English |
Subjects: | Analytical models; Data models; dataset; deep grouping model; Image annotation; Image management; Image retrieval; Image segmentation; Semantics; Sketch perceptual grouping; Sketches; Task analysis; Training; universal grouper; Visualization |
ISSN: | 1057-7149 |
EISSN: | 1941-0042 |
DOI: | 10.1109/TIP.2019.2895155 |
PMID: | 30703021 |
Online Access: | https://ieeexplore.ieee.org/document/8626530 |
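The two-loss training recipe summarized in the abstract (a generative reconstruction objective for generalization, plus a discriminative objective for pairwise grouping consistency) can be illustrated with a minimal, self-contained sketch. This is not the paper's architecture: the LSTM encoder, bilinear pairwise head, tensor shapes, and equal loss weighting below are all hypothetical stand-ins chosen only to show how the two losses combine.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SketchGrouper(nn.Module):
    """Toy grouper trained with a generative (reconstruction) loss plus a
    discriminative (pairwise same-group) loss. All sizes are illustrative."""
    def __init__(self, point_dim=2, hidden=64):
        super().__init__()
        self.encoder = nn.LSTM(point_dim, hidden, batch_first=True)
        self.decoder = nn.Linear(hidden, point_dim)      # generative branch
        self.pair_head = nn.Bilinear(hidden, hidden, 1)  # discriminative branch

    def forward(self, points):                  # points: (B, T, point_dim)
        h, _ = self.encoder(points)             # (B, T, hidden)
        recon = self.decoder(h)                 # reconstruct the input points
        B, T, H = h.shape
        # Score every (i, j) pair of segment embeddings for "same group".
        hi = h.unsqueeze(2).expand(B, T, T, H).reshape(-1, H)
        hj = h.unsqueeze(1).expand(B, T, T, H).reshape(-1, H)
        pair_logits = self.pair_head(hi, hj).view(B, T, T)
        return recon, pair_logits

# Toy usage: random tensors stand in for stroke segments and group labels.
model = SketchGrouper()
points = torch.randn(4, 10, 2)                         # 4 sketches, 10 segments each
same_group = torch.randint(0, 2, (4, 10, 10)).float()  # 1 = same part, 0 = different
recon, pair_logits = model(points)
loss = (F.mse_loss(recon, points)                      # generative loss
        + F.binary_cross_entropy_with_logits(pair_logits, same_group))  # discriminative loss
loss.backward()
```

Pairwise same-group scores like `pair_logits` can then be thresholded and transitively closed to cut a sketch into parts; this is one common way to turn pairwise consistency scores into a grouping, not necessarily the procedure used in the paper.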