
IconShop: Text-Guided Vector Icon Synthesis with Autoregressive Transformers

Scalable Vector Graphics (SVG) is a popular vector image format that offers good support for interactivity and animation. Despite its appealing characteristics, creating custom SVG content can be challenging for users due to the steep learning curve required to understand SVG grammars or get familiar with professional editing software. Recent advancements in text-to-image generation have inspired researchers to explore vector graphics synthesis using either image-based methods (i.e., text -> raster image -> vector graphics) combining text-to-image generation models with image vectorization, or language-based methods (i.e., text -> vector graphics script) through pretrained large language models. However, these methods still suffer from limitations in terms of generation quality, diversity, and flexibility. In this paper, we introduce IconShop, a text-guided vector icon synthesis method using autoregressive transformers. The key to the success of our approach is to sequentialize and tokenize SVG paths (and textual descriptions as guidance) into a uniquely decodable token sequence. With that, we are able to fully exploit the sequence learning power of autoregressive transformers, while enabling both unconditional and text-conditioned icon synthesis. Through standard training to predict the next token on a large-scale vector icon dataset accompanied by textual descriptions, the proposed IconShop consistently exhibits better icon synthesis capability than existing image-based and language-based methods both quantitatively and qualitatively. Meanwhile, we observe a dramatic improvement in generation diversity, which is validated by the objective Uniqueness and Novelty measures. More importantly, we demonstrate the flexibility of IconShop with multiple novel icon synthesis tasks, including icon editing, icon interpolation, icon semantic combination, and icon design auto-suggestion.
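The core technical idea described above is to flatten SVG path commands, together with the accompanying text prompt, into a single uniquely decodable token sequence that an autoregressive transformer can learn with ordinary next-token prediction. The Python sketch below shows one plausible way such a tokenization could work; the command subset, quantization grid, special tokens, and helper names (quantize, tokenize_icon) are illustrative assumptions made for this record, not the paper's actual scheme.

    # Minimal sketch (not the authors' implementation) of turning an SVG-like path
    # and a text prompt into one flat token sequence for next-token prediction.
    # Coordinates are quantized to a small integer grid so every token comes from a
    # finite vocabulary; command markers and a separator keep the sequence
    # uniquely decodable. All names and offsets here are illustrative.

    from typing import List, Tuple

    GRID = 100                                       # quantization resolution for coordinates
    CMD_TOKENS = {"M": 0, "L": 1, "C": 2, "Z": 3}    # subset of SVG path commands
    SEP, EOS = 4, 5                                  # separator between text and path, end of icon
    COORD_OFFSET = 6                                 # coordinate tokens start after the special tokens
    TEXT_OFFSET = COORD_OFFSET + GRID                # word tokens start after coordinate tokens


    def quantize(xy: Tuple[float, float], size: float = 100.0) -> List[int]:
        """Map a point in [0, size]^2 to integer grid tokens."""
        return [COORD_OFFSET + min(GRID - 1, int(round(v / size * (GRID - 1)))) for v in xy]


    def tokenize_icon(text: str,
                      path: List[Tuple[str, List[Tuple[float, float]]]],
                      vocab: dict) -> List[int]:
        """Flatten a text prompt and an SVG-like path into one token sequence."""
        tokens = [TEXT_OFFSET + vocab[w] for w in text.lower().split()]
        tokens.append(SEP)
        for cmd, points in path:
            tokens.append(CMD_TOKENS[cmd])           # command marker
            for pt in points:                        # its quantized coordinate arguments
                tokens.extend(quantize(pt))
        tokens.append(EOS)
        return tokens


    if __name__ == "__main__":
        vocab = {"camera": 0, "icon": 1}
        # "M 10 10 L 90 10 L 90 90 L 10 90 Z" -- a simple square outline
        square = [("M", [(10, 10)]), ("L", [(90, 10)]), ("L", [(90, 90)]),
                  ("L", [(10, 90)]), ("Z", [])]
        print(tokenize_icon("camera icon", square, vocab))

A model trained on sequences like this could generate icons unconditionally (starting from the separator) or conditioned on the text tokens placed before it, which is the setting the abstract describes.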


Bibliographic Details
Published in: arXiv.org, 2023-06
Main Authors: Wu, Ronghuan, Su, Wanchao, Ma, Kede, Liao, Jing
Format: Article
Language: English
EISSN: 2331-8422
Subjects: Animation; Descriptions; Flexibility; Grammars; Image processing; Inspection; Learning curves; Synthesis; Transformers; Visual observation