Net2Vis - A Visual Grammar for Automatically Generating Publication-Tailored CNN Architecture Visualizations

To convey neural network architectures in publications, appropriate visualizations are of great importance. While most current deep learning papers contain such visualizations, these are usually handcrafted just before publication, which results in a lack of a common visual grammar, significant time investment, errors, and ambiguities. Current automatic network visualization tools focus on debugging the network itself and are not ideal for generating publication visualizations. Therefore, we present an approach to automate this process by translating network architectures specified in Keras into visualizations that can directly be embedded into any publication. To do so, we propose a visual grammar for convolutional neural networks (CNNs), which has been derived from an analysis of such figures extracted from all ICCV and CVPR papers published between 2013 and 2019. The proposed grammar incorporates visual encoding, network layout, layer aggregation, and legend generation. We have further realized our approach in an online system available to the community, which we have evaluated through expert feedback and a quantitative study. It not only reduces the time needed to generate network visualizations for publications, but also enables a unified and unambiguous visualization design.
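
The abstract describes translating network architectures specified in Keras into publication figures. As a purely illustrative sketch (not part of this record, and not Net2Vis's own code), a small CNN of the kind such a tool consumes could be defined with the standard tensorflow.keras API; the layer choices and sizes below are assumptions:

    from tensorflow import keras
    from tensorflow.keras import layers

    # Hypothetical example model; a tool like Net2Vis would translate a
    # definition like this into a publication-ready architecture diagram.
    model = keras.Sequential([
        keras.Input(shape=(224, 224, 3)),         # RGB input image
        layers.Conv2D(32, 3, activation="relu"),  # feature extraction
        layers.MaxPooling2D(),                    # spatial downsampling
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(10, activation="softmax"),   # class scores
    ])
    model.summary()  # textual layer listing; Net2Vis renders a figure instead

Per the abstract, the proposed grammar would additionally handle layer aggregation (e.g., collapsing the repeated convolution/pooling blocks above) and legend generation for such a model.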

Bibliographic Details
Published in: IEEE Transactions on Visualization and Computer Graphics, 2021-06, Vol. 27 (6), p. 2980-2991
Main Authors: Bauerle, Alex; van Onzenoodt, Christian; Ropinski, Timo
Format: Article
Language: English
Subjects: architecture visualization; Artificial neural networks; Computer architecture; Data visualization; Documents; Encoding; Grammar; graph layouting; Layout; Network architecture; Neural networks; On-line systems; Visualization
DOI: 10.1109/TVCG.2021.3057483
PMID: 33556010
CODEN: ITVGEA
Publisher: IEEE (United States)
ISSN: 1077-2626
EISSN: 1941-0506
Source: IEEE Electronic Library (IEL) Journals