
Representable Matrices: Enabling High Accuracy Analog Computation for Inference of DNNs Using Memristors

Analog computing based on memristor technology is a promising solution to accelerating the inference phase of deep neural networks (DNNs). A fundamental problem is to map an arbitrary matrix to a memristor crossbar array (MCA) while maximizing the resulting computational accuracy. The state-of-the-art…


Bibliographic Details
Main Authors: Zhang, Baogang, Uysal, Necati, Fan, Deliang, Ewetz, Rickard
Format: Conference Proceeding
Language:English
Subjects:
Online Access: Request full text
container_start_page 538
container_end_page 543
creator Zhang, Baogang; Uysal, Necati; Fan, Deliang; Ewetz, Rickard
description Analog computing based on memristor technology is a promising solution to accelerating the inference phase of deep neural networks (DNNs). A fundamental problem is to map an arbitrary matrix to a memristor crossbar array (MCA) while maximizing the resulting computational accuracy. The state-of-the-art mapping technique is based on a heuristic that only guarantees to produce the correct output for two input vectors. In this paper, a technique that aims to produce the correct output for every input vector is proposed, which involves specifying the memristor conductance values and a scaling factor realized by the peripheral circuitry. The key insight of the paper is that the conductance matrix realized by an MCA is only required to be proportional to the target matrix. The selection of the scaling factor between the two regulates the utilization of the programmable memristor conductance range and the representability of the target matrix. Consequently, the scaling factor is set to balance precision and value range errors. Moreover, a technique of converting conductance values into state variables and vice versa is proposed to handle memristors with non-ideal device characteristics. Compared with the state-of-the-art technique, the proposed mapping results in 4X-9X smaller errors. As a result, the classification accuracy of a seven-layer convolutional neural network (CNN) on CIFAR-10 is improved from 20.5% to 71.8%.
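The central trade-off in the description can be stated concretely: scaling the target matrix up uses more of the programmable conductance range and reduces precision (quantization) error, but pushes more entries out of range and increases value-range (clipping) error. The Python sketch below only illustrates that trade-off; the conductance range, number of discrete levels, non-negative toy matrix, error metric, and brute-force scale sweep are all assumed for illustration and are not the mapping algorithm proposed in the paper.

```python
# Minimal sketch (not the authors' algorithm): a scaling factor maps the
# target matrix W to conductances G ~ scale * W, which are then clipped to
# the programmable range (value-range error) and quantized to a finite set
# of levels (precision error). G_MIN, G_MAX, N_LEVELS are assumptions.
import numpy as np

G_MIN, G_MAX = 1e-6, 1e-4   # assumed programmable conductance range (S)
N_LEVELS = 32               # assumed number of programmable levels

def map_matrix(W, scale):
    """Map W to conductances proportional to W, then clip and quantize."""
    G_ideal = scale * W
    G_clipped = np.clip(G_ideal, G_MIN, G_MAX)            # value-range error
    levels = np.linspace(G_MIN, G_MAX, N_LEVELS)
    idx = np.argmin(np.abs(G_clipped[..., None] - levels), axis=-1)
    return levels[idx]                                     # precision error

def mapping_error(W, scale):
    """Relative Frobenius error between the realized matrix G/scale and W."""
    G = map_matrix(W, scale)
    return np.linalg.norm(G / scale - W) / np.linalg.norm(W)

# Sweep candidate scaling factors and keep the one with the smallest error.
W = np.abs(np.random.randn(16, 16))   # toy non-negative target matrix
scales = np.logspace(-7, -3, 50)
best = min(scales, key=lambda s: mapping_error(W, s))
print(f"best scale: {best:.3e}  relative error: {mapping_error(W, best):.3f}")
```

In practice, negative matrix entries are handled outside such a sketch (for example with paired columns), and the paper additionally proposes converting conductance values to device state variables to cope with non-ideal memristor characteristics.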
doi_str_mv 10.1109/ASP-DAC47756.2020.9045101
format conference_proceeding
publisher Piscataway, NJ, USA: IEEE Press
isbn 9781728141237
fulltext fulltext_linktorsrc
identifier ISBN: 1728141230
ispartof 2020 25th Asia and South Pacific Design Automation Conference (ASP-DAC), 2020, p.538-543
issn 2153-697X
language eng
recordid cdi_ieee_primary_9045101
source IEEE Xplore All Conference Series
subjects Acceleration
Asia
Convolutional neural networks
Design automation
Matrix converters
Memristors
Neural networks
title Representable Matrices: Enabling High Accuracy Analog Computation for Inference of DNNs Using Memristors
url http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-06T23%3A31%3A01IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-acm_CHZPO&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=proceeding&rft.atitle=Representable%20Matrices:%20Enabling%20High%20Accuracy%20Analog%20Computation%20for%20Inference%20of%20DNNs%20Using%20Memristors&rft.btitle=2020%2025th%20Asia%20and%20South%20Pacific%20Design%20Automation%20Conference%20(ASP-DAC)&rft.au=Zhang,%20Baogang&rft.date=2020-01-01&rft.spage=538&rft.epage=543&rft.pages=538-543&rft.eissn=2153-697X&rft.isbn=1728141230&rft.isbn_list=9781728141237&rft_id=info:doi/10.1109/ASP-DAC47756.2020.9045101&rft.eisbn=1728141230&rft.eisbn_list=9781728141237&rft_dat=%3Cacm_CHZPO%3Eacm_books_10_1109_ASP_DAC47756_2020_9045101%3C/acm_CHZPO%3E%3Cgrp_id%3Ecdi_FETCH-LOGICAL-a2611-bc9c8bdbc557da8c34fddda19a8d15690a37082208a37e29f2ec288e2be5eb953%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_id=info:pmid/&rft_ieee_id=9045101&rfr_iscdi=true