
A Novel Voltage-Accumulation Vector-Matrix Multiplication Architecture Using Resistor-shunted Floating Gate Flash Memory Device for Low-power and High-density Neural Network Applications

Bibliographic Details
Main Authors: Lin, Yu-Yu, Lee, Feng-Min, Lee, Ming-Hsiu, Chen, Wei-Chen, Lung, Hsiang-Lan, Wang, Keh-Chung, Lu, Chih-Yuan
Format: Conference Proceeding
Language: English
Description
Summary: We propose a novel processing-in-memory (PIM) architecture based on the voltage-summation concept to accelerate vector-matrix multiplication for neural network (NN) applications. The core device is formed by adding a buried shunt resistor to a floating gate Flash memory device. The NN string is constructed the same way as in NAND Flash, by connecting the core devices in series. In perceptron operation, the weighting factors are stored in the floating gate devices, and the sum-of-products is readily obtained by summing the voltage drops of the cells in each NN string. The energy consumption for 128 multiply-and-sum operations within a string can be as low as 0.2 pJ. Finally, because the weight values are stored in non-volatile memory, there is no need to move data around, which greatly improves performance and energy efficiency for neural network applications.
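
To make the voltage-summation idea concrete, the following is a minimal behavioral sketch in Python. It is not taken from the paper: it assumes a simplified binary cell model with two illustrative resistance values (R_ON, R_SHUNT) and a fixed read current (I_STRING), in which each series-connected cell adds either a small or a large voltage drop depending on the product of its stored weight and the applied input. Because the same current flows through every cell in the string, the drops add, and the total string voltage encodes the sum-of-products for 128 cells.

```python
import numpy as np

# Behavioral sketch of voltage-accumulation vector-matrix multiplication on
# one series "NN string". All component values and the two-resistance cell
# model are illustrative assumptions, not figures from the paper.

R_ON = 1e3       # assumed resistance when the floating gate transistor conducts (ohms)
R_SHUNT = 1e5    # assumed buried shunt resistance seen when the transistor is off (ohms)
I_STRING = 1e-6  # assumed read current forced through the series string (amperes)

def cell_resistance(weight_bit: int, input_bit: int) -> float:
    """Binary cell model: the cell presents the high shunt resistance only when
    both the stored weight and the applied input are 1; otherwise the
    transistor channel bypasses the shunt resistor."""
    return R_SHUNT if (weight_bit and input_bit) else R_ON

def string_voltage(weights: np.ndarray, inputs: np.ndarray) -> float:
    """Total voltage drop across one NN string: the same current flows through
    every series cell, so the individual drops simply add."""
    resistances = np.array([cell_resistance(w, x) for w, x in zip(weights, inputs)])
    return I_STRING * resistances.sum()

def decode_dot_product(v_string: float, n_cells: int) -> float:
    """Recover the binary dot product from the string voltage by removing the
    constant baseline contributed by n_cells low-resistance cells."""
    baseline = I_STRING * R_ON * n_cells
    return (v_string - baseline) / (I_STRING * (R_SHUNT - R_ON))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 128
    w = rng.integers(0, 2, size=n)   # weights stored in the floating gate cells
    x = rng.integers(0, 2, size=n)   # input activations applied to the string
    v = string_voltage(w, x)
    print("string voltage (V):", v)
    print("decoded sum-of-products:", round(decode_dot_product(v, n)))
    print("reference dot product  :", int(w @ x))
```

In this toy model the multiply-and-sum for all 128 cells is obtained from a single analog voltage readout rather than 128 separate digital operations, which is the mechanism the abstract credits for the low (0.2 pJ) energy per string.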
ISSN: 2156-017X
DOI: 10.1109/IEDM.2018.8614688