Invited Talk: Re-Engineering Computing with Neuro-Inspired Learning: Devices, Circuits, and Systems
Main Authors:
Format: Conference Proceeding
Language: English
Summary: Advances in machine learning, notably deep learning, have led to computers matching or surpassing human performance in several cognitive tasks, including vision, speech, and natural language processing. However, implementations of such neural algorithms in conventional "von Neumann" architectures are several orders of magnitude more expensive in area and power than the biological brain. Hence, we need fundamentally new approaches to sustain exponential growth in performance at high energy efficiency beyond the end of the CMOS roadmap, in an era of 'data deluge' and emergent data-centric applications. Exploring this new paradigm of computing necessitates a multi-disciplinary approach: new learning algorithms inspired by neuroscientific principles, network architectures best suited to such algorithms, new hardware techniques that achieve orders-of-magnitude improvements in energy consumption, and nanoscale devices that closely mimic the neuronal and synaptic operations of the brain, leading to a better match between the hardware substrate and the model of computation. In this presentation, we will discuss our work on spintronic device structures consisting of single-domain and domain-wall-motion-based devices for mimicking neuronal and synaptic units. We will discuss implementations of different neural operations with varying degrees of bio-fidelity (from "non-spiking" to "spiking" networks) and of on-chip learning mechanisms (Spike-Timing Dependent Plasticity). Additionally, we propose probabilistic neural and synaptic computing platforms that leverage the stochastic device physics of spin devices arising from thermal noise. System-level simulations indicate a ~100x improvement in energy consumption for such spintronic implementations over corresponding CMOS implementations across different computing workloads. Complementary to these device efforts, we have explored learning algorithms including stochastic learning with one-bit synapses, which greatly reduces storage and bandwidth requirements while maintaining competitive accuracy; saliency-based attention techniques, which scale the computational effort of deep networks for energy efficiency; and adaptive online learning, which operates within limited memory and resource budgets to learn new information without catastrophically forgetting previously learned data.
ISSN: 2380-6923
DOI: 10.1109/VLSID49098.2020.00017
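
The summary above mentions on-chip learning via Spike-Timing Dependent Plasticity (STDP). The talk's own implementation is not reproduced here; as a rough illustration of a standard pair-based STDP rule with exponential timing windows, here is a minimal sketch (all parameter values and function names are assumptions for illustration, not taken from the talk):

```python
import numpy as np

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Weight change for one pre/post spike pair (times in ms).

    Pair-based STDP: potentiate when the presynaptic spike precedes the
    postsynaptic spike, depress otherwise, with exponentially decaying
    timing windows. Parameter values are illustrative assumptions.
    """
    dt = t_post - t_pre
    if dt >= 0:
        # Pre fired before post: causal pairing -> potentiation
        return a_plus * np.exp(-dt / tau_ms)
    # Post fired before pre: anti-causal pairing -> depression
    return -a_minus * np.exp(dt / tau_ms)

# Example: a pre-spike at 5 ms followed by a post-spike at 12 ms
# strengthens the synapse; the reverse ordering weakens it.
print(stdp_dw(t_pre=5.0, t_post=12.0))   # positive (potentiation)
print(stdp_dw(t_pre=12.0, t_post=5.0))   # negative (depression)
```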
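The summary also mentions stochastic learning with one-bit synapses, motivated by the thermally driven stochastic switching of spin devices. One-bit synapses cut storage and bandwidth because each weight occupies a single bit rather than a multi-bit word. A minimal sketch of one way such an update could look, using a probabilistic sign-flip rule (the rule, names, and constants below are assumptions for illustration, not the authors' algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_binary_update(w, grad, lr=0.1):
    """Probabilistic update for one-bit synapses w in {-1, +1}.

    A synapse flips sign only when the gradient pushes it toward the
    opposite sign, with flip probability proportional to |grad| --
    loosely mimicking thermally driven stochastic switching in spin
    devices. Illustrative sketch; not the algorithm from the talk.
    """
    p_flip = np.clip(lr * np.abs(grad), 0.0, 1.0)
    # Gradient descent moves w opposite to the gradient's sign, so a
    # flip is useful only where sign(grad) == sign(w).
    wants_flip = np.sign(grad) == np.sign(w)
    flips = wants_flip & (rng.random(w.shape) < p_flip)
    return np.where(flips, -w, w)

# Example: four binary synapses and a gradient vector.
w = np.array([1.0, -1.0, 1.0, -1.0])
g = np.array([0.8, -0.5, -0.3, 0.9])
print(stochastic_binary_update(w, g))
```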