
AI Computing in Light of 2.5D Interconnect Roadmap: Big-Little Chiplets for In-memory Acceleration

Bibliographic Details
Main Authors: Wang, Zhenyu; Nair, Gopikrishnan Raveendran; Krishnan, Gokul; Mandal, Sumit K.; Cherian, Ninoo; Seo, Jae-Sun; Chakrabarti, Chaitali; Ogras, Umit Y.; Cao, Yu
Format: Conference Proceeding
Language: English
Description
Summary: The demands on bandwidth, latency, and energy efficiency are ever increasing in AI computing. Chiplets, connected by 2.5D interconnect, promise a scalable platform to meet such needs. We present a pathfinding study to bridge AI algorithms with the chiplet architecture, covering in-memory computing (IMC), network-on-package (NoP), and heterogeneous architecture. This study is enabled by our newly developed benchmarking tool, SIAM. We perform simulations on representative algorithms (DNNs, transformers, and GCNs). Particular contributions include: (1) A roadmap of 2.5D interconnect for technological exploration; (2) A generic mapping and optimization methodology that reveals the various bandwidth needs in AI computing, which the evolution of 2.5D interconnect may or may not be able to support; (3) A big-little chiplet architecture that matches the non-uniform nature of AI algorithms and achieves a >100× improvement in energy-delay product (EDP). Overall, heterogeneous big-little chiplets with 2.5D interconnect advance AI computing to the next level of data-movement and computing efficiency.
ISSN: 2156-017X
DOI: 10.1109/IEDM45625.2022.10019406