
LPU: A Latency-Optimized and Highly Scalable Processor for Large Language Model Inference

Bibliographic Details
Published in: arXiv.org 2024-08
Main Authors: Moon, Seungjae; Kim, Jung-Hoon; Kim, Junsoo; Hong, Seongmin; Cha, Junseo; Kim, Minsu; Lim, Sukbin; Choi, Gyubin; Seo, Dongjin; Kim, Jongho; Lee, Hunjong; Park, Hyunjun; Ko, Ryeowook; Choi, Soongyu; Park, Jongse; Lee, Jinwon; Kim, Joo-Young
Format: Article
Language: English
Description
Summary: The explosive arrival of OpenAI's ChatGPT has fueled the global adoption of large language models (LLMs), which consist of billions of pretrained parameters that embody aspects of syntax and semantics. HyperAccel introduces the Latency Processing Unit (LPU), a latency-optimized and highly scalable processor architecture for accelerating LLM inference. The LPU balances memory bandwidth and compute logic with a streamlined dataflow to maximize performance and efficiency, and is equipped with an Expandable Synchronization Link (ESL) that hides data-synchronization latency between multiple LPUs. HyperDex complements the LPU as an intuitive software framework for running LLM applications. The LPU achieves 1.25 ms/token and 20.9 ms/token for the 1.3B and 66B models, respectively, which is 2.09x and 1.37x faster than the GPU. Synthesized in a Samsung 4 nm process, the LPU has a total area of 0.824 mm² and consumes 284.31 mW. LPU-based servers achieve 1.33x and 1.32x higher energy efficiency than NVIDIA H100 and L4 servers, respectively.
ISSN:2331-8422
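
As a quick sanity check on the latency figures in the summary, the short Python sketch below converts the quoted per-token latencies into throughput and derives the implied GPU baseline latencies from the stated speedups. Only numbers taken from the summary appear as inputs; the derived GPU values are back-of-envelope arithmetic, not figures reported in the paper.

    # Inputs taken from the summary above.
    LPU_MS_PER_TOKEN = {"1.3B": 1.25, "66B": 20.9}   # LPU per-token latency (ms)
    SPEEDUP_VS_GPU   = {"1.3B": 2.09, "66B": 1.37}   # quoted LPU speedup over the GPU

    for model, lpu_ms in LPU_MS_PER_TOKEN.items():
        tokens_per_sec = 1000.0 / lpu_ms                 # ms/token -> tokens/s
        implied_gpu_ms = lpu_ms * SPEEDUP_VS_GPU[model]  # implied GPU baseline (derived)
        print(f"{model}: LPU {lpu_ms} ms/token (~{tokens_per_sec:.0f} tok/s); "
              f"implied GPU baseline ~{implied_gpu_ms:.2f} ms/token")

Running this gives roughly 800 tok/s (1.3B) and 48 tok/s (66B) for the LPU, with implied GPU baselines of about 2.61 ms/token and 28.63 ms/token.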