Microscaling Data Formats for Deep Learning

Bibliographic Details
Published in: arXiv.org, 2023-10
Main Authors: Darvish Rouhani, Bita; Zhao, Ritchie; More, Ankit; Hall, Mathew; Khodamoradi, Alireza; Deng, Summer; Choudhary, Dhruv; Cornea, Marius; Dellinger, Eric; Denolf, Kristof; Stosic, Dusan; Elango, Venmugil; Golub, Maximilian; Heinecke, Alexander; James-Roxby, Phil; Jani, Dharmesh; Kolhe, Gaurav; Langhammer, Martin; Li, Ada; Melnick, Levi; Mesmakhosroshahi, Maral; Rodriguez, Andres; Schulte, Michael; Shafipour, Rasoul; Shao, Lei; Siu, Michael; Dubey, Pradeep; Micikevicius, Paulius; Naumov, Maxim; Verrilli, Colin; Wittig, Ralph; Burger, Doug; Chung, Eric
Format: Article
Language: English
Description
Summary: Narrow bit-width data formats are key to reducing the computational and storage costs of modern deep learning applications. This paper evaluates Microscaling (MX) data formats that combine a per-block scaling factor with narrow floating-point and integer types for individual elements. MX formats balance the competing needs of hardware efficiency, model accuracy, and user friction. Empirical results on over two dozen benchmarks demonstrate the practicality of MX data formats as a drop-in replacement for baseline FP32 for AI inference and training with low user friction. We also show the first instance of training generative language models at sub-8-bit weights, activations, and gradients with minimal accuracy loss and no modifications to the training recipe.
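To make the per-block scaling idea in the summary concrete, the following is a minimal illustrative sketch, not the paper's actual format definition. It assumes a block size of 32 elements, a shared power-of-two scale per block, and signed 8-bit integer elements; these specific choices are assumptions for illustration only.

```python
# Minimal sketch of per-block scaled quantization in the spirit of MX formats.
# Assumed parameters (not taken from this record): block size 32, one shared
# power-of-two scale per block, int8 elements.
import numpy as np

def quantize_blockwise(x, block_size=32, elem_bits=8):
    """Quantize a 1-D float array into (per-block scales, narrow int elements)."""
    pad = (-len(x)) % block_size
    blocks = np.pad(x, (0, pad)).reshape(-1, block_size)

    qmax = 2 ** (elem_bits - 1) - 1                    # e.g. 127 for int8
    amax = np.abs(blocks).max(axis=1, keepdims=True)   # per-block max magnitude
    amax = np.where(amax == 0, 1.0, amax)              # avoid log2(0) for all-zero blocks
    # Shared power-of-two scale so the largest element in each block fits the narrow type.
    scales = 2.0 ** np.ceil(np.log2(amax / qmax))
    elems = np.clip(np.round(blocks / scales), -qmax, qmax).astype(np.int8)
    return scales, elems

def dequantize_blockwise(scales, elems, orig_len):
    """Reconstruct an approximate float array from per-block scales and elements."""
    return (elems.astype(np.float32) * scales).reshape(-1)[:orig_len]

if __name__ == "__main__":
    x = np.random.randn(100).astype(np.float32)
    scales, elems = quantize_blockwise(x)
    x_hat = dequantize_blockwise(scales, elems, len(x))
    print("max abs reconstruction error:", np.abs(x - x_hat).max())
```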
ISSN: 2331-8422