
Reformulating the direct convolution for high-performance deep learning inference on ARM processors


Bibliographic Details
Published in: Journal of Systems Architecture, 2023-02, Vol. 135, p. 102806, Article 102806
Main Authors: Barrachina, Sergio, Castelló, Adrián, Dolz, Manuel F., Low, Tze Meng, Martínez, Héctor, Quintana-Ortí, Enrique S., Sridhar, Upasana, Tomás, Andrés E.
Format: Article
Language: English
Description
Summary: We present two high-performance implementations of the convolution operator via the direct algorithm that outperform the so-called lowering approach based on the im2col transform plus the gemm kernel on an ARMv8-based processor. One of our methods presents the additional advantage of zero-memory overhead, while the other employs an additional yet rather moderate workspace, substantially smaller than that required by the im2col+gemm solution. In contrast with a previous implementation of a similar zero-memory overhead direct convolution, this work exhibits the key advantage of preserving the conventional NHWC data layout for the input/output activations of the convolution layers.
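For context, the "lowering" baseline the abstract refers to turns convolution into a single large matrix multiplication by first flattening input patches (im2col) and then calling gemm. The sketch below is an illustrative NumPy version, not the authors' ARM implementation; it assumes unit stride, no padding, and a single image in HWC layout, and checks it against a naive direct convolution. Note how im2col materializes a patch matrix of size (HO*WO, KH*KW*C), the memory overhead the paper's direct methods avoid:

```python
import numpy as np

def conv2d_direct(x, w):
    """Naive direct convolution. x: (H, W, C) single image (channels-last,
    as in NHWC), w: (KH, KW, C, CO). Unit stride, no padding."""
    H, W, C = x.shape
    KH, KW, _, CO = w.shape
    HO, WO = H - KH + 1, W - KW + 1
    y = np.zeros((HO, WO, CO))
    for i in range(HO):
        for j in range(WO):
            # Contract the (KH, KW, C) patch against the filters.
            y[i, j] = np.tensordot(x[i:i+KH, j:j+KW, :], w, axes=3)
    return y

def conv2d_im2col(x, w):
    """Lowering approach: im2col transform followed by one gemm call."""
    H, W, C = x.shape
    KH, KW, _, CO = w.shape
    HO, WO = H - KH + 1, W - KW + 1
    # im2col workspace: each row is one flattened input patch.
    cols = np.empty((HO * WO, KH * KW * C))
    for i in range(HO):
        for j in range(WO):
            cols[i * WO + j] = x[i:i+KH, j:j+KW, :].ravel()
    # gemm: (HO*WO, KH*KW*C) x (KH*KW*C, CO) -> (HO*WO, CO)
    return (cols @ w.reshape(KH * KW * C, CO)).reshape(HO, WO, CO)
```

Both routines compute the same output; the difference is that im2col duplicates overlapping patch data into the `cols` workspace, trading memory for a single highly optimized gemm call.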
ISSN: 1383-7621
1873-6165
DOI: 10.1016/j.sysarc.2022.102806