Energy efficient boosting of GEMM accelerators for DNN via reuse
Date
2022-06-06
Source Title
ACM Transactions on Design Automation of Electronic Systems
Print ISSN
1084-4309
Electronic ISSN
1557-7309
Publisher
Association for Computing Machinery, Inc
Volume
27
Issue
5
Pages
1 - 26
Language
English
Type
Article
Abstract
Reuse-centric convolutional neural network (CNN) acceleration speeds up CNN inference by reusing computations for similar neuron vectors in the CNN's input layer or activation maps. This new paradigm of optimization is, however, largely limited by the overhead of neuron vector similarity detection, an important step in reuse-centric CNN acceleration. This article presents an in-depth exploration of architectural support for reuse-centric CNN acceleration. It addresses major limitations of the state-of-the-art design and proposes a novel hardware accelerator that improves neuron vector similarity detection and reduces the energy consumption of reuse-centric CNN inference. The accelerator supports a wide variety of neural network settings through a banked memory subsystem. Design exploration is performed through RTL simulation and synthesis on an FPGA platform. When integrated into Eyeriss, the accelerator can potentially provide performance improvements of up to 7.75×. Furthermore, it can reduce the energy used for similarity detection by up to 95.46%, and it can accelerate the convolutional layer by up to 3.63× compared to a software-based implementation running on the CPU.
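The accelerator itself is a hardware design, but the reuse idea the abstract describes can be illustrated in software. The following NumPy sketch is a conceptual illustration only, not the paper's method: it assumes random-hyperplane LSH as the similarity detector (a common choice in reuse-centric CNN work), and the function names lsh_bucket_ids and reuse_gemm are hypothetical. Rows of the (im2col-style) input matrix that hash to the same bucket are treated as similar neuron vectors, and the weight product is computed once per bucket and reused for all members.

```python
import numpy as np

def lsh_bucket_ids(X, n_bits=8, seed=0):
    """Assign each row of X to a bucket via random-hyperplane LSH.
    Rows falling in the same bucket are treated as 'similar' neuron vectors."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((X.shape[1], n_bits))
    bits = (X @ planes) > 0                          # sign pattern per row
    return bits.astype(np.int64) @ (1 << np.arange(n_bits))  # pack bits into an integer id

def reuse_gemm(X, W, n_bits=8):
    """Approximate X @ W by computing one product per bucket centroid
    and broadcasting the result to every row in that bucket."""
    ids = lsh_bucket_ids(X, n_bits)
    Y = np.empty((X.shape[0], W.shape[1]))
    for b in np.unique(ids):
        members = np.where(ids == b)[0]
        centroid = X[members].mean(axis=0)           # representative neuron vector
        Y[members] = centroid @ W                    # compute once, reuse for all members
    return Y

# Toy usage: rows of X deliberately contain near-duplicates,
# so the reuse-based result stays close to the exact GEMM.
X = np.repeat(np.random.randn(16, 64), 4, axis=0) + 0.01 * np.random.randn(64, 64)
W = np.random.randn(64, 32)
print(np.max(np.abs(reuse_gemm(X, W) - X @ W)))      # small approximation error
```

In this sketch the similarity detection (hashing every row) is the dominant per-inference overhead; the paper's contribution is hardware support that makes exactly this step cheap enough for the reuse savings to pay off.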