
Kim Bjerge

A scalable and efficient convolutional neural network accelerator using HLS for a system-on-chip design

Publication: Contribution to journal › Journal article › Research › peer review

  • Kim Bjerge
  • Jonathan Horsted Schougaard, Aarhus University, Denmark
  • Daniel Ejnar Larsen, Aarhus University, Denmark
This paper presents a configurable convolutional neural network accelerator (CNNA) for a system-on-chip (SoC). The goal was to accelerate inference of different deep learning networks on an embedded SoC platform. The presented CNNA has a scalable architecture that uses high-level synthesis (HLS) and SystemC for the hardware accelerator. It can accelerate any convolutional neural network (CNN) exported from Keras in Python and supports a combination of convolutional, max-pooling, and fully connected layers. A training method with fixed-point quantised weights is proposed and presented in the paper. The CNNA is template-based, enabling it to scale to different targets of the Xilinx Zynq platform. This approach enables design space exploration, which makes it possible to explore several configurations of the CNNA during C and RTL simulation and to fit it to the desired platform and model. The CNN VGG16 was used to test the solution on a Xilinx Ultra96 board using Python productivity for Zynq (PYNQ). Training with an auto-scaled fixed-point Q2.14 format achieved a high level of accuracy compared to a similar floating-point model. The accelerator was able to perform inference in 2.0 s with an average power consumption of 2.63 W, which corresponds to a power efficiency of 6.0 GOPS/W.
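To make the Q2.14 format mentioned in the abstract concrete, the following is a minimal sketch of how weights could be quantised to and from that representation. It assumes the standard Q-format convention (16-bit signed value, 2 integer bits including sign, 14 fractional bits, saturating on overflow); the function names are illustrative and not taken from the paper's code.

```python
# Sketch of Q2.14 fixed-point quantisation (illustrative, not the paper's code).
# Q2.14: 16-bit signed integer, scale factor 2^14, representable range
# [-2.0, 2.0 - 2^-14]. Values outside the range saturate.

FRAC_BITS = 14
Q_MIN = -(1 << 15)        # -32768
Q_MAX = (1 << 15) - 1     # 32767

def quantize_q2_14(x: float) -> int:
    """Round a float to the nearest Q2.14 integer, saturating to 16 bits."""
    q = round(x * (1 << FRAC_BITS))
    return max(Q_MIN, min(Q_MAX, q))

def dequantize_q2_14(q: int) -> float:
    """Convert a Q2.14 integer back to a float."""
    return q / (1 << FRAC_BITS)

# A weight of 0.5 maps to the integer 0.5 * 2^14 = 8192,
# and a value beyond the range, e.g. 3.0, saturates to Q_MAX.
w_q = quantize_q2_14(0.5)
```

With this convention, the reported "auto-scaled" training would choose a per-layer scale so that the weight distribution fits the narrow [-2, 2) range of Q2.14 before quantising.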
Original language: English
Article number: 104363
Journal: Microprocessors and Microsystems
Volume: 87
Number of pages: 13
ISSN: 0141-9331
DOI
Status: Published - November 2021


ID: 225592949