
Open Access

ExPAN(N)D: Exploring Posits for Efficient Artificial Neural Network Design in FPGA-Based Systems

Lookup NU author(s): Dr Farhad Merchant, Akash Kumar


Licence

This work is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0).


Abstract

The high computational complexity, memory footprint, and energy requirements of machine learning models, such as Artificial Neural Networks (ANNs), hinder their deployment on resource-constrained embedded systems. Most state-of-the-art works have addressed this problem by proposing various low bit-width data representation schemes and optimized implementations of arithmetic operators. To further elevate the implementation gains offered by these individual techniques, there is a need to cross-examine and combine their unique features. This paper presents ExPAN(N)D, a framework to analyze and combine the efficacy of the Posit number representation scheme with the efficiency of fixed-point arithmetic implementations for ANNs. The Posit scheme offers a better dynamic range and higher precision for various applications than the IEEE 754 single-precision floating-point format. However, due to the dynamic nature of the various fields of the Posit scheme, the corresponding arithmetic circuits have higher critical path delay and resource requirements than single-precision-based arithmetic units. To this end, we propose a novel Posit to fixed-point converter that enables high-performance and energy-efficient hardware implementations for ANNs with a minimal drop in output accuracy. We also propose a modified Posit-based representation to store the trained parameters of a network. With the proposed Posit to fixed-point converter-based designs, we provide multiple design points with varying accuracy-performance trade-offs for an ANN. For instance, compared to the lowest power dissipating Posit-only accelerator design, one of our proposed designs achieves reductions of 80% in power dissipation and 48% in LUT utilization, with a marginal increase in classification error for ImageNet classification using VGG-16.
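The record does not reproduce the converter's internals, but the core idea can be illustrated. The sketch below is a minimal Python decode of an n-bit posit word (sign, regime, es exponent bits, fraction) into a two's-complement fixed-point integer with a chosen number of fractional bits. The function name, the Q-format choice, and the truncating shift are assumptions for illustration; this is not the authors' hardware design.

```python
def posit_to_fixed(p: int, n: int = 8, es: int = 0, frac_bits: int = 16) -> int:
    """Decode an n-bit posit word `p` with `es` exponent bits into a signed
    fixed-point integer carrying `frac_bits` fractional bits (truncating).
    Illustrative sketch only, not the paper's converter."""
    mask = (1 << n) - 1
    p &= mask
    if p == 0:                        # posit zero
        return 0
    if p == 1 << (n - 1):             # NaR (Not a Real)
        raise ValueError("NaR has no fixed-point equivalent")
    sign = (p >> (n - 1)) & 1
    if sign:                          # negative posits negate via two's complement
        p = (-p) & mask
    s = format(p, "0{}b".format(n))[1:]    # field bits after the sign bit
    run_bit = s[0]
    run = len(s) - len(s.lstrip(run_bit))  # regime = run of identical bits
    k = run - 1 if run_bit == "1" else -run
    rest = s[run + 1:]                     # skip the regime terminator bit
    exp = int(rest[:es].ljust(es, "0"), 2) if es else 0
    frac = rest[es:]
    scale = k * (1 << es) + exp            # value = (1.frac) * 2**scale
    mant = (1 << len(frac)) + (int(frac, 2) if frac else 0)
    shift = frac_bits + scale - len(frac)
    fx = mant << shift if shift >= 0 else mant >> -shift
    return -fx if sign else fx


# posit<8,0> examples: 0b01000000 encodes 1.0, 0b01100000 encodes 2.0
assert posit_to_fixed(0b01000000) == 1 << 16      # 1.0 in Q15.16
assert posit_to_fixed(0b01100000) == 2 << 16      # 2.0
assert posit_to_fixed(0b11000000) == -(1 << 16)   # -1.0
```

In hardware, this decode amounts roughly to a leading-bit count for the regime plus a barrel shift, which suggests why a one-time conversion followed by cheap fixed-point multiply-accumulates can beat performing full posit arithmetic in every operator.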


Publication metadata

Author(s): Nambi S, Ullah S, Sahoo S, Lohana A, Merchant F, Kumar A

Publication type: Article

Publication status: Published

Journal: IEEE Access

Year: 2021

Volume: 9

Pages: 103691-103708

Online publication date: 20/07/2021

Acceptance date: 08/07/2021

Date deposited: 06/04/2023

ISSN (electronic): 2169-3536

Publisher: IEEE

URL: https://doi.org/10.1109/ACCESS.2021.3098730

DOI: 10.1109/ACCESS.2021.3098730




Funding

Funder reference: 380524764
