
      ExPAN(N)D: Exploring Posits for Efficient Artificial Neural Network Design in FPGA-based Systems

      Preprint


          Abstract

          The recent advances in machine learning in general, and Artificial Neural Networks (ANNs) in particular, have made smart embedded systems an attractive option for a growing number of application areas. However, the high computational complexity, memory footprint, and energy requirements of machine learning models hinder their deployment on resource-constrained embedded systems. Most state-of-the-art works have addressed this problem by proposing low bit-width data representation schemes, optimized implementations of arithmetic operators, and complexity reduction techniques such as network pruning. To further increase the gains offered by these individual techniques, their unique features need to be cross-examined and combined. This paper presents ExPAN(N)D, a framework to analyze and combine the efficacy of the Posit number representation scheme and the efficiency of fixed-point arithmetic implementations for ANNs. The Posit scheme offers a wider dynamic range and higher precision for many applications than the IEEE 754 single-precision floating-point format. However, because the widths of the Posit scheme's fields are dynamic, the corresponding arithmetic circuits have a longer critical path delay and higher resource requirements than single-precision arithmetic units. To this end, we propose a novel Posit-to-fixed-point converter that enables high-performance and energy-efficient hardware implementations of ANNs with a minimal drop in output accuracy. We also propose a modified Posit-based representation for storing the trained parameters of a network. Compared to an 8-bit fixed-point inference accelerator, our proposed implementation reduces the storage requirements of the parameters by approximately 46% and the energy consumption of the MAC units by approximately 18%.
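
          To make the abstract's terms concrete, the sketch below decodes an n-bit posit (sign, run-length-encoded regime, exponent, and fraction fields) into a real value and then quantizes that value to a signed fixed-point integer. It is a minimal software model in Python, assuming an 8-bit posit with es = 1 and an illustrative Q4.4 fixed-point target; the function names and parameters are ours, not the paper's, and the paper's contribution is a hardware converter rather than this floating-point detour.

              def decode_posit(word: int, n: int = 8, es: int = 1) -> float:
                  # Decode an n-bit posit with es exponent bits into a float.
                  # Minimal software model; not the paper's hardware circuit.
                  mask = (1 << n) - 1
                  word &= mask
                  if word == 0:
                      return 0.0
                  if word == 1 << (n - 1):      # 100...0 encodes NaR ("not a real")
                      return float("nan")
                  negative = bool(word >> (n - 1))
                  if negative:                  # negative posits: decode 2's complement
                      word = (-word) & mask
                  payload = format(word, "0{}b".format(n))[1:]    # n-1 bits after sign
                  lead = payload[0]
                  run = len(payload) - len(payload.lstrip(lead))  # regime run length
                  regime = (run - 1) if lead == "1" else -run
                  rest = payload[run + 1:]      # skip the regime terminator bit
                  exp_bits = rest[:es]          # truncated exponent bits read as zeros
                  exp = int(exp_bits, 2) << (es - len(exp_bits)) if exp_bits else 0
                  frac_bits = rest[es:]
                  frac = int(frac_bits, 2) / (1 << len(frac_bits)) if frac_bits else 0.0
                  useed = 1 << (1 << es)        # useed = 2**(2**es)
                  value = (useed ** regime) * (2 ** exp) * (1.0 + frac)
                  return -value if negative else value

              def to_fixed_point(x: float, int_bits: int = 4, frac_bits: int = 4) -> int:
                  # Quantize to a signed Q(int_bits).(frac_bits) integer, saturating.
                  scaled = round(x * (1 << frac_bits))
                  lo = -(1 << (int_bits + frac_bits - 1))
                  hi = (1 << (int_bits + frac_bits - 1)) - 1
                  return max(lo, min(hi, scaled))

              # Posit8 (es = 1) pattern 0x4C decodes to 1.75; as Q4.4 it becomes 28 (28/16).
              print(decode_posit(0x4C), to_fixed_point(decode_posit(0x4C)))

          Storing parameters in a posit-like format while computing in fixed point, as the abstract describes, trades the posit's tapered precision at the memory interface for the cheaper arithmetic of a fixed-point MAC.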


          Author and article information

          Published: 24 October 2020 (arXiv preprint)
          arXiv: 2010.12869
          License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/ (open access)

          Subject classes: cs.AR, cs.AI, cs.ET, cs.PF
          Keywords: Performance, Systems & Control, Artificial intelligence, General computer science, Hardware architecture
