      Is Open Access

      SC-DCNN: Highly-Scalable Deep Convolutional Neural Network using Stochastic Computing

      Preprint


          Abstract

          With the recent advances in the Internet of Things (IoT), implementing deep convolutional neural networks (DCNNs) on embedded/portable systems has become highly attractive. At present, executing software-based DCNNs requires high-performance server clusters in practice, restricting their widespread deployment on mobile devices. To overcome this issue, considerable research effort has been devoted to developing highly parallel, DCNN-specific hardware using GPGPUs, FPGAs, and ASICs. Stochastic Computing (SC), which represents a number within [-1, 1] as a bit-stream whose value is given by the proportion of ones, has high potential for implementing DCNNs with high scalability and an ultra-low hardware footprint. Since multiplications and additions in SC can be computed with AND gates and multiplexers, respectively, significant reductions in power/energy and hardware footprint can be achieved compared with conventional binary arithmetic implementations. These savings in power (energy) and hardware resources open an immense design space for enhancing the scalability and robustness of hardware DCNNs. This paper presents the first comprehensive design and optimization framework for SC-based DCNNs (SC-DCNNs). We first present optimal designs of the function blocks that perform the basic operations, i.e., inner product, pooling, and activation function. We then propose optimal designs for four types of combinations of basic function blocks, named feature extraction blocks, which are in charge of extracting features from input feature maps. In addition, weight storage methods are investigated to reduce the area and power/energy consumption of storing weights. Finally, the whole SC-DCNN implementation is optimized, with feature extraction blocks carefully selected, to minimize area and power/energy consumption while maintaining a high network accuracy level.
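          The SC arithmetic described in the abstract can be sketched in a few lines of Python. This is a simplified software simulation, not the paper's hardware design: it uses unipolar coding (values in [0, 1], where the value equals the probability of a 1) rather than the bipolar [-1, 1] coding the paper mentions, and the stream length and helper names are illustrative assumptions. It shows why a bitwise AND multiplies two independent streams and why a 2-to-1 multiplexer performs scaled addition.

```python
import random

random.seed(0)
N = 100_000  # bit-stream length (illustrative); accuracy grows with N

def to_stream(p, n=N):
    """Encode a value p in [0, 1] as a unipolar stochastic bit-stream:
    each bit is independently 1 with probability p."""
    return [1 if random.random() < p else 0 for _ in range(n)]

def decode(stream):
    """Decode a unipolar stream: its value is the fraction of ones."""
    return sum(stream) / len(stream)

a, b = 0.6, 0.5
sa, sb = to_stream(a), to_stream(b)

# Multiplication is a bitwise AND of two independent streams, since
# P(x AND y) = P(x) * P(y) for independent bits.
product = decode([x & y for x, y in zip(sa, sb)])

# Scaled addition is a 2-to-1 multiplexer: a select stream with
# probability 0.5 picks between the inputs, yielding (a + b) / 2.
sel = to_stream(0.5)
scaled_sum = decode([x if s else y for s, x, y in zip(sel, sa, sb)])

print(product)     # close to a * b = 0.30
print(scaled_sum)  # close to (a + b) / 2 = 0.55
```

          Note the multiplexer computes a *scaled* sum, (a + b) / 2, because an unscaled sum could leave [0, 1]; this scaling is a standard property of SC adders, not specific to this paper.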

                Author and article information

                Journal
                2016-11-17
                Article
                arXiv:1611.05939

                http://arxiv.org/licenses/nonexclusive-distrib/1.0/

                History
                Custom metadata
                This paper has been accepted by the 22nd ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2017.
                cs.CV

                Computer vision & Pattern recognition
