
      Optimizing Memory Efficiency for Deep Convolutional Neural Networks on GPUs

      Preprint


          Abstract

          Leveraging large data sets, deep Convolutional Neural Networks (CNNs) achieve state-of-the-art recognition accuracy. Due to their substantial compute and memory operations, however, they require significant execution time. The massive parallel computing capability of GPUs makes them one of the ideal platforms to accelerate CNNs, and a number of GPU-based CNN libraries have been developed. While existing works mainly focus on the computational efficiency of CNNs, the memory efficiency of CNNs has been largely overlooked. Yet CNNs have intricate data structures, and their memory behavior can have a significant impact on performance. In this work, we study the memory efficiency of various CNN layers and reveal the performance implications of both data layouts and memory access patterns. Experiments show the universal effect of our proposed optimizations on both single layers and whole networks, with speedups of up to 27.9x for a single layer and up to 5.6x for whole networks.
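          As a rough illustration of the data-layout point in the abstract, the following CUDA sketch (not the authors' code; the kernel name, thread mapping, and the NCHW/CHWN flattenings are illustrative assumptions) shows why the chosen layout decides whether a warp's global-memory accesses coalesce.

          // Minimal sketch: the same per-channel bias add under two assumed
          // flattenings of an N x C x H x W tensor.
          //   NCHW: idx = ((n*C + c)*H + h)*W + w   -> w is fastest-varying
          //   CHWN: idx = ((c*H + h)*W + w)*N + n   -> n is fastest-varying
          #include <cuda_runtime.h>

          __global__ void bias_add(float* x, const float* bias,
                                   int N, int C, int H, int W, bool nchw) {
              // Map each thread to logical coordinates with w varying fastest
              // across consecutive thread ids (one thread per element).
              long long t = blockIdx.x * (long long)blockDim.x + threadIdx.x;
              long long total = (long long)N * C * H * W;
              if (t >= total) return;
              int w = (int)(t % W);
              int h = (int)((t / W) % H);
              int c = (int)((t / ((long long)W * H)) % C);
              int n = (int)(t / ((long long)W * H * C));

              long long idx;
              if (nchw) {
                  // Consecutive threads differ only in w -> stride-1 indices,
                  // so a warp's loads/stores coalesce into wide transactions.
                  idx = (((long long)n * C + c) * H + h) * W + w;
              } else {
                  // Same logical work under CHWN: consecutive threads now hit
                  // indices N apart, scattering the warp across memory.
                  idx = (((long long)c * H + h) * W + w) * N + n;
              }
              x[idx] += bias[c];
          }

          Launched with one thread per element, the NCHW branch keeps each warp on contiguous addresses, while the CHWN branch spreads the same warp over N-element strides; matching the layout's fastest-varying dimension to the thread ordering is what keeps memory transactions wide.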


          Author and article information

          Journal
          2016-10-12
          Article
          1610.03618
          7c581471-1766-4f03-85ce-3e27be3efc99

          http://arxiv.org/licenses/nonexclusive-distrib/1.0/

          History
          Custom metadata
          Published as a conference paper at the International Conference on High Performance Computing, Networking, Storage, and Analysis (SC'16), 2016
          cs.DC cs.LG cs.NE cs.PF

          Performance, Systems & Control, Neural & Evolutionary computing, Artificial intelligence, Networking & Internet architecture
