      Memory- and Communication-Aware Model Compression for Distributed Deep Learning Inference on IoT

      Preprint


          Abstract

          Model compression has emerged as an important area of research for deploying deep learning models on Internet-of-Things (IoT) devices. However, for extremely memory-constrained scenarios, even the compressed models cannot fit within the memory of a single device and, as a result, must be distributed across multiple devices. This leads to a distributed inference paradigm in which memory and communication costs represent a major bottleneck. Yet, existing model compression techniques are not communication-aware. Therefore, we propose Network of Neural Networks (NoNN), a new distributed IoT learning paradigm that compresses a large pretrained 'teacher' deep network into several disjoint and highly-compressed 'student' modules, without loss of accuracy. Moreover, we propose a network science-based knowledge partitioning algorithm for the teacher model, and then train individual students on the resulting disjoint partitions. Extensive experimentation on five image classification datasets, for user-defined memory/performance budgets, shows that NoNN achieves higher accuracy than several baselines and accuracy similar to the teacher model, while using minimal communication among students. Finally, as a case study, we deploy the proposed model for the CIFAR-10 dataset on edge devices and demonstrate significant improvements in memory footprint (up to 24x), performance (up to 12x), and energy per node (up to 14x) compared to the large teacher model. We further show that for distributed inference on multiple edge devices, our proposed NoNN model yields up to a 33x reduction in total latency with respect to a state-of-the-art model compression baseline.
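
          To make the approach concrete, the minimal Python sketch below illustrates the two core steps the abstract describes: partitioning the teacher's knowledge via community detection over a filter co-activation graph, and distilling each disjoint partition into its own student. All names here (build_filter_graph, partition_filters, student_kd_loss) are hypothetical, and off-the-shelf greedy modularity maximization from networkx merely stands in for the paper's network science-based partitioning; the authors' actual algorithm and loss may differ.

          import torch
          import torch.nn.functional as F
          import networkx as nx
          from networkx.algorithms.community import greedy_modularity_communities

          def build_filter_graph(teacher_acts):
              # teacher_acts: (num_samples, num_filters), the average activation
              # of each final-layer teacher filter on each sample. Filters that
              # frequently fire together get a heavier edge between them.
              active = (teacher_acts > teacher_acts.mean(dim=0)).float()
              cooccur = active.t() @ active  # (num_filters, num_filters) counts
              g = nx.Graph()
              n = cooccur.shape[0]
              g.add_nodes_from(range(n))
              for i in range(n):
                  for j in range(i + 1, n):
                      w = cooccur[i, j].item()
                      if w > 0:
                          g.add_edge(i, j, weight=w)
              return g

          def partition_filters(g, num_students):
              # Community detection as a stand-in for the paper's knowledge
              # partitioning; communities are merged round-robin so that each
              # student receives one disjoint group of filters.
              comms = list(greedy_modularity_communities(g, weight="weight"))
              groups = [set() for _ in range(num_students)]
              for k, comm in enumerate(comms):
                  groups[k % num_students] |= set(comm)
              return groups

          def student_kd_loss(student_feat, teacher_feat, part):
              # Each student mimics only its own partition of teacher filters,
              # so no activations need to be exchanged between students.
              # student_feat: (batch, len(part)); teacher_feat: (batch, num_filters).
              idx = torch.tensor(sorted(part), device=teacher_feat.device)
              return F.mse_loss(student_feat, teacher_feat[:, idx])

          # Example: split 64 teacher filters across 4 students.
          activations = torch.rand(1000, 64)
          parts = partition_filters(build_filter_graph(activations), num_students=4)

          Because each student regresses onto a disjoint slice of the teacher's final-layer activations, the students can run independently on separate devices and only combine their outputs at the end, which is what keeps inter-device communication minimal.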


                Author and article information

                Journal: arXiv preprint
                Date: 26 July 2019
                Article ID: arXiv:1907.11804
                License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
                Custom metadata: This preprint is for personal use only. The official article will appear as part of the ESWEEK-TECS special issue and will be presented at the International Conference on Hardware/Software Codesign and System Synthesis (CODES+ISSS), 2019.
                Categories: stat.ML, cs.CV, cs.DC, cs.LG
                Keywords: Computer vision & pattern recognition, Machine learning, Artificial intelligence, Networking & Internet architecture
