
      Reducing the Training Time of Neural Networks by Partitioning

      Preprint


          Abstract

          This paper presents a new method for pre-training neural networks that can decrease the total training time while maintaining the final performance, which motivates its use on deep neural networks. By partitioning the training task into multiple training subtasks with sub-models, which can be performed independently and in parallel, it is shown that the size of the sub-models decreases almost quadratically with the number of subtasks created, quickly scaling down the sub-models used for pre-training. The sub-models are then merged to provide a pre-trained initial set of weights for the original model. The proposed method is independent of the other aspects of training, such as the architecture of the neural network, the training method, and the objective, making it compatible with a wide range of existing approaches. The speedup without loss of performance is validated experimentally on the MNIST and CIFAR-10 data sets, which also shows that even performing the subtasks sequentially can decrease the training time. Moreover, we show that larger models may present higher speedups, and we conjecture about the benefits of the method in distributed learning systems.
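          To make the partition-and-merge idea concrete, the following is a minimal NumPy sketch under assumptions that go beyond the abstract: the sub-models are formed here by splitting each hidden layer's units into K equal groups (so each hidden-to-hidden weight matrix shrinks by roughly 1/K^2, matching the near-quadratic reduction described above), sub-model training is left as a stub, and the merge places sub-model weights block-diagonally while averaging the output blocks. The paper's exact partitioning and merge rules are not reproduced here; the sketch only illustrates the shape bookkeeping.

import numpy as np

rng = np.random.default_rng(0)

LAYERS = [784, 256, 256, 10]  # full model: input, two hidden layers, output
K = 4                         # number of independent subtasks / sub-models

def sub_layer_sizes(layers, k):
    # Hidden widths shrink by ~1/k while input and output stay full-size,
    # so each hidden-to-hidden weight count shrinks by ~1/k^2.
    return [layers[0]] + [w // k for w in layers[1:-1]] + [layers[-1]]

def init_mlp(layers):
    # Random initialization; biases are omitted for brevity.
    return [rng.standard_normal((m, n)) / np.sqrt(m)
            for m, n in zip(layers[:-1], layers[1:])]

def train_submodel(weights):
    # Stub for ordinary training of one subtask (any optimizer, objective,
    # or architecture); the K calls are independent and can run in parallel.
    return weights

def merge(submodels, layers, k):
    # Assemble full-size weight matrices from the k trained sub-models.
    # Hidden-to-hidden blocks land on the block diagonal; everything else
    # starts at zero. Averaging the output blocks (the 1/k factor) keeps the
    # merged outputs on the same scale -- an assumption of this sketch.
    n_layers = len(layers) - 1
    merged = []
    for li in range(n_layers):
        W = np.zeros((layers[li], layers[li + 1]))
        for s, sub in enumerate(submodels):
            Ws = sub[li]
            rows = slice(None) if li == 0 else slice(s * Ws.shape[0], (s + 1) * Ws.shape[0])
            cols = slice(None) if li == n_layers - 1 else slice(s * Ws.shape[1], (s + 1) * Ws.shape[1])
            W[rows, cols] = Ws / k if li == n_layers - 1 else Ws
        merged.append(W)
    return merged

sub_sizes = sub_layer_sizes(LAYERS, K)              # [784, 64, 64, 10]
submodels = [train_submodel(init_mlp(sub_sizes)) for _ in range(K)]
init_weights = merge(submodels, LAYERS, K)          # pre-trained init for the full model
print([W.shape for W in init_weights])              # [(784, 256), (256, 256), (256, 10)]

          Fine-tuning the full model from init_weights would then proceed with whatever training procedure is in use, which is what makes the scheme independent of architecture, training method, and objective.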


          Author and article information

          History: 2015-11-09, 2016-01-03
          arXiv ID: 1511.02954
          Record ID: aa6c3f2a-311c-4947-a89e-647c62d437a5
          License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/

          Custom metadata
          Figure 2b has lower quality due to file size constraints.
          Categories: cs.NE, cs.LG
          Subjects: Neural & Evolutionary Computing, Artificial Intelligence
