      On the Number of Linear Regions of Deep Neural Networks

      Preprint
Guido Montúfar, Razvan Pascanu, Kyunghyun Cho, Yoshua Bengio


          Abstract

          We study the complexity of functions computable by deep feedforward neural networks with piecewise linear activations in terms of the symmetries and the number of linear regions that they have. Deep networks are able to sequentially map portions of each layer's input-space to the same output. In this way, deep models compute functions that react equally to complicated patterns of different inputs. The compositional structure of these functions enables them to re-use pieces of computation exponentially often in terms of the network's depth. This paper investigates the complexity of such compositional maps and contributes new theoretical results regarding the advantage of depth for neural networks with piecewise linear activation functions. In particular, our analysis is not specific to a single family of models, and as an example, we employ it for rectifier and maxout networks. We improve complexity bounds from pre-existing work and investigate the behavior of units in higher layers.
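
          A rough way to build intuition for the quantity studied here is to count linear regions empirically. The sketch below is illustrative only and not from the paper: it uses random Gaussian weights, a 1-D input, and a dense sampling grid, and enumerates the distinct ReLU activation patterns along a line. Each pattern corresponds to one linear region of the computed function, so counting pattern changes gives a lower bound on the region count.

```python
import numpy as np

# Illustrative sketch (not from the paper): count the linear regions of a small
# ReLU network on a 1-D input by tracking its activation patterns. Each linear
# region corresponds to one on/off pattern of the ReLU units; the paper's
# constructions make this count grow exponentially with depth, while random
# weights typically realize far fewer regions.

rng = np.random.default_rng(0)

def relu_layers(depth, width):
    """Random weights for `depth` hidden layers of `width` ReLU units, 1-D input."""
    sizes = [1] + [width] * depth
    return [(rng.standard_normal((m, n)), rng.standard_normal(m))
            for n, m in zip(sizes[:-1], sizes[1:])]

def activation_pattern(x, layers):
    """Concatenated on/off pattern of every ReLU unit at scalar input x."""
    h, bits = np.array([x]), []
    for W, b in layers:
        pre = W @ h + b
        bits.append(pre > 0)
        h = np.maximum(pre, 0.0)
    return tuple(np.concatenate(bits))

def count_regions(layers, grid=np.linspace(-3.0, 3.0, 20000)):
    """Lower-bound the region count by counting pattern changes along a grid."""
    patterns = [activation_pattern(x, layers) for x in grid]
    return 1 + sum(p != q for p, q in zip(patterns, patterns[1:]))

# Same total number of hidden units, different depth/width trade-offs.
print("depth 1, width 8:", count_regions(relu_layers(1, 8)))
print("depth 4, width 2:", count_regions(relu_layers(4, 2)))
```

          On a 1-D input, every boundary between linear regions shows up as a change in the activation pattern, so a sufficiently dense grid recovers most regions in the sampled interval.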

          Most cited references (5)


          Multilayer feedforward networks are universal approximators


            Multi-column deep neural network for traffic sign classification.

            We describe the approach that won the final phase of the German traffic sign recognition benchmark. Our method is the only one that achieved a better-than-human recognition rate of 99.46%. We use a fast, fully parameterizable GPU implementation of a Deep Neural Network (DNN) that does not require careful design of pre-wired feature extractors; instead, the features are learned in a supervised way. Combining various DNNs trained on differently preprocessed data into a Multi-Column DNN (MCDNN) further boosts recognition performance, making the system insensitive to variations in contrast and illumination as well.
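
            The multi-column scheme lends itself to a very small sketch. In the following, the function names, shapes, and stand-in "columns" are hypothetical; only the differently preprocessed inputs and the averaging of column outputs come from the abstract above.

```python
import numpy as np

# Hedged sketch of the multi-column idea (names, shapes, and the stand-in
# "columns" are hypothetical; only the differently-preprocessed inputs and the
# averaging of column outputs come from the abstract).

def preprocess_variants(image):
    """Differently preprocessed copies of one input image (illustrative choices)."""
    return [image,
            (image - image.mean()) / (image.std() + 1e-8),  # contrast normalization
            image / (image.max() + 1e-8)]                   # intensity rescaling

def mcdnn_predict(image, columns):
    """Average the class scores of one column per preprocessed variant."""
    variants = preprocess_variants(image)
    scores = np.mean([col(v) for col, v in zip(columns, variants)], axis=0)
    return int(np.argmax(scores))

# Dummy linear "columns" standing in for trained DNNs (10 classes, 32x32 input).
rng = np.random.default_rng(1)
columns = [lambda v, W=rng.standard_normal((10, 32 * 32)): W @ v.ravel()
           for _ in range(3)]
print(mcdnn_predict(rng.random((32, 32)), columns))
```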

              Deep Belief Networks Are Compact Universal Approximators


                Author and article information

                Journal: arXiv preprint
                History: submitted 2014-02-08; last revised 2014-06-07
                Article ID: arXiv:1402.1869
                Record ID: 025feeec-248b-4fcd-ad2b-9a3593592da9
                License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
                arXiv categories: stat.ML, cs.LG, cs.NE
                Subjects: Machine learning, Neural & Evolutionary computing, Artificial intelligence
