
      Beyond Manual Tuning of Hyperparameters

      KI - Künstliche Intelligenz
      Springer Nature



Most cited references (25)


          Deep Learning in Neural Networks: An Overview

          (2014)
          In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarises relevant work, much of it from the previous millennium. Shallow and deep learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks.
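The survey's central idea, credit assignment from an observed error back to the individual weights that caused it, can be illustrated with a minimal sketch. The single sigmoid unit, AND dataset, learning rate, and iteration count below are illustrative choices for the depth-1 case, not anything specified in the paper:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy supervised learner: one sigmoid unit trained by gradient descent.
# Each update backpropagates the squared error to every weight, the
# shortest possible "credit assignment path" from effect to cause.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND function
random.seed(0)
w = [random.uniform(-1, 1) for _ in range(2)]
b = 0.0
lr = 0.5

def total_loss():
    return sum((sigmoid(w[0] * x0 + w[1] * x1 + b) - y) ** 2
               for (x0, x1), y in data)

before = total_loss()
for _ in range(2000):
    for (x0, x1), y in data:
        p = sigmoid(w[0] * x0 + w[1] * x1 + b)
        g = 2 * (p - y) * p * (1 - p)  # dLoss/d(pre-activation)
        w[0] -= lr * g * x0            # assign credit to each weight
        w[1] -= lr * g * x1
        b -= lr * g
after = total_loss()
```

In a deep network the same chain-rule step is applied layer by layer, which is what makes the credit assignment paths the survey uses to distinguish shallow from deep learners long and "possibly learnable".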

            Sequential Model-Based Optimization for General Algorithm Configuration


              Multi-column Deep Neural Networks for Image Classification

              Traditional methods of computer vision and machine learning cannot match human performance on tasks such as the recognition of handwritten digits or traffic signs. Our biologically plausible deep artificial neural network architectures can. Small (often minimal) receptive fields of convolutional winner-take-all neurons yield large network depth, resulting in roughly as many sparsely connected neural layers as found in mammals between retina and visual cortex. Only winner neurons are trained. Several deep neural columns become experts on inputs preprocessed in different ways; their predictions are averaged. Graphics cards allow for fast training. On the very competitive MNIST handwriting benchmark, our method is the first to achieve near-human performance. On a traffic sign recognition benchmark it outperforms humans by a factor of two. We also improve the state-of-the-art on a plethora of common image classification benchmarks.
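The ensemble step the abstract describes, several columns trained on differently preprocessed inputs whose predictions are averaged, can be sketched as follows. The three score vectors and preprocessing labels are hypothetical stand-ins, not the paper's deep convolutional columns:

```python
# Multi-column voting: each "column" is a classifier trained on a
# differently preprocessed view of the same input; the ensemble
# averages their per-class scores and predicts the argmax.
def average_columns(column_scores):
    """Average per-class score vectors across columns."""
    n_cols = len(column_scores)
    n_classes = len(column_scores[0])
    return [sum(col[c] for col in column_scores) / n_cols
            for c in range(n_classes)]

# Hypothetical per-class scores from three columns for one input:
scores = [
    [0.7, 0.2, 0.1],  # column trained on raw input
    [0.6, 0.3, 0.1],  # column trained on contrast-normalised input
    [0.2, 0.5, 0.3],  # column trained on width-normalised input
]
avg = average_columns(scores)
pred = max(range(len(avg)), key=avg.__getitem__)  # predicted class
```

Because each column sees a different distortion of the input, their errors tend to be decorrelated, which is why the simple average outperforms any single column.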

                Author and article information

Journal
KI - Künstliche Intelligenz (Künstl Intell)
Springer Nature
ISSN: 0933-1875 (print), 1610-1987 (electronic)
Published online: July 11 2015
Issue: November 2015, Volume 29, Issue 4, Pages 329-337
DOI: 10.1007/s13218-015-0381-0
© 2015
