      Is Open Access

      AutoML: A Survey of the State-of-the-Art

      Preprint


          Abstract

          Deep learning has penetrated all aspects of our lives and brought us great convenience. However, building a high-quality deep learning system for a specific task is not only time-consuming but also requires substantial resources and human expertise, which hinders the development of deep learning in both industry and academia. To alleviate this problem, a growing number of research projects focus on automated machine learning (AutoML). In this paper, we provide a comprehensive and up-to-date study of the state of the art in AutoML. First, we introduce AutoML techniques in detail according to the machine learning pipeline. Then we summarize existing Neural Architecture Search (NAS) research, one of the most popular topics in AutoML, and compare the models generated by NAS algorithms with human-designed models. Finally, we present several open problems for future research.
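To make the NAS setting the abstract describes concrete, here is a minimal sketch of the simplest NAS baseline, random search over a discrete architecture space. The search space, the `proxy_score` function, and all names are hypothetical illustrations; a real NAS run would train and validate each candidate instead of calling a proxy.

```python
import random

random.seed(42)

# Hypothetical discrete search space (not from the survey).
SEARCH_SPACE = {
    "depth": [2, 4, 8],
    "width": [16, 32, 64],
    "activation": ["relu", "tanh"],
}

def sample_architecture():
    """Draw one candidate architecture from the search space."""
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def proxy_score(arch):
    """Stand-in for validation accuracy; a real NAS run trains/evaluates here."""
    return arch["depth"] * arch["width"] * (1.1 if arch["activation"] == "relu" else 1.0)

def random_search(trials=20):
    """Simplest NAS baseline: sample candidates, score each, keep the best."""
    best, best_score = None, float("-inf")
    for _ in range(trials):
        arch = sample_architecture()
        score = proxy_score(arch)
        if score > best_score:
            best, best_score = arch, score
    return best

best = random_search()
```

More sophisticated NAS methods covered by surveys like this one replace the sampling step (e.g. with a reinforcement-learning controller or evolutionary operators) and the scoring step (e.g. with weight sharing), but keep this same sample-evaluate-select loop.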

          Related collections

          Most cited references (24)


          Squeeze-and-Excitation Networks


            Evolving neural networks through augmenting topologies.

            An important question in neuroevolution is how to gain an advantage from evolving neural network topologies along with weights. We present a method, NeuroEvolution of Augmenting Topologies (NEAT), which outperforms the best fixed-topology method on a challenging benchmark reinforcement learning task. We claim that the increased efficiency is due to (1) employing a principled method of crossover of different topologies, (2) protecting structural innovation using speciation, and (3) incrementally growing from minimal structure. We test this claim through a series of ablation studies that demonstrate that each component is necessary to the system as a whole and to each other. What results is significantly faster learning. NEAT is also an important contribution to GAs because it shows how it is possible for evolution to both optimize and complexify solutions simultaneously, offering the possibility of evolving increasingly complex solutions over generations, and strengthening the analogy with biological evolution.
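The abstract names three NEAT mechanisms: crossover that aligns different topologies, speciation to protect structural innovation, and incremental growth from minimal structure. The sketch below illustrates those three ideas in miniature; it is a hypothetical toy, not the NEAT implementation, and genomes are reduced to lists of `(innovation_id, weight)` connection genes.

```python
import random

random.seed(0)

def new_genome(next_id):
    """Incremental growth starts minimal: a single connection gene."""
    return [(next_id, random.uniform(-1, 1))]

def mutate(genome, next_id):
    """Structural mutation: append one new connection gene with a fresh innovation id."""
    return genome + [(next_id, random.uniform(-1, 1))]

def crossover(a, b):
    """Historical markings: align genes by innovation id so unlike topologies can mate."""
    genes_a, genes_b = dict(a), dict(b)
    return [(gid, genes_a.get(gid, genes_b.get(gid)))
            for gid in sorted(set(genes_a) | set(genes_b))]

def distance(a, b):
    """Topology distance: fraction of non-matching innovation ids."""
    ids_a, ids_b = {g for g, _ in a}, {g for g, _ in b}
    return len(ids_a ^ ids_b) / max(len(ids_a | ids_b), 1)

def speciate(population, threshold=0.5):
    """Group genomes into species so new structures compete only among themselves."""
    species = []
    for g in population:
        for s in species:
            if distance(g, s[0]) < threshold:
                s.append(g)
                break
        else:
            species.append([g])
    return species

parent = new_genome(0)       # minimal structure
child = mutate(parent, 1)    # grown by one gene
```

The ablation studies mentioned in the abstract remove exactly these pieces one at a time (e.g. speciating by something other than topology distance, or starting from full rather than minimal structure) to show that each is necessary.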

              Deformable Convolutional Networks


                Author and article information

                Published: 02 August 2019
                arXiv ID: 1908.00709
                Record ID: 8c659591-45ca-4b76-a795-66b4a3507023
                License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
                Comments: 17 pages
                Subjects: cs.LG, cs.CV, stat.ML
                Keywords: Computer vision & Pattern recognition, Machine learning, Artificial intelligence
