      Is Open Access

      A Few Shot Classification Methods Based on Multiscale Relational Networks

      Applied Sciences
      MDPI AG


          Abstract

          Learning from a single sample or a few samples is called few-shot learning. It aims to reduce deep learning’s dependence on large sample sizes. Deep learning achieves few-shot learning through meta-learning: “learning how to learn by using previous experience”. This paper therefore considers how deep learning methods can use meta-learning to learn and generalize from small sample sizes in image classification. The main contents are as follows. Practicing learning across a wide range of tasks enables deep learning methods to exploit previous empirical knowledge. However, this approach depends on the quality of feature extraction and on the choice of metric between the support set and the target set. To address these problems, this paper designs a multi-scale relational network (MSRN). The experimental results show that the simple design of the MSRN achieves higher performance: it improves accuracy on the datasets with fewer samples and alleviates overfitting. However, to ensure that a uniform metric applies across all tasks, few-shot classification based on metric learning requires the task sets to be drawn from the same distribution.
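          The metric-learning setup the abstract describes — comparing a query sample against a labelled support set using multi-scale features and a relation score — can be sketched in miniature. This is a toy illustration, not the paper's MSRN: the pooling-based multi-scale embedding, the cosine relation score, and all function names here are assumptions for demonstration only (the actual model uses learned convolutional embeddings and a learned relation module).

```python
import numpy as np

def embed_multiscale(x, scales=(1, 2, 4)):
    # Toy multi-scale embedding: average-pool a 1-D feature vector at
    # several scales and concatenate the results.
    feats = []
    for s in scales:
        n = len(x) // s * s
        feats.append(x[:n].reshape(-1, s).mean(axis=1))
    return np.concatenate(feats)

def relation_score(f_query, f_support):
    # Cosine similarity stands in for the learned relation module.
    num = float(np.dot(f_query, f_support))
    den = float(np.linalg.norm(f_query) * np.linalg.norm(f_support)) + 1e-12
    return num / den

def classify_episode(query, support_sets):
    # support_sets: {class_label: [sample feature vectors]}
    # Predict the class whose support samples relate most to the query.
    f_q = embed_multiscale(query)
    scores = {
        label: max(relation_score(f_q, embed_multiscale(s)) for s in samples)
        for label, samples in support_sets.items()
    }
    return max(scores, key=scores.get)
```

In an episodic few-shot protocol, each "episode" supplies such a small support set and a query; the abstract's final caveat corresponds to the requirement that episodes be drawn from the same task distribution for one relation metric to transfer.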

          Related collections

          Most cited references (20)


          Learning representations by back-propagating errors


            One-shot learning of object categories.

            Learning visual models of object categories notoriously requires hundreds or thousands of training examples. We show that it is possible to learn much information about a category from just one, or a handful, of images. The key insight is that, rather than learning from scratch, one can take advantage of knowledge coming from previously learned categories, no matter how different these categories might be. We explore a Bayesian implementation of this idea. Object categories are represented by probabilistic models. Prior knowledge is represented as a probability density function on the parameters of these models. The posterior model for an object category is obtained by updating the prior in the light of one or more observations. We test a simple implementation of our algorithm on a database of 101 diverse object categories. We compare category models learned by an implementation of our Bayesian approach to models learned by Maximum Likelihood (ML) and Maximum A Posteriori (MAP) methods. We find that on a database of more than 100 categories, the Bayesian approach produces informative models when the number of training examples is too small for other methods to operate successfully.
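            The Bayesian mechanism this abstract describes — a prior over category-model parameters, updated by one or a few observations — can be illustrated with a minimal conjugate Gaussian update. This sketch is not the authors' implementation (their priors sit over full probabilistic object models); the scalar parameter and known observation noise here are simplifying assumptions.

```python
def posterior_update(prior_mean, prior_var, obs, obs_var):
    """One-shot Bayesian update of a Gaussian mean with known noise.

    The prior (prior_mean, prior_var) stands in for knowledge transferred
    from previously learned categories; a single observation `obs` of the
    new category yields the posterior in closed form.
    """
    precision = 1.0 / prior_var + 1.0 / obs_var
    post_var = 1.0 / precision
    # Precision-weighted combination of the prior belief and the one example.
    post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
    return post_mean, post_var
```

With a unit-variance prior at 0 and one observation at 2 (also unit noise variance), `posterior_update(0.0, 1.0, 2.0, 1.0)` returns `(1.0, 0.5)`: one example halves the uncertainty and pulls the estimate halfway, which is why an informative prior makes a single example useful where ML estimation would fail.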

              Building Machines That Learn and Think Like People.

              Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.

                Author and article information

                Contributors
                Journal
                Applied Sciences (ASPCC7)
                MDPI AG
                ISSN: 2076-3417
                Published: April 17 2022 (April 2022 issue)
                Volume 12, Issue 8, Article 4059
                DOI: 10.3390/app12084059
                8f2c854e-5f30-4564-9dcd-06403be65e0f
                © 2022
                License: https://creativecommons.org/licenses/by/4.0/
