
      Transfer Learning for Visual Categorization: A Survey


          Abstract

          <p class="first" id="d8162536e53">Conventional machine learning and data mining techniques build models from training data under a central assumption: future data lie in the same feature space and follow the same distribution as the training data. In practice, however, human-labeled training data are scarce, so the training data available in that feature space and distribution are often insufficient to avoid over-fitting. In real-world applications, related data from a different domain can supplement the data in the target domain and expand the prior knowledge available about future target data. Transfer learning addresses such cross-domain learning problems by extracting useful information from data in a related domain and transferring it to the target task. In recent years, transfer learning has been applied to visual categorization, where it efficiently solves typical problems such as view divergence in action recognition and concept drift in image classification. In this paper, we survey state-of-the-art transfer learning algorithms for visual categorization applications, including object recognition, image classification, and human action recognition. </p>
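The cross-domain idea the abstract describes, reusing knowledge learned from plentiful source-domain data when target-domain labels are scarce, can be illustrated with a minimal parameter-transfer sketch. This is a hedged, generic example, not any specific algorithm from the survey: a logistic regression classifier is trained on abundant source data and its weights seed a short fine-tuning run on a handful of target examples (all data and names here are synthetic illustrations).

```python
import numpy as np

rng = np.random.default_rng(0)

def train_logreg(X, y, w=None, steps=200, lr=0.5):
    """Batch gradient descent for logistic regression.

    Passing w seeds the weights, which is how source-domain
    knowledge is transferred to the target task."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)          # gradient step
    return w

# Source domain: plentiful labeled data.
Xs = rng.normal(size=(500, 2))
ys = (Xs[:, 0] + Xs[:, 1] > 0).astype(float)

# Target domain: same underlying rule, shifted features, very few labels.
Xt = rng.normal(size=(5, 2)) + 0.3
yt = (Xt[:, 0] + Xt[:, 1] > 0.6).astype(float)

w_src = train_logreg(Xs, ys)                           # learn on source
w_tgt = train_logreg(Xt, yt, w=w_src.copy(), steps=20) # fine-tune on target
```

Starting the target model from `w_src` rather than from scratch is the simplest instance of the "extract information from a related domain" principle; with only five target labels, training from zero would be prone to exactly the over-fitting the abstract mentions.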


          Most cited references (62)


          Image Denoising Via Sparse and Redundant Representations Over Learned Dictionaries


            One-shot learning of object categories.

            Learning visual models of object categories notoriously requires hundreds or thousands of training examples. We show that it is possible to learn much information about a category from just one, or a handful, of images. The key insight is that, rather than learning from scratch, one can take advantage of knowledge coming from previously learned categories, no matter how different these categories might be. We explore a Bayesian implementation of this idea. Object categories are represented by probabilistic models. Prior knowledge is represented as a probability density function on the parameters of these models. The posterior model for an object category is obtained by updating the prior in the light of one or more observations. We test a simple implementation of our algorithm on a database of 101 diverse object categories. We compare category models learned by an implementation of our Bayesian approach to models learned by maximum likelihood (ML) and maximum a posteriori (MAP) methods. We find that on a database of more than 100 categories, the Bayesian approach produces informative models when the number of training examples is too small for other methods to operate successfully.
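The Bayesian update this abstract outlines, a prior over model parameters distilled from previously learned categories, revised in the light of one observation, can be sketched in its simplest conjugate form. This is an illustrative toy, not the paper's actual model: a Gaussian prior on a category's mean parameter is updated after a single example with known observation noise (all numbers are hypothetical).

```python
def posterior_mean(prior_mu, prior_var, obs, obs_var):
    """Conjugate Gaussian update: posterior over a category's mean
    parameter after observing a single example.

    The prior (prior_mu, prior_var) stands in for knowledge distilled
    from previously learned categories; obs is the one new example."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)   # precisions add
    post_mu = post_var * (prior_mu / prior_var + obs / obs_var)
    return post_mu, post_var

# A broad prior (variance 4) pulled toward a single observation at 2.0.
mu, var = posterior_mean(prior_mu=0.0, prior_var=4.0, obs=2.0, obs_var=1.0)
```

Even one observation shrinks the variance from 4.0 to 0.8 and moves the mean most of the way to the data, which is the mechanism that lets a one-shot learner produce an informative model where maximum likelihood, with nothing but the single example, cannot.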

              HMDB: A large video database for human motion recognition


                Author and article information

                Journal: IEEE Transactions on Neural Networks and Learning Systems (IEEE Trans. Neural Netw. Learning Syst.)
                Publisher: Institute of Electrical and Electronics Engineers (IEEE)
                ISSN: 2162-237X (print), 2162-2388 (electronic)
                Published: May 2015
                Volume 26, Issue 5, pp. 1019-1034
                DOI: 10.1109/TNNLS.2014.2330900
                PMID: 25014970
                ScienceOpen ID: 2a91bfd9-34b7-499a-9892-4260ac2fa82d
                © 2015