
      SALT: Subspace Alignment as an Auxiliary Learning Task for Domain Adaptation

      Preprint


          Abstract

Unsupervised domain adaptation aims to transfer and adapt knowledge learned from a labeled source domain to an unlabeled target domain. Key components of unsupervised domain adaptation include: (a) maximizing performance on the source, and (b) aligning the source and target domains. Traditionally, these tasks have either been considered separately, or assumed to be addressed together implicitly by high-capacity feature extractors. In this paper, we advance a third broad approach, which we term SALT. The core idea is to consider alignment as an auxiliary task to the primary task of maximizing performance on the source. The auxiliary task is made rather simple by assuming a tractable data geometry in the form of subspaces. We synergistically allow certain parameters derived from the closed-form auxiliary solution to be affected by gradients from the primary task. The proposed approach represents a unique fusion of geometric and model-based alignment with gradient flows from a data-driven primary task. SALT is simple, rooted in theory, and outperforms the state of the art on multiple standard benchmarks.
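          Since the abstract states the geometric step only in words, a minimal sketch may help. The following illustrates classical subspace alignment (Fernando et al., 2013), the kind of tractable closed-form auxiliary solution the abstract alludes to. The function name, the PCA-based subspaces, and the dimension d are illustrative assumptions, and SALT's actual coupling of these parameters with primary-task gradients is not reproduced here.

          ```python
          import numpy as np

          def subspace_align(Xs, Xt, d=10):
              """Hypothetical sketch: classical subspace alignment.

              Xs : (n_s x D) labeled source features
              Xt : (n_t x D) unlabeled target features
              d  : subspace dimension (illustrative choice)
              """
              Xs_c, Xt_c = Xs - Xs.mean(0), Xt - Xt.mean(0)
              # Top-d principal directions of each domain (columns form a basis)
              Ps = np.linalg.svd(Xs_c, full_matrices=False)[2][:d].T  # D x d
              Pt = np.linalg.svd(Xt_c, full_matrices=False)[2][:d].T  # D x d
              # Closed-form alignment matrix: M* = Ps^T Pt minimizes ||Ps M - Pt||_F
              M = Ps.T @ Pt
              # Source features mapped through the aligned basis; target through its own
              Zs = Xs_c @ Ps @ M
              Zt = Xt_c @ Pt
              return Zs, Zt, M
          ```

          A source classifier trained on Zs can then be applied to Zt. Per the abstract, SALT would additionally let parameters derived from this closed-form solution receive gradients from the primary classification loss rather than keeping them frozen.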


          Most cited references (14)


          Domain adaptation via transfer component analysis.

          Domain adaptation allows knowledge from a source domain to be transferred to a different but related target domain. Intuitively, discovering a good feature representation across domains is crucial. In this paper, we first propose to find such a representation through a new learning method, transfer component analysis (TCA), for domain adaptation. TCA tries to learn some transfer components across domains in a reproducing kernel Hilbert space using maximum mean discrepancy. In the subspace spanned by these transfer components, data properties are preserved and data distributions in different domains are close to each other. As a result, with the new representations in this subspace, we can apply standard machine learning methods to train classifiers or regression models in the source domain for use in the target domain. Furthermore, in order to uncover the knowledge hidden in the relations between the data labels from the source and target domains, we extend TCA in a semisupervised learning setting, which encodes label information into the learning of transfer components. We call this extension semisupervised TCA. The main contribution of our work is that we propose a novel dimensionality reduction framework for reducing the distance between domains in a latent space for domain adaptation. We propose both unsupervised and semisupervised feature extraction approaches, which can dramatically reduce the distance between domain distributions by projecting data onto the learned transfer components. Finally, our approach can handle large datasets and naturally leads to out-of-sample generalization. The effectiveness and efficiency of our approach are verified by experiments on five toy datasets and two real-world applications: cross-domain indoor WiFi localization and cross-domain text classification.
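            Since this abstract walks through TCA's construction, a small sketch may make it concrete. Below is a hedged NumPy rendering of the standard TCA eigenproblem (Pan et al., 2011), assuming a precomputed kernel matrix over the stacked source and target samples; the function name, the trade-off parameter mu, and the linear-algebra shortcuts are illustrative choices, not the authors' code.

            ```python
            import numpy as np

            def tca_embed(K, n_s, n_t, dim=2, mu=1.0):
                """Sketch of Transfer Component Analysis.

                K   : (n x n) kernel over stacked source+target samples, n = n_s + n_t
                dim : number of transfer components to keep
                mu  : trade-off (regularization) parameter, assumed > 0
                """
                n = n_s + n_t
                # MMD coefficient matrix L: 1/n_s^2 (src,src), 1/n_t^2 (tgt,tgt),
                # -1/(n_s*n_t) across domains; built as a rank-one outer product
                e = np.vstack([np.full((n_s, 1), 1.0 / n_s),
                               np.full((n_t, 1), -1.0 / n_t)])
                L = e @ e.T
                # Centering matrix H keeps the embedded data zero-mean
                H = np.eye(n) - np.ones((n, n)) / n
                # Transfer components: top eigenvectors of (K L K + mu I)^{-1} K H K
                M = np.linalg.solve(K @ L @ K + mu * np.eye(n), K @ H @ K)
                vals, vecs = np.linalg.eig(M)
                W = vecs[:, np.argsort(-vals.real)[:dim]].real
                return K @ W  # (n x dim); first n_s rows are source, rest target
            ```

            With the returned embeddings, a standard classifier trained on the first n_s rows (the labeled source) can be applied to the remaining target rows, which is exactly the workflow the abstract describes.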

            Adversarial Discriminative Domain Adaptation


              A theory of learning from different domains


                Author and article information

                Journal
                Published: 10 June 2019
                Article: arXiv:1906.04338
                Record ID: d63b740b-5151-4687-a0a7-0e3a7fa39320
                License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/

                Custom metadata
                Subject categories: stat.ML, cs.CV, cs.LG
                Keywords: Computer vision & pattern recognition; Machine learning; Artificial intelligence
