      Time-Aware and View-Aware Video Rendering for Unsupervised Representation Learning

      Preprint


          Abstract

The recent success of deep learning has led to various effective representation learning methods for videos. However, current approaches to video representation require large amounts of human-labeled data for effective learning. We present an unsupervised representation learning framework to encode scene dynamics in videos captured from multiple viewpoints. The proposed framework has two main components: a Representation Learning Network (RL-NET), which learns a representation with the help of a Blending Network (BL-NET), and a Video Rendering Network (VR-NET), which is used for video synthesis. The framework takes as input video clips from different viewpoints and times, learns an internal representation, and uses this representation to render a video clip from an arbitrary given viewpoint and time. The ability of the proposed network to render video frames from arbitrary viewpoints and times enables it to learn a meaningful and robust representation of the scene dynamics. We demonstrate the effectiveness of the proposed method in rendering view-aware as well as time-aware video clips on two real-world datasets, UCF-101 and NTU-RGB+D. To further validate the effectiveness of the learned representation, we use it for the task of view-invariant activity classification, where we observe a significant improvement (~26%) on the NTU-RGB+D dataset compared to existing state-of-the-art methods.
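The abstract describes a pipeline in which RL-NET encodes the observed clips, BL-NET blends the per-clip codes into a single scene representation, and VR-NET renders a clip for a queried viewpoint and time. As a rough illustration of that data flow only, here is a minimal PyTorch-style sketch; the module names mirror the abstract, but the flattened clip features, layer shapes, mean-based blending, and MSE reconstruction loss are assumptions made for brevity and are not taken from the paper.

```python
# Structural sketch only: not the paper's architecture. All shapes and the
# blending/reconstruction choices below are illustrative assumptions.
import torch
import torch.nn as nn


class RLNet(nn.Module):
    """Encode one clip plus its viewpoint/time tag into a code vector (assumed MLP)."""
    def __init__(self, clip_dim, cond_dim, code_dim):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(clip_dim + cond_dim, 512), nn.ReLU(),
            nn.Linear(512, code_dim),
        )

    def forward(self, clip, cond):
        return self.encoder(torch.cat([clip, cond], dim=-1))


class BLNet(nn.Module):
    """Blend codes from all observed clips into one scene representation."""
    def __init__(self, code_dim):
        super().__init__()
        self.mix = nn.Linear(code_dim, code_dim)

    def forward(self, codes):               # codes: (batch, n_clips, code_dim)
        return self.mix(codes.mean(dim=1))  # order-invariant mean blending (assumption)


class VRNet(nn.Module):
    """Render a clip for an arbitrary query viewpoint and time."""
    def __init__(self, code_dim, cond_dim, clip_dim):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.Linear(code_dim + cond_dim, 512), nn.ReLU(),
            nn.Linear(512, clip_dim),
        )

    def forward(self, scene_code, query_cond):
        return self.decoder(torch.cat([scene_code, query_cond], dim=-1))


if __name__ == "__main__":
    clip_dim, cond_dim, code_dim = 1024, 8, 256   # flattened clip features for brevity
    rl_net = RLNet(clip_dim, cond_dim, code_dim)
    bl_net = BLNet(code_dim)
    vr_net = VRNet(code_dim, cond_dim, clip_dim)

    clips = torch.randn(4, 3, clip_dim)   # 4 scenes, 3 observed clips each
    conds = torch.randn(4, 3, cond_dim)   # viewpoint/time tag of each observed clip
    query = torch.randn(4, cond_dim)      # target viewpoint/time to render
    target = torch.randn(4, clip_dim)     # clip actually observed at the query

    codes = rl_net(clips, conds)          # per-clip representations
    scene = bl_net(codes)                 # blended scene representation
    rendered = vr_net(scene, query)       # rendered clip at the query viewpoint/time
    loss = nn.functional.mse_loss(rendered, target)  # unsupervised reconstruction objective
    loss.backward()
```

In this sketch the only learning signal is the rendered clip's reconstruction error against the clip actually observed at the query viewpoint and time, which is what makes the representation learning unsupervised in the sense the abstract describes.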


Most cited references (2)


          NTU RGB+D: A Large Scale Dataset for 3D Human Activity Analysis


            Deep Multimodal Feature Analysis for Action Recognition in RGB+D Videos

Single-modality action recognition on RGB or depth sequences has been extensively explored recently. It is generally accepted that each of these two modalities has different strengths and limitations for the task of action recognition. Therefore, analysis of RGB+D videos can help us better study the complementary properties of these two types of modalities and achieve higher levels of performance. In this paper, we propose a new deep autoencoder-based shared-specific feature factorization network to separate input multimodal signals into a hierarchy of components. Further, based on the structure of the features, a structured sparsity learning machine is proposed which utilizes mixed norms to apply regularization within components and group selection between them for better classification performance. Our experimental results show the effectiveness of our cross-modality feature analysis framework by achieving state-of-the-art accuracy for action classification on five challenging benchmark datasets.
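For readers unfamiliar with shared-specific factorization, the sketch below illustrates the general idea this cited abstract describes: an autoencoder splits RGB and depth features into a shared component plus modality-specific components, and a mixed-norm (L2,1) penalty regularizes within components and selects between them. The layer sizes, the simple additive sharing, and the penalty weight are illustrative assumptions, not the architecture of the cited paper.

```python
# Hedged sketch of shared-specific feature factorization with a mixed-norm
# penalty; sizes, sharing scheme, and weights are assumptions for illustration.
import torch
import torch.nn as nn


class SharedSpecificAE(nn.Module):
    def __init__(self, in_dim, shared_dim, specific_dim):
        super().__init__()
        self.enc_shared = nn.Linear(in_dim, shared_dim)   # component shared by RGB and depth
        self.enc_rgb = nn.Linear(in_dim, specific_dim)    # RGB-specific component
        self.enc_depth = nn.Linear(in_dim, specific_dim)  # depth-specific component
        self.dec_rgb = nn.Linear(shared_dim + specific_dim, in_dim)
        self.dec_depth = nn.Linear(shared_dim + specific_dim, in_dim)

    def forward(self, rgb, depth):
        shared = self.enc_shared(rgb) + self.enc_shared(depth)  # assumed additive sharing
        rec_rgb = self.dec_rgb(torch.cat([shared, self.enc_rgb(rgb)], dim=-1))
        rec_depth = self.dec_depth(torch.cat([shared, self.enc_depth(depth)], dim=-1))
        return rec_rgb, rec_depth


def mixed_norm(weight):
    """Group (L2,1) norm: L2 within each row (group), L1 across rows."""
    return weight.norm(dim=1).sum()


rgb, depth = torch.randn(8, 128), torch.randn(8, 128)
model = SharedSpecificAE(in_dim=128, shared_dim=32, specific_dim=32)
rec_rgb, rec_depth = model(rgb, depth)
recon = nn.functional.mse_loss(rec_rgb, rgb) + nn.functional.mse_loss(rec_depth, depth)
sparsity = mixed_norm(model.enc_rgb.weight) + mixed_norm(model.enc_depth.weight)
loss = recon + 1e-3 * sparsity  # regularization weight is an assumption
loss.backward()
```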

              Author and article information

Journal
Date: 26 November 2018
Article: 1811.10699
Record ID: 4e50de44-000c-450c-b58f-344e924c6ddd
License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Custom metadata: cs.CV
Computer vision & Pattern recognition
