      Is Open Access

      Representation Flow for Action Recognition

      Preprint


          Abstract

          In this paper, we propose a convolutional layer inspired by optical flow algorithms to learn motion representations. Our representation flow layer is a fully-differentiable layer designed to optimally capture the 'flow' of any representation channel within a convolutional neural network. Its parameters for iterative flow optimization are learned in an end-to-end fashion together with the other model parameters, maximizing the action recognition performance. Furthermore, we newly introduce the concept of learning 'flow of flow' representations by stacking multiple representation flow layers. We conducted extensive experimental evaluations, confirming its advantages over previous recognition models using traditional optical flows in both computational speed and performance.
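To make the "iterative flow optimization" idea concrete, here is a minimal, hypothetical sketch of the kind of unrolled iteration such a layer performs, reduced to 1-D signals in pure Python. The function name and the fixed hyper-parameters (`iters`, `step`, `smooth`) are our illustrative choices, not the paper's: in the actual layer the updates run on 2-D CNN feature channels and these quantities are learned end-to-end along with the rest of the network.

```python
# Hypothetical sketch: Horn-Schunck-style iterative flow estimation on
# 1-D signals, illustrating the inner loop that a representation flow
# layer unrolls. In the paper, the update parameters (here the fixed
# `step` and `smooth` constants) are learned end-to-end.

def estimate_flow(f1, f2, iters=50, step=0.5, smooth=0.1):
    """Estimate per-position displacement u so that f1 'moves' toward f2."""
    n = len(f1)
    u = [0.0] * n
    # spatial gradient of f1 (central differences, clamped at borders)
    gx = [(f1[min(i + 1, n - 1)] - f1[max(i - 1, 0)]) / 2.0 for i in range(n)]
    # temporal gradient between the two signals
    gt = [f2[i] - f1[i] for i in range(n)]
    for _ in range(iters):
        # neighbour average acts as a crude smoothness (regularization) term
        ubar = [(u[max(i - 1, 0)] + u[min(i + 1, n - 1)]) / 2.0
                for i in range(n)]
        # damped update from the linearized brightness-constancy residual
        u = [ubar[i] - step * gx[i] * (gt[i] + gx[i] * ubar[i])
             / (smooth + gx[i] * gx[i])
             for i in range(n)]
    return u

# A bump shifted one position to the right should yield positive flow.
f1 = [0.0, 0.0, 1.0, 2.0, 1.0, 0.0, 0.0]
f2 = [0.0, 0.0, 0.0, 1.0, 2.0, 1.0, 0.0]
print(estimate_flow(f1, f2))
```

Stacking a second such layer on the output of the first is the paper's 'flow of flow' idea: the second iteration operates on a flow representation rather than raw features.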


          Most cited references 5


          HMDB: A large video database for human motion recognition


            Beyond Short Snippets: Deep Networks for Video Classification

            Convolutional neural networks (CNNs) have been extensively applied for image recognition problems giving state-of-the-art results on recognition, detection, segmentation and retrieval. In this work we propose and evaluate several deep neural network architectures to combine image information across a video over longer time periods than previously attempted. We propose two methods capable of handling full length videos. The first method explores various convolutional temporal feature pooling architectures, examining the various design choices which need to be made when adapting a CNN for this task. The second proposed method explicitly models the video as an ordered sequence of frames. For this purpose we employ a recurrent neural network that uses Long Short-Term Memory (LSTM) cells which are connected to the output of the underlying CNN. Our best networks exhibit significant performance improvements over previously published results on the Sports 1 million dataset (73.1% vs. 60.9%) and the UCF-101 datasets with (88.6% vs. 88.0%) and without additional optical flow information (82.6% vs. 72.8%).
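The simpler of the two methods described above, temporal feature pooling, can be sketched in a few lines. This is a hypothetical illustration with made-up stand-in features, not the paper's code: each row represents the feature vector a CNN would produce for one frame, and an element-wise max over time collapses them into a single clip-level descriptor.

```python
# Hypothetical sketch of temporal max pooling: per-frame feature
# vectors (stand-ins for CNN outputs) are collapsed into one
# clip-level descriptor by an element-wise max over time.

def temporal_max_pool(frame_features):
    """Element-wise max over a list of equal-length feature vectors."""
    return [max(col) for col in zip(*frame_features)]

clip = [
    [0.1, 0.9, 0.3],  # frame 1 features
    [0.4, 0.2, 0.8],  # frame 2 features
    [0.5, 0.1, 0.2],  # frame 3 features
]
print(temporal_max_pool(clip))  # [0.5, 0.9, 0.8]
```

The paper's second method replaces this order-invariant pooling with an LSTM over the per-frame features, so that frame ordering can influence the prediction.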

              Dynamic Image Networks for Action Recognition


                Author and article information

                Published: 02 October 2018
                Identifier: arXiv:1810.01455
                License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
                Subjects: cs.CV (Computer vision & Pattern recognition)
