Large-Scale Visual Speech Recognition

Preprint · Open Access

      Abstract

      This work presents a scalable solution to open-vocabulary visual speech recognition. To achieve this, we constructed the largest existing visual speech recognition dataset, consisting of pairs of text and video clips of faces speaking (3,886 hours of video). In tandem, we designed and trained an integrated lipreading system, consisting of a video processing pipeline that maps raw video to stable videos of lips and sequences of phonemes, a scalable deep neural network that maps the lip videos to sequences of phoneme distributions, and a production-level speech decoder that outputs sequences of words. The proposed system achieves a word error rate (WER) of 40.9% as measured on a held-out set. In comparison, professional lipreaders achieve either 86.4% or 92.9% WER on the same dataset when having access to additional types of contextual information. Our approach significantly improves on other lipreading approaches, including variants of LipNet and of Watch, Attend, and Spell (WAS), which are only capable of 89.8% and 76.8% WER respectively.
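The headline metric above is word error rate (WER): the word-level Levenshtein (edit) distance between the reference transcript and the system's hypothesis, divided by the number of reference words. A minimal sketch of the standard computation (not the paper's evaluation code):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + insertions + deletions) / reference length.
    A minimal dynamic-programming sketch, not the paper's scoring pipeline."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j].
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)
```

Under this definition, one substitution in a three-word reference yields a WER of 1/3; a WER above 100% is possible when the hypothesis contains many insertions.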

      Related collections

      Most cited references (36)


      Long Short-Term Memory


        Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups


          FaceNet: A Unified Embedding for Face Recognition and Clustering

           (2015)
          Despite significant recent advances in the field of face recognition, implementing face verification and recognition efficiently at scale presents serious challenges to current approaches. In this paper we present a system, called FaceNet, that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. Once this space has been produced, tasks such as face recognition, verification and clustering can be easily implemented using standard techniques with FaceNet embeddings as feature vectors. Our method uses a deep convolutional network trained to directly optimize the embedding itself, rather than an intermediate bottleneck layer as in previous deep learning approaches. To train, we use triplets of roughly aligned matching / non-matching face patches generated using a novel online triplet mining method. The benefit of our approach is much greater representational efficiency: we achieve state-of-the-art face recognition performance using only 128-bytes per face. On the widely used Labeled Faces in the Wild (LFW) dataset, our system achieves a new record accuracy of 99.63%. On YouTube Faces DB it achieves 95.12%. Our system cuts the error rate in comparison to the best published result by 30% on both datasets. We also introduce the concept of harmonic embeddings, and a harmonic triplet loss, which describe different versions of face embeddings (produced by different networks) that are compatible to each other and allow for direct comparison between each other.
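The triplet training described above can be illustrated with a minimal NumPy sketch: the loss penalizes any triplet in which the anchor is not closer to the matching (positive) embedding than to the non-matching (negative) one by at least a margin. The margin value here is illustrative, not taken from the paper's configuration:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge loss over squared Euclidean distances in embedding space.
    A minimal sketch; `margin` is an illustrative value."""
    d_ap = float(np.sum((anchor - positive) ** 2))  # anchor-positive distance
    d_an = float(np.sum((anchor - negative) ** 2))  # anchor-negative distance
    # Zero loss once the negative is at least `margin` farther than the positive.
    return max(d_ap - d_an + margin, 0.0)
```

In practice the loss is averaged over mined triplets and backpropagated through the embedding network; the "online triplet mining" mentioned above selects hard matching/non-matching pairs within each batch rather than at random.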

             Author and article information

             Published: 13 July 2018
             arXiv: 1807.05162
             License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
             Categories: cs.CV, cs.LG
             Subjects: Computer vision & Pattern recognition; Artificial intelligence
