
      Generating Intelligible Audio Speech From Visual Speech



Most cited references (25)


          Speech Recognition with Deep Recurrent Neural Networks

Recurrent neural networks (RNNs) are a powerful model for sequential data. End-to-end training methods such as Connectionist Temporal Classification make it possible to train RNNs for sequence labelling problems where the input-output alignment is unknown. The combination of these methods with the Long Short-term Memory RNN architecture has proved particularly fruitful, delivering state-of-the-art results in cursive handwriting recognition. However, RNN performance in speech recognition has so far been disappointing, with better results returned by deep feedforward networks. This paper investigates deep recurrent neural networks, which combine the multiple levels of representation that have proved so effective in deep networks with the flexible use of long-range context that empowers RNNs. When trained end-to-end with suitable regularisation, we find that deep Long Short-term Memory RNNs achieve a test set error of 17.7% on the TIMIT phoneme recognition benchmark, which to our knowledge is the best recorded score.
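As a rough illustration of the architecture this abstract describes, the sketch below (PyTorch, assumed here since the abstract names no toolkit) stacks several bidirectional LSTM layers and trains them end-to-end with CTC loss, so no frame-to-phoneme alignment is needed. The feature dimension, layer sizes, and class count (61 TIMIT phone labels plus the CTC blank) are illustrative assumptions, not the paper's exact configuration.

import torch
import torch.nn as nn

class DeepBiLSTM(nn.Module):
    def __init__(self, n_features=40, n_hidden=250, n_layers=3, n_classes=62):
        super().__init__()
        # Stacked (deep) bidirectional LSTM: multiple levels of representation
        # plus long-range context in both time directions.
        self.rnn = nn.LSTM(n_features, n_hidden, num_layers=n_layers,
                           bidirectional=True)
        # Per-frame class scores: 61 phone labels + 1 CTC blank (assumed sizes).
        self.proj = nn.Linear(2 * n_hidden, n_classes)

    def forward(self, x):                      # x: (time, batch, n_features)
        h, _ = self.rnn(x)
        return self.proj(h).log_softmax(dim=-1)

# CTC handles the unknown input-output alignment during training.
model = DeepBiLSTM()
ctc_loss = nn.CTCLoss(blank=0)

frames = torch.randn(200, 8, 40)               # dummy batch: 200 frames x 8 utterances
labels = torch.randint(1, 62, (8, 30))         # dummy phone label sequences
loss = ctc_loss(model(frames), labels,
                input_lengths=torch.full((8,), 200),
                target_lengths=torch.full((8,), 30))
loss.backward()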

            Perceptual linear predictive (PLP) analysis of speech.

A new technique for the analysis of speech, the perceptual linear predictive (PLP) technique, is presented and examined. This technique uses three concepts from the psychophysics of hearing to derive an estimate of the auditory spectrum: (1) the critical-band spectral resolution, (2) the equal-loudness curve, and (3) the intensity-loudness power law. The auditory spectrum is then approximated by an autoregressive all-pole model. A 5th-order all-pole model is effective in suppressing speaker-dependent details of the auditory spectrum. In comparison with conventional linear predictive (LP) analysis, PLP analysis is more consistent with human hearing. The effective second formant F2' and the 3.5-Bark spectral-peak integration theories of vowel perception are well accounted for. PLP analysis is computationally efficient and yields a low-dimensional representation of speech. These properties are found to be useful in speaker-independent automatic speech recognition.
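As a concrete sketch of the pipeline described above, the NumPy code below computes PLP-style coefficients for one windowed frame: critical-band (Bark) integration of the power spectrum, a simplified equal-loudness weighting, cube-root intensity-loudness compression, and a 5th-order all-pole fit via the Levinson-Durbin recursion. The Bark formula, triangular filter shapes, and equal-loudness curve are common approximations assumed here, not the paper's exact definitions.

import numpy as np

def hz_to_bark(f):
    # A common Bark-scale approximation (assumed; not the paper's exact curve).
    return 6.0 * np.arcsinh(np.asarray(f, dtype=float) / 600.0)

def plp_like(frame, sr=16000, order=5, n_bands=21):
    # 1) Short-term power spectrum of one Hamming-windowed frame.
    spec = np.abs(np.fft.rfft(frame * np.hamming(len(frame)))) ** 2
    bark = hz_to_bark(np.fft.rfftfreq(len(frame), 1.0 / sr))

    # 2) Critical-band integration: triangular filters spaced on the Bark axis.
    centers = np.linspace(bark.min() + 1.0, bark.max() - 1.0, n_bands)
    fb = np.maximum(0.0, 1.0 - np.abs(bark[None, :] - centers[:, None]))
    auditory = fb @ spec

    # 3) Equal-loudness weighting (simplified curve) and the
    #    intensity-loudness power law (cube-root compression).
    f_c = 600.0 * np.sinh(centers / 6.0)          # band centres back in Hz
    fsq = f_c ** 2
    eql = (fsq / (fsq + 1.6e5)) ** 2 * (fsq + 1.44e6) / (fsq + 9.61e6)
    loudness = (auditory * eql) ** (1.0 / 3.0)

    # 4) All-pole (autoregressive) model: autocorrelation from the inverse DFT
    #    of the auditory spectrum, then Levinson-Durbin for a 5th-order fit.
    r = np.fft.irfft(loudness)[: order + 1]
    a = np.zeros(order + 1)
    a[0], err = 1.0, r[0]
    for i in range(1, order + 1):
        k = -(r[i] + a[1:i] @ r[i - 1:0:-1]) / err
        a[1:i] = a[1:i] + k * a[i - 1:0:-1]
        a[i] = k
        err *= 1.0 - k * k
    return a, err          # LPC coefficients (a[0] = 1) and residual energy

# Example: coefficients for a 25 ms frame of noise at 16 kHz.
coeffs, gain = plp_like(np.random.randn(400))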

              Hybrid speech recognition with Deep Bidirectional LSTM


                Author and article information

Journal: IEEE/ACM Transactions on Audio, Speech, and Language Processing (IEEE/ACM Trans. Audio Speech Lang. Process.)
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
ISSN: 2329-9290 (print); 2329-9304 (electronic)
Publication date: September 2017
Volume 25, Issue 9, pp. 1751-1761
DOI: 10.1109/TASLP.2017.2716178
© 2017