      Is Open Access

      A Unified Speaker Adaptation Method for Speech Synthesis using Transcribed and Untranscribed Speech with Backpropagation

      Preprint


          Abstract

          By representing speaker characteristics as a single fixed-length vector extracted solely from speech, we can train a neural multi-speaker speech synthesis model by conditioning the model on those vectors. This model can also be adapted to unseen speakers regardless of whether a transcript of the adaptation data is available. However, this setup restricts the speaker component to just a single bias vector, which in turn limits the performance of the adaptation process. In this study, we propose a novel speech synthesis model that can be adapted to unseen speakers by fine-tuning part or all of the network using either transcribed or untranscribed speech. Our methodology consists of two steps: first, we split the conventional acoustic model into a speaker-independent (SI) linguistic encoder and a speaker-adaptive (SA) acoustic decoder; second, we train an auxiliary acoustic encoder that can be used as a substitute for the linguistic encoder whenever linguistic features are unobtainable. The results of objective and subjective evaluations show that adaptation using either transcribed or untranscribed speech with our methodology achieved a reasonable level of performance with an extremely limited amount of data and greatly improved performance with more data. Surprisingly, adaptation with untranscribed speech surpassed its transcribed counterpart in the subjective test, which reveals the limitations of the conventional acoustic model and hints at potential directions for improvement.
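          The two-step split described above can be sketched as follows. This is a minimal, hypothetical illustration of the component wiring only (the class and component names are assumptions, not the authors' code): a shared SA acoustic decoder is driven by the SI linguistic encoder when a transcript is available, and by the auxiliary acoustic encoder otherwise.

```python
# Hypothetical sketch of the encoder/decoder split from the abstract.
# Component names and the freezing policy are illustrative assumptions.

class Component:
    def __init__(self, name, speaker_adaptive):
        self.name = name
        self.speaker_adaptive = speaker_adaptive  # fine-tuned during adaptation?

# Speaker-independent encoders: one for linguistic features (needs a
# transcript), one auxiliary encoder operating on speech alone.
LINGUISTIC_ENCODER = Component("SI linguistic encoder", speaker_adaptive=False)
ACOUSTIC_ENCODER = Component("auxiliary acoustic encoder", speaker_adaptive=False)
# The decoder is the speaker-adaptive part shared by both paths.
ACOUSTIC_DECODER = Component("SA acoustic decoder", speaker_adaptive=True)

def adaptation_setup(transcribed):
    """Pick the encoder for the adaptation data and list the tuned parts."""
    # With a transcript, linguistic features drive the decoder; without
    # one, the acoustic encoder is substituted for the linguistic encoder.
    encoder = LINGUISTIC_ENCODER if transcribed else ACOUSTIC_ENCODER
    tuned = [c.name for c in (encoder, ACOUSTIC_DECODER) if c.speaker_adaptive]
    return encoder.name, tuned
```

          In this sketch only the SA decoder is fine-tuned; the paper's "part of or all of the network" wording suggests the frozen/tuned boundary is a design choice.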

          Related collections

          Most cited references (13)

          • Phoneme recognition using time-delay neural networks
          • Tacotron: Towards End-to-End Speech Synthesis
          • Statistical Parametric Speech Synthesis Incorporating Generative Adversarial Networks

                Author and article information

                Journal
                Date: 18 June 2019
                Article: 1906.07414
                License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/

                Custom metadata
                Submitted to IEEE/ACM TASLP
                Subject classes: eess.AS, cs.CL, cs.LG, cs.SD
