
      GraphSpeech: Syntax-Aware Graph Attention Network For Neural Speech Synthesis

      Preprint


          Abstract

          Attention-based end-to-end text-to-speech synthesis (TTS) is superior to conventional statistical methods in many ways. Transformer-based TTS is one such successful implementation. While Transformer TTS models the speech frame sequence well with a self-attention mechanism, it does not associate input text with output utterances from a syntactic point of view at the sentence level. We propose a novel neural TTS model, denoted as GraphSpeech, that is formulated under a graph neural network framework. GraphSpeech explicitly encodes the syntactic relations of the input lexical tokens in a sentence and incorporates this information to derive syntactically motivated character embeddings for the TTS attention mechanism. Experiments show that GraphSpeech consistently outperforms the Transformer TTS baseline in terms of the spectrum and prosody rendering of utterances.
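
          The core idea described in the abstract — letting token embeddings attend to their neighbours in a syntactic graph so that syntactic context flows into the encoder representations — can be sketched with a masked multi-head attention layer. The following PyTorch snippet is a minimal illustration, not the authors' implementation: the layer structure, dimensions, and the toy chain-shaped dependency graph are all assumptions made for demonstration.

          # Minimal sketch (not the paper's actual model): graph attention
          # where each token may attend only to its dependency-parse
          # neighbours, given by an adjacency matrix with self-loops.
          import torch
          import torch.nn as nn

          class SyntaxGraphAttention(nn.Module):
              def __init__(self, dim: int, num_heads: int = 4):
                  super().__init__()
                  self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
                  self.norm = nn.LayerNorm(dim)

              def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
                  # x:   (batch, seq, dim) token/character embeddings
                  # adj: (batch, seq, seq), 1 where a dependency edge or
                  #      self-loop connects tokens i and j, 0 elsewhere
                  mask = adj == 0  # True = attention blocked
                  # Expand to (batch * num_heads, seq, seq) as expected
                  # by MultiheadAttention's 3-D boolean attn_mask
                  mask = mask.repeat_interleave(self.attn.num_heads, dim=0)
                  out, _ = self.attn(x, x, x, attn_mask=mask)
                  return self.norm(x + out)  # residual + layer norm

          # Toy usage: 4 tokens, a chain-shaped parse 0-1, 1-2, 2-3
          # plus self-loops so every token can attend to itself.
          if __name__ == "__main__":
              dim, seq = 16, 4
              x = torch.randn(1, seq, dim)
              adj = torch.eye(seq).unsqueeze(0)
              for i, j in [(0, 1), (1, 2), (2, 3)]:
                  adj[0, i, j] = adj[0, j, i] = 1.0
              layer = SyntaxGraphAttention(dim)
              print(layer(x, adj).shape)  # torch.Size([1, 4, 16])

          Stacking several such layers lets information propagate along longer dependency paths; the paper's actual relation encoding and attention formulation may differ from this sketch.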


          Author and article information

          Journal
          Date: 23 October 2020
          Article: arXiv:2010.12423

          License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/

          History
          Custom metadata
          This paper was submitted to ICASSP 2021.
          Categories: cs.LG, cs.SD, eess.AS

          Artificial intelligence, Electrical engineering, Graphics & Multimedia design
