
      Semantically Conditioned LSTM-based Natural Language Generation for Spoken Dialogue Systems

      Preprint


          Abstract

          Natural language generation (NLG) is a critical component of spoken dialogue systems and has a significant impact on both usability and perceived quality. Most NLG systems in common use employ rules and heuristics and tend to generate rigid and stylised responses without the natural variation of human language. They are also not easily scaled to systems covering multiple domains and languages. This paper presents a statistical language generator based on a semantically controlled Long Short-term Memory (LSTM) structure. The LSTM generator can learn from unaligned data by jointly optimising sentence planning and surface realisation using a simple cross-entropy training criterion, and language variation can be easily achieved by sampling from output candidates. Despite using fewer heuristics, an objective evaluation in two differing test domains showed that the proposed method improved performance over previous methods. Human judges scored the LSTM system higher on informativeness and naturalness and overall preferred it to the other systems.
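          To make the idea of a semantically controlled LSTM cell concrete, below is a minimal sketch, in Python/NumPy, of a single generation step: the usual LSTM gates plus an extra "reading" gate that gradually consumes a dialogue-act feature vector as its slots are realised in the output words. This is not the authors' released code; all weight names, shapes, the alpha constant, and the omission of bias terms are illustrative assumptions.

          import numpy as np

          def sigmoid(x):
              return 1.0 / (1.0 + np.exp(-x))

          def sc_lstm_step(w_t, h_prev, c_prev, d_prev, params, alpha=0.5):
              """One semantically conditioned LSTM step (sketch, no bias terms).
              w_t    : embedding of the current word            (n_in,)
              h_prev : previous hidden state                    (n_hid,)
              c_prev : previous memory cell                     (n_hid,)
              d_prev : remaining dialogue-act feature vector    (n_da,)
              params : dict of weight matrices (illustrative names)
              """
              x = np.concatenate([w_t, h_prev])

              i = sigmoid(params["W_i"] @ x)        # input gate
              f = sigmoid(params["W_f"] @ x)        # forget gate
              o = sigmoid(params["W_o"] @ x)        # output gate
              c_hat = np.tanh(params["W_c"] @ x)    # candidate cell value

              # Reading gate: how much of the dialogue act to "spend" at this step.
              r = sigmoid(params["W_wr"] @ w_t + alpha * (params["W_hr"] @ h_prev))
              d = r * d_prev                        # DA features decay as they are realised

              # Cell update: standard LSTM terms plus a contribution from the DA cell.
              c = f * c_prev + i * c_hat + np.tanh(params["W_dc"] @ d)
              h = o * np.tanh(c)
              return h, c, d

          # Tiny random parameters, only to show the call signature.
          n_in, n_hid, n_da = 8, 16, 5
          rng = np.random.default_rng(0)
          params = {name: rng.standard_normal(shape) for name, shape in {
              "W_i": (n_hid, n_in + n_hid), "W_f": (n_hid, n_in + n_hid),
              "W_o": (n_hid, n_in + n_hid), "W_c": (n_hid, n_in + n_hid),
              "W_wr": (n_da, n_in), "W_hr": (n_da, n_hid), "W_dc": (n_hid, n_da),
          }.items()}
          h, c, d = sc_lstm_step(rng.standard_normal(n_in),
                                 np.zeros(n_hid), np.zeros(n_hid),
                                 np.ones(n_da), params)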


          Most cited references (13)


          Backpropagation through time: what it does and how to do it

            A unified architecture for natural language processing


              Speech Recognition with Deep Recurrent Neural Networks

              Recurrent neural networks (RNNs) are a powerful model for sequential data. End-to-end training methods such as Connectionist Temporal Classification make it possible to train RNNs for sequence labelling problems where the input-output alignment is unknown. The combination of these methods with the Long Short-term Memory RNN architecture has proved particularly fruitful, delivering state-of-the-art results in cursive handwriting recognition. However, RNN performance in speech recognition has so far been disappointing, with better results returned by deep feedforward networks. This paper investigates deep recurrent neural networks, which combine the multiple levels of representation that have proved so effective in deep networks with the flexible use of long-range context that empowers RNNs. When trained end-to-end with suitable regularisation, we find that deep Long Short-term Memory RNNs achieve a test set error of 17.7% on the TIMIT phoneme recognition benchmark, which to our knowledge is the best recorded score.
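              A minimal sketch, assuming PyTorch, of the general setup this abstract describes: a stacked (deep) bidirectional LSTM whose per-frame outputs feed a CTC loss so that no frame-level alignment is required. It is not the authors' configuration; layer sizes, feature dimensions, label set, and the dummy batch are placeholders.

              import torch
              import torch.nn as nn

              num_features, hidden, layers, num_labels = 40, 250, 3, 62  # illustrative sizes

              rnn = nn.LSTM(num_features, hidden, num_layers=layers, bidirectional=True)
              proj = nn.Linear(2 * hidden, num_labels + 1)   # +1 for the CTC blank symbol
              ctc = nn.CTCLoss(blank=num_labels)

              # Dummy batch: 100 frames, batch of 4, 40 acoustic features per frame.
              frames = torch.randn(100, 4, num_features)
              targets = torch.randint(0, num_labels, (4, 30))
              frame_lens = torch.full((4,), 100, dtype=torch.long)
              target_lens = torch.full((4,), 30, dtype=torch.long)

              out, _ = rnn(frames)                            # (T, B, 2*hidden)
              log_probs = proj(out).log_softmax(dim=-1)       # per-frame label distributions
              loss = ctc(log_probs, targets, frame_lens, target_lens)
              loss.backward()                                 # end-to-end training signal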

                Author and article information

                Article: arXiv:1508.01745
                History: 2015-08-07, 2015-08-26
                License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
                Custom metadata: To appear in EMNLP 2015
                Subject: cs.CL

                Theoretical computer science
