
      Backward and Forward Language Modeling for Constrained Sentence Generation

      Preprint


          Abstract

          Recent language models, especially those based on recurrent neural networks (RNNs), make it possible to generate natural language from a learned probability distribution. Language generation has wide applications, including machine translation, summarization, question answering, and conversation systems. Existing methods typically learn a joint probability of words conditioned on additional information, which is fed (either statically or dynamically) to the RNN's hidden layer. In many applications, however, we may want to impose hard constraints on the generated text, e.g., that a particular word must appear in the sentence. Unfortunately, existing approaches cannot solve this problem. In this paper, we propose a novel backward and forward language model. Given a specific word, we use RNNs to generate the preceding words and the following words, either simultaneously or asynchronously, resulting in two model variants. In this way, the given word can appear at any position in the sentence. Experimental results show that the generated texts are comparable in quality to those of sequential LMs.
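
          To make the asynchronous variant concrete, the following is a minimal Python sketch of the decoding loop: starting from the constraint word, a backward model extends the sentence leftward, then a forward model extends it rightward. The function names (sample_next, generate_constrained) are hypothetical, and a toy uniform sampler stands in for the trained backward and forward RNNs described in the abstract; this is an illustration of the generation scheme, not the authors' implementation.

              import random

              def sample_next(context, vocab, end_token="</s>"):
                  # Hypothetical sampler: a trained RNN LM would condition on
                  # `context`; this toy version draws uniformly so the sketch
                  # runs end to end.
                  return random.choice(vocab + [end_token])

              def generate_constrained(word, vocab, max_len=10, end="</s>"):
                  # Asynchronous backward/forward generation: grow the sentence
                  # leftward from the given word, then rightward, so `word`
                  # can end up at any position in the final sentence.
                  prefix = []  # words left of `word`, generated right-to-left
                  while len(prefix) < max_len:
                      tok = sample_next([word] + prefix, vocab, end)
                      if tok == end:
                          break
                      prefix.append(tok)
                  suffix = []  # words right of `word`, generated left-to-right
                  while len(suffix) < max_len:
                      tok = sample_next(prefix[::-1] + [word] + suffix, vocab, end)
                      if tok == end:
                          break
                      suffix.append(tok)
                  return prefix[::-1] + [word] + suffix

              random.seed(0)
              print(" ".join(generate_constrained(
                  "translation", ["machine", "neural", "improves", "quality"])))

          In the synchronous variant described in the abstract, the backward and forward models would instead emit one token each per step; the asynchronous version above finishes the prefix before starting the suffix.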


          Author and article information

          Journal: arXiv (preprint server)
          Article type: Preprint
          arXiv ID: 1512.06612
          Record ID: 06223bb7-cf05-48de-80d9-7f0d93605e0f
          History: submitted 2015-12-21; revised 2016-01-03
          License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/ (open access)
          arXiv categories: cs.CL, cs.LG, cs.NE
          Subjects: Theoretical computer science; Neural & evolutionary computing; Artificial intelligence
