      Hierarchical Conflict Propagation: Sequence Learning in a Recurrent Deep Neural Network

      Preprint


          Abstract

          Recurrent neural networks (RNNs) are capable of learning to encode and exploit activation history over an arbitrary timescale. In practice, however, state-of-the-art gradient-descent-based training methods are known to have difficulty learning long-term dependencies. Here, we describe a novel training method that involves concurrent parallel cloned networks, each sharing the same weights, each trained at a different stimulus phase, and each maintaining an independent activation history. Training proceeds by recursively performing batch updates over the parallel clones as activation history is progressively increased. This allows conflicts to propagate hierarchically from short-term contexts toward longer-term contexts until they are resolved. We illustrate the parallel-clones method and hierarchical conflict propagation with a character-level deep RNN tasked with memorizing a paragraph of Moby Dick (by Herman Melville).
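
          As a rough illustration of the idea, the sketch below (written in PyTorch, which the abstract does not specify) trains a single set of character-RNN weights through several clones: each clone reads the text from a different phase offset with its own hidden state, and a batch update is performed over all clones while the truncated history length grows according to a schedule. The toy corpus, clone count, history schedule, and GRU architecture are illustrative assumptions, not details taken from the paper.

              import torch
              import torch.nn as nn

              # Toy corpus standing in for the Moby Dick paragraph used in the paper.
              text = ("Call me Ishmael. Some years ago - never mind how long precisely - "
                      "having little or no money in my purse, and nothing particular to "
                      "interest me on shore, I thought I would sail about a little and "
                      "see the watery part of the world.")
              chars = sorted(set(text))
              stoi = {c: i for i, c in enumerate(chars)}
              data = torch.tensor([stoi[c] for c in text])

              class CharRNN(nn.Module):
                  def __init__(self, vocab, hidden=64):
                      super().__init__()
                      self.embed = nn.Embedding(vocab, hidden)
                      self.rnn = nn.GRU(hidden, hidden, batch_first=True)
                      self.head = nn.Linear(hidden, vocab)
                  def forward(self, x, h=None):
                      z, h = self.rnn(self.embed(x), h)
                      return self.head(z), h

              model = CharRNN(len(chars))            # one weight set shared by every clone
              opt = torch.optim.Adam(model.parameters(), lr=1e-2)
              loss_fn = nn.CrossEntropyLoss()

              n_clones = 4                           # hypothetical clone count
              history_schedule = [2, 4, 8, 16]       # hypothetical history-length schedule
              stride = (len(data) - max(history_schedule) - 1) // n_clones
              offsets = [i * stride for i in range(n_clones)]   # distinct stimulus phases

              for hist in history_schedule:          # history grows only after shorter
                  cursors = list(offsets)            # contexts have been trained on
                  hidden = [None] * n_clones         # independent activation histories
                  for step in range(50):
                      losses = []
                      for c in range(n_clones):
                          if cursors[c] + hist + 1 > len(data):          # end of text:
                              cursors[c], hidden[c] = offsets[c], None   # restart this clone
                          x = data[cursors[c]:cursors[c] + hist].unsqueeze(0)
                          y = data[cursors[c] + 1:cursors[c] + hist + 1]
                          logits, h = model(x, hidden[c])
                          hidden[c] = h.detach()     # carry state forward, truncate gradient
                          cursors[c] += hist
                          losses.append(loss_fn(logits.squeeze(0), y))
                      opt.zero_grad()
                      torch.stack(losses).mean().backward()  # one batch update over all clones
                      opt.step()

          The per-clone hidden-state carry with detached gradients is one plausible reading of "independent activation histories"; the paper itself may handle state and gradient truncation differently.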


          Author and article information

          Date: 2016-02-25
          Article type: Preprint
          arXiv ID: 1602.08118
          License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
          MSC class: 68Txx
          arXiv subject class: cs.LG
          Subject: Artificial intelligence
