
      Comparative Study of CNN and RNN for Natural Language Processing

      Preprint


          Abstract

          Deep neural networks (DNN) have revolutionized the field of natural language processing (NLP). Convolutional neural network (CNN) and recurrent neural network (RNN), the two main types of DNN architectures, are widely explored to handle various NLP tasks. CNN is supposed to be good at extracting position-invariant features and RNN at modeling units in sequence. The state of the art on many NLP tasks often switches due to the battle between CNNs and RNNs. This work is the first systematic comparison of CNN and RNN on a wide range of representative NLP tasks, aiming to give basic guidance for DNN selection.
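To make the contrast concrete, here is a minimal sketch in PyTorch of the two architecture families on a toy classification task. It is illustrative only, not the paper's implementation; the vocabulary size, embedding width, filter count, and GRU size are arbitrary assumptions. The CNN max-pools its convolutional feature maps over all positions, so a matching n-gram contributes the same signal wherever it appears, while the GRU's final hidden state depends on the order in which the tokens are read.

    # Illustrative sketch only (not the paper's code): a minimal CNN and a
    # minimal RNN text classifier side by side. All sizes are assumptions.
    import torch
    import torch.nn as nn

    VOCAB, EMB, CLASSES = 1000, 50, 2  # toy sizes chosen for the example

    class CNNClassifier(nn.Module):
        """Convolution + max-pooling extracts n-gram features wherever
        they occur in the sentence (position-invariant)."""
        def __init__(self):
            super().__init__()
            self.emb = nn.Embedding(VOCAB, EMB)
            self.conv = nn.Conv1d(EMB, 64, kernel_size=3, padding=1)
            self.fc = nn.Linear(64, CLASSES)

        def forward(self, tokens):                # tokens: (batch, seq_len)
            x = self.emb(tokens).transpose(1, 2)  # -> (batch, EMB, seq_len)
            x = torch.relu(self.conv(x))          # n-gram feature maps
            x = x.max(dim=2).values               # max-pool over positions
            return self.fc(x)

    class RNNClassifier(nn.Module):
        """A GRU reads the tokens left to right, so its final hidden
        state depends on the order of the whole sequence."""
        def __init__(self):
            super().__init__()
            self.emb = nn.Embedding(VOCAB, EMB)
            self.gru = nn.GRU(EMB, 64, batch_first=True)
            self.fc = nn.Linear(64, CLASSES)

        def forward(self, tokens):
            _, h = self.gru(self.emb(tokens))     # h: (1, batch, 64)
            return self.fc(h.squeeze(0))

    # Both models map a (batch, seq_len) batch of token ids to class logits.
    batch = torch.randint(0, VOCAB, (4, 20))
    print(CNNClassifier()(batch).shape, RNNClassifier()(batch).shape)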


                Author and article information

Journal: arXiv (preprint)
Published: 2017-02-07
arXiv ID: 1702.01923
License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Notes: 7 pages, 11 figures
Subject: cs.CL (Theoretical computer science)
