
      Deep Learning Based Abstractive Text Summarization: Approaches, Datasets, Evaluation Measures, and Challenges

      Mathematical Problems in Engineering
      Hindawi Limited


          Abstract

In recent years, the volume of textual data has rapidly increased, which has generated a valuable resource for extracting and analysing information. To retrieve useful knowledge within a reasonable time period, this information must be summarised. This paper reviews recent approaches for abstractive text summarisation using deep learning models. In addition, existing datasets for training and validating these approaches are reviewed, and their features and limitations are presented. The Gigaword dataset is commonly employed for single-sentence summary approaches, while the Cable News Network (CNN)/Daily Mail dataset is commonly employed for multisentence summary approaches. Furthermore, the measures that are utilised to evaluate the quality of summarisation are investigated, and the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) metrics ROUGE-1, ROUGE-2, and ROUGE-L are determined to be the most commonly applied. The challenges that are encountered during the summarisation process and the solutions proposed in each approach are analysed. The analysis of these approaches shows that recurrent neural networks with an attention mechanism and long short-term memory (LSTM) are the most prevalent techniques for abstractive text summarisation. The experimental results show that text summarisation with a pretrained encoder model achieved the highest ROUGE-1, ROUGE-2, and ROUGE-L scores (43.85, 20.34, and 39.9, respectively). Furthermore, most abstractive text summarisation models were found to face challenges such as the unavailability of a golden token at testing time, out-of-vocabulary (OOV) words, summary sentence repetition, inaccurate sentences, and fake facts.
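Since ROUGE-1, ROUGE-2, and ROUGE-L are reported above as the dominant evaluation measures, a minimal sketch of how ROUGE-N recall is computed may help make the reported scores concrete. The Python code below is an illustrative approximation assuming whitespace tokenisation and recall-only scoring; it is not the official ROUGE toolkit, and the example summaries are invented for demonstration.

    from collections import Counter

    def ngrams(tokens, n):
        """Return a multiset (Counter) of the n-grams in a token list."""
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    def rouge_n_recall(candidate, reference, n):
        """ROUGE-N recall: overlapping n-grams divided by n-grams in the reference."""
        cand = ngrams(candidate.lower().split(), n)
        ref = ngrams(reference.lower().split(), n)
        overlap = sum((cand & ref).values())  # clipped n-gram overlap
        total = sum(ref.values())
        return overlap / total if total else 0.0

    # Hypothetical example summaries, for illustration only.
    reference = "the cat sat on the mat"
    candidate = "the cat lay on the mat"
    print(rouge_n_recall(candidate, reference, 1))  # ROUGE-1 recall: 5/6 ≈ 0.83
    print(rouge_n_recall(candidate, reference, 2))  # ROUGE-2 recall: 3/5 = 0.60

ROUGE-L, by contrast, scores the longest common subsequence between candidate and reference rather than fixed-length n-grams.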


Most cited references (17)


          An application of recurrent nets to phone probability estimation.

This paper presents an application of recurrent networks for phone probability estimation in large vocabulary speech recognition. The need for efficient exploitation of context information is discussed, a role for which the recurrent net appears suitable. An overview of early developments of recurrent nets for phone recognition is given, along with the more recent improvements that include their integration with Markov models. Recognition results are presented for the DARPA TIMIT and Resource Management tasks, and it is concluded that recurrent nets are competitive with traditional means of performing phone probability estimation.
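As a rough, non-authoritative illustration of the idea summarised above (a recurrent network carrying acoustic context through its hidden state and emitting a probability distribution over phones at each frame), the sketch below implements a minimal Elman-style network in NumPy. The layer sizes, random weights, and input data are assumptions chosen for demonstration and do not reproduce the cited system.

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    class PhoneRNN:
        """Minimal Elman-style recurrent net: each acoustic frame updates a hidden
        state, and a softmax layer maps the state to a distribution over phone classes."""
        def __init__(self, n_features, n_hidden, n_phones, seed=0):
            rng = np.random.default_rng(seed)
            self.W_xh = rng.normal(scale=0.1, size=(n_hidden, n_features))
            self.W_hh = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
            self.W_hy = rng.normal(scale=0.1, size=(n_phones, n_hidden))

        def forward(self, frames):
            h = np.zeros(self.W_hh.shape[0])
            probs = []
            for x in frames:                                  # one feature vector per frame
                h = np.tanh(self.W_xh @ x + self.W_hh @ h)    # context carried through h
                probs.append(softmax(self.W_hy @ h))          # per-frame phone posteriors
            return np.array(probs)

    # Illustrative only: 12-dimensional features, 40 phone classes, 5 random frames.
    net = PhoneRNN(n_features=12, n_hidden=32, n_phones=40)
    posteriors = net.forward(np.random.default_rng(1).normal(size=(5, 12)))
    print(posteriors.shape)        # (5, 40): one phone distribution per frame
    print(posteriors.sum(axis=1))  # each row sums to 1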

            The Meteor metric for automatic evaluation of machine translation


              Introduction to the Special Issue on Summarization


                Author and article information

Journal: Mathematical Problems in Engineering
Publisher: Hindawi Limited
ISSN (print): 1024-123X
ISSN (electronic): 1563-5147
Published: August 24, 2020
Volume: 2020
Pages: 1-29
Affiliations
[1] Princess Sumaya University for Technology, Amman, Jordan
Article
DOI: 10.1155/2020/9365340
ID: a433f1fd-dc36-48bd-8b30-c2426d049143
Copyright: © 2020
License: http://creativecommons.org/licenses/by/4.0/ (CC BY 4.0)

