      Zero-shot Conversational Summarization Evaluations with small Large Language Models

      Preprint


          Abstract

          Large Language Models (LLMs) exhibit powerful summarization abilities. However, their capabilities on conversational summarization remain underexplored. In this work we evaluate LLMs (approx. 10 billion parameters) on conversational summarization and showcase their performance with various prompts. We show that the summaries generated by the models depend on the instructions, and that the performance of LLMs varies across instructions, sometimes resulting in a steep drop in ROUGE scores if prompts are not selected carefully. We also evaluate the models with human evaluations and discuss their limitations on conversational summarization.
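          The prompt sensitivity described above can be illustrated with a minimal ROUGE-1 F1 computation. This is a from-scratch sketch, not the paper's evaluation code; the exact ROUGE configuration, prompts, and summaries used in the paper are not given here, so the example texts below are hypothetical.

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: clipped unigram overlap between candidate and reference."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # matches, clipped per unigram
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Hypothetical reference summary and two model outputs produced under
# different prompt wordings (illustrative only, not from the paper).
reference = "alice and bob agree to meet at noon on friday"
summary_prompt_a = "alice and bob agree to meet friday at noon"
summary_prompt_b = "the two speakers discussed scheduling"

print(rouge1_f1(summary_prompt_a, reference))  # high overlap with reference
print(rouge1_f1(summary_prompt_b, reference))  # no unigram overlap -> 0.0
```

          A summary that paraphrases heavily (as under prompt B) can score near zero on ROUGE even if it is factually reasonable, which is one way unlucky prompt choices produce the steep score drops the abstract reports.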


          Author and article information

          Journal
          29 November 2023
          Article
          2311.18041
          0889d581-cc2e-40a1-9a3e-4568324a4efc

          http://creativecommons.org/licenses/by/4.0/

          History
          Custom metadata
          Accepted at the R0-FoMo workshop at NeurIPS 2023
          cs.CL

          Theoretical computer science
