      Is Open Access

      Experts, Errors, and Context: A Large-Scale Study of Human Evaluation for Machine Translation


          Abstract

          Human evaluation of modern high-quality machine translation systems is a difficult problem, and there is increasing evidence that inadequate evaluation procedures can lead to erroneous conclusions. While there has been considerable research on human evaluation, the field still lacks a commonly accepted standard procedure. As a step toward this goal, we propose an evaluation methodology grounded in explicit error analysis, based on the Multidimensional Quality Metrics (MQM) framework. We carry out the largest MQM research study to date, scoring the outputs of top systems from the WMT 2020 shared task in two language pairs using annotations provided by professional translators with access to full document context. We analyze the resulting data extensively, finding among other results a substantially different ranking of evaluated systems from the one established by the WMT crowd workers, exhibiting a clear preference for human over machine output. Surprisingly, we also find that automatic metrics based on pre-trained embeddings can outperform human crowd workers. We make our corpus publicly available for further research.
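The MQM methodology described in the abstract has annotators mark error spans in a translation with a category and a severity, and a segment's score is then a weighted sum of its error severities (lower is better). A minimal sketch of this kind of weighted error scoring, where the category names and severity weights are illustrative assumptions rather than the study's exact values:

```python
# Sketch of MQM-style scoring: each annotated error carries a severity,
# and a segment's penalty is the sum of severity weights (lower is better).
# The weights and categories below are illustrative assumptions only.

SEVERITY_WEIGHTS = {
    "major": 5.0,
    "minor": 1.0,
    "neutral": 0.0,
}

def mqm_score(errors):
    """Total error penalty for one segment.

    `errors` is a list of (category, severity) tuples produced by an
    annotator; the category is kept for analysis but does not affect
    the score in this simplified sketch.
    """
    return sum(SEVERITY_WEIGHTS[severity] for _category, severity in errors)

# Example annotation for one translated segment:
errors = [("accuracy/mistranslation", "major"), ("fluency/grammar", "minor")]
print(mqm_score(errors))  # → 6.0
```

Averaging such per-segment penalties over a test set gives a system-level score, which is how system rankings like the ones discussed above can be derived from span-level error annotations.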


                Author and article information

                Journal
                Transactions of the Association for Computational Linguistics
                MIT Press
                ISSN: 2307-387X
                December 17 2021
                Volume 9, pages 1460-1474
                Affiliations
                [1 ]Google Research. freitag@google.com
                [2 ]Google Research. fosterg@google.com
                [3 ]Google Research. grangier@google.com
                [4 ]Google Research. vratnakar@google.com
                [5 ]Google Research. qijuntan@google.com
                [6 ]Google Research. wmach@google.com
                Article
                10.1162/tacl_a_00437
                © 2021

                https://creativecommons.org/licenses/by/4.0/
