Translationese in Machine Translation Evaluation

Preprint (Open Access)


Abstract

The term translationese has been used to describe features of translated text that distinguish it from text originally written in a language. In this paper, we provide a detailed analysis of the adverse effects of translationese on machine translation evaluation results. Our analysis shows evidence that text originally written in a given language differs from text translated into that language, and that this difference can negatively impact the accuracy of machine translation evaluations. For this reason, we recommend that reverse-created test data be omitted from future machine translation test sets. In addition, we provide a re-evaluation of a past high-profile machine translation evaluation that claimed human parity of MT, as well as an analysis of the re-evaluations of it that have since been published. We find potential ways of improving the reliability of all three past evaluations. One important issue not previously considered is the statistical power of the significance tests applied in past evaluations that aim to investigate human parity of MT. Since the very aim of such evaluations is to reveal legitimate ties between human and MT systems, power analysis is of particular importance: low power could result in claims of human parity that in fact simply correspond to Type II error. We therefore provide a detailed power analysis of the tests used in such evaluations to indicate a suitable minimum sample size of translations for such studies. Subsequently, since no past evaluation that aimed to investigate claims of human parity ticks all boxes in terms of accuracy and reliability, we rerun the evaluation of the systems claiming human parity. Finally, we provide a comprehensive checklist for future machine translation evaluation.
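
As a rough illustration of the kind of power analysis the abstract describes, the sketch below uses Python's statsmodels library to estimate the minimum number of translation judgments needed for a two-sample significance test to reliably detect a small quality difference between human and machine translation. The effect size, significance level, and power values are illustrative assumptions, not figures taken from the paper.

    # Minimal power-analysis sketch (assumed values, not the paper's).
    # Estimates the minimum sample size per system needed for a
    # two-sample t-test to detect a small difference (Cohen's d = 0.2)
    # between human and machine translation scores.
    from statsmodels.stats.power import TTestIndPower

    n = TTestIndPower().solve_power(
        effect_size=0.2,         # assumed small effect size (Cohen's d)
        alpha=0.05,              # significance level of the test
        power=0.8,               # probability of detecting a true difference
        alternative='two-sided'
    )
    print(f"Minimum sample size per system: {n:.0f}")  # roughly 394

Under these assumptions, roughly 394 judgments per system are required; an evaluation run with far fewer has low statistical power, so a non-significant result (an apparent tie between human and MT) may simply be a Type II error rather than evidence of human parity.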


Author and article information

Journal
24 June 2019
arXiv: 1906.09833

License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/

Custom metadata
17 pages, 8 figures, 9 tables
Subject classes: cs.CL, cs.AI
Theoretical computer science, Artificial intelligence
