
      The practical ethics of bias reduction in machine translation: why domain adaptation is better than data debiasing


          Abstract

This article probes the practical ethical implications of AI system design by reconsidering the important topic of bias in the datasets used to train autonomous intelligent systems. The discussion draws on recent work concerning behaviour-guiding technologies, and it adopts a cautious form of technological utopianism by assuming it is potentially beneficial for society at large if AI systems are designed to be comparatively free from the biases that characterise human behaviour. However, the argument presented here critiques the common, well-intentioned requirement that, in order to achieve this, all such datasets must be debiased prior to training. Focusing specifically on gender bias in Neural Machine Translation (NMT) systems, three automated strategies for the removal of bias are considered – downsampling, upsampling, and counterfactual augmentation – and it is shown that systems trained on datasets debiased using these approaches all achieve general translation performance that is much worse than that of a baseline system. In addition, most of them also perform worse on metrics that quantify the degree of gender bias in the system outputs. By contrast, it is shown that the technique of domain adaptation can be effectively deployed to debias existing NMT systems after they have been fully trained. This enables them to produce translations that are quantitatively far less biased when analysed using gender-based metrics, while also achieving state-of-the-art general performance. It is hoped that the discussion presented here will reinvigorate ongoing debates about how and why bias can be most effectively reduced in state-of-the-art AI systems.
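As an illustration of one of the three strategies the abstract names, counterfactual augmentation duplicates each training sentence containing gendered terms with those terms swapped, so the dataset is balanced across genders. The sketch below is a minimal, hypothetical version of the idea, not the article's actual implementation: the `GENDER_PAIRS` lexicon, whole-token matching, and the function names are illustrative simplifications (real systems need morphology- and language-aware handling; English "her", for instance, maps to "him" or "his" depending on its grammatical role).

```python
# Illustrative sketch of counterfactual data augmentation for gender
# debiasing. The word list and whole-token matching are simplifications:
# production systems require language-specific lexicons and grammatical
# disambiguation (e.g. object vs possessive "her").

GENDER_PAIRS = {
    "he": "she", "she": "he",
    "him": "her",            # object pronoun; "her" is ambiguous, so it is omitted here
    "man": "woman", "woman": "man",
    "actor": "actress", "actress": "actor",
}

def swap_gender(sentence: str) -> str:
    """Swap each gendered token for its counterpart (lowercase tokens only)."""
    return " ".join(GENDER_PAIRS.get(tok, tok) for tok in sentence.split())

def augment(corpus: list[str]) -> list[str]:
    """Append a gender-swapped counterfactual for every sentence that changes."""
    out = list(corpus)
    for sent in corpus:
        swapped = swap_gender(sent)
        if swapped != sent:
            out.append(swapped)
    return out
```

For example, `augment(["he is a doctor", "the sky is blue"])` keeps the original corpus and adds "she is a doctor", balancing the gendered portion of the training distribution. The article's finding is that training on data balanced this way degrades general translation quality, whereas adapting an already-trained system does not.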


Most cited references

• B.J. Fogg, The ethics of persuasive technology (2003)
• Peter Singer, Practical Ethics
• Richard H. Thaler and Cass R. Sunstein, Nudge: Improving Decisions About Health, Wealth, and Happiness

Author and article information

Journal: Ethics and Information Technology (Ethics Inf Technol)
Publisher: Springer Science and Business Media LLC
ISSN: 1388-1957; 1572-8439
Published: March 6, 2021
DOI: 10.1007/s10676-021-09583-1
© 2021. Open access under https://creativecommons.org/licenses/by/4.0
