No Language Left Behind: Scaling Human-Centered Machine Translation

journal-article
NLLB Team [1]
arXiv
Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, I.2.7, 68T50

Abstract

Driven by the goal of eradicating language barriers on a global scale, machine translation has solidified itself as a key focus of artificial intelligence research today. However, such efforts have coalesced around a small subset of languages, leaving behind the vast majority of mostly low-resource languages. What does it take to break the 200 language barrier while ensuring safe, high quality results, all while keeping ethical considerations in mind? In No Language Left Behind, we took on this challenge by first contextualizing the need for low-resource language translation support through exploratory interviews with native speakers. Then, we created datasets and models aimed at narrowing the performance gap between low and high-resource languages. More specifically, we developed a conditional compute model based on Sparsely Gated Mixture of Experts that is trained on data obtained with novel and effective data mining techniques tailored for low-resource languages. We propose multiple architectural and training improvements to counteract overfitting while training on thousands of tasks. Critically, we evaluated the performance of over 40,000 different translation directions using a human-translated benchmark, Flores-200, and combined human evaluation with a novel toxicity benchmark covering all languages in Flores-200 to assess translation safety. Our model achieves an improvement of 44% BLEU relative to the previous state-of-the-art, laying important groundwork towards realizing a universal translation system. Finally, we open source all contributions described in this work, accessible at https://github.com/facebookresearch/fairseq/tree/nllb.
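
The "conditional compute" idea named in the abstract can be sketched in a few lines: a learned gate routes each token to a small subset of expert feed-forward networks, so per-token compute stays roughly constant while total model capacity grows with the number of experts. The PyTorch sketch below is a minimal illustration of a sparsely gated Mixture-of-Experts layer, not the NLLB/fairseq implementation; the class and parameter names (SparseMoE, n_experts, top_k) are hypothetical, and it omits the load-balancing losses and overfitting countermeasures the real model relies on.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    """Minimal sparsely gated Mixture-of-Experts layer (illustrative only).

    A learned gate routes each token to its top-k experts, so per-token
    compute stays roughly constant while total capacity grows with the
    number of experts: the "conditional compute" idea.
    """

    def __init__(self, d_model=512, d_hidden=2048, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(d_model, n_experts)  # router: one score per expert
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, d_hidden),
                nn.ReLU(),
                nn.Linear(d_hidden, d_model),
            )
            for _ in range(n_experts)
        )

    def forward(self, x):
        # x: (n_tokens, d_model); flatten batch and sequence dims first.
        scores = self.gate(x)                           # (n_tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # keep only top-k experts
        weights = F.softmax(weights, dim=-1)            # renormalize kept scores
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                   # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, k:k + 1] * expert(x[mask])
        return out

moe = SparseMoE()
tokens = torch.randn(16, 512)
print(moe(tokens).shape)  # torch.Size([16, 512])

The actual model and training code are in the fairseq branch linked at the end of the abstract.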

Author and article information

Journal
arXiv, July 2022

History
11 July 2022; 12 July 2022; 08 August 2022; 09 August 2022; 25 August 2022; 26 August 2022

Affiliations
[1] NLLB Team

Article
DOI: 10.48550/ARXIV.2207.04672
ID: dfa85759-cbcc-4ff9-92c8-d4c5f9a3c523
190 pages

License
Creative Commons Attribution Share Alike 4.0 International
