      Build it Break it Fix it for Dialogue Safety: Robustness from Adversarial Human Attack

      Preprint

          Abstract

The detection of offensive language in the context of a dialogue has become an increasingly important application of natural language processing. The detection of trolls in public forums (Galán-García et al., 2016), and the deployment of chatbots in the public domain (Wolf et al., 2017) are two examples that show the necessity of guarding against adversarially offensive behavior on the part of humans. In this work, we develop a training scheme for a model to become robust to such human attacks by an iterative build it, break it, fix it strategy with humans and models in the loop. In detailed experiments we show this approach is considerably more robust than previous systems. Further, we show that offensive language used within a conversation critically depends on the dialogue context, and cannot be viewed as a single sentence offensive detection task as in most previous work. Our newly collected tasks and methods will be made open source and publicly available.
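The iterative build it, break it, fix it loop in the abstract can be sketched in a few lines. This is a minimal, hypothetical illustration only: a toy memorising classifier stands in for the paper's neural models, and scripted strings stand in for the human "breakers" recruited in the actual study.

```python
# Hedged sketch of an iterative "build it, break it, fix it" loop.
# All names and data below are illustrative stand-ins, not the paper's method.

def build(examples):
    """Build it: train a (toy) detector. This stand-in model simply
    memorises every message labelled offensive."""
    return {text for text, label in examples if label == "offensive"}

def classify(model, text):
    """Flag a message as offensive if the toy model has memorised it."""
    return "offensive" if text in model else "safe"

def break_it(model, attacks):
    """Break it: adversaries submit offensive messages; keep only the
    ones the current model fails to flag."""
    return [t for t in attacks if classify(model, t) == "safe"]

def fix_it(examples, successful_attacks):
    """Fix it: label the successful attacks as offensive and fold them
    into the training data for the next round."""
    return examples + [(t, "offensive") for t in successful_attacks]

# Round 0: build an initial model from hypothetical seed data.
data = [("you are an idiot", "offensive"), ("have a nice day", "safe")]
model = build(data)

# Round 1: a simulated attacker rephrases the insult to evade the model.
attacks = ["you are a nitwit"]
missed = break_it(model, attacks)   # the attack slips past the round-0 model
data = fix_it(data, missed)
model = build(data)                 # the retrained model now catches it
```

Each round hardens the model against the attacks that defeated the previous one; the paper runs this loop with crowdworkers probing a dialogue safety classifier rather than the degenerate memoriser above.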


Author and article information

Journal
17 August 2019
Article
arXiv: 1908.06083
License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/

Custom metadata
cs.CL

Theoretical computer science
