Open Access

      Lower Bounds for Adversarially Robust PAC Learning

      Preprint


          Abstract

In this work, we initiate a formal study of probably approximately correct (PAC) learning under evasion attacks, where the adversary's goal is to \emph{misclassify} the adversarially perturbed sample point \(\widetilde{x}\), i.e., to achieve \(h(\widetilde{x})\neq c(\widetilde{x})\), where \(c\) is the ground-truth concept and \(h\) is the learned hypothesis. Previous work on PAC learning in the presence of adversarial examples has modeled adversarial examples as corrupted inputs, in which the adversary's goal is to achieve \(h(\widetilde{x}) \neq c(x)\), where \(x\) is the original, untampered instance. These two definitions of adversarial risk coincide for many natural distributions, such as images, but are incomparable in general. We first prove that for many theoretically natural input spaces of high dimension \(n\) (e.g., the isotropic Gaussian in dimension \(n\) under \(\ell_2\) perturbations), if the adversary is allowed to perturb test instances by a sublinear amount \(o(\|x\|)\), then PAC learning requires sample complexity that is exponential in \(n\). This contrasts with results proved in the corrupted-input framework, in which the sample complexity of robust learning is only polynomially larger. We then formalize hybrid attacks, in which the evasion attack is preceded by a poisoning attack. This is perhaps reminiscent of "trapdoor attacks," in which a poisoning phase is also involved, but the evasion phase here uses the error-region definition of risk that aims at misclassifying the perturbed instances. In this case, we show that PAC learning is sometimes impossible altogether, even when it is possible without the attack (e.g., due to bounded VC dimension).
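          To make the distinction between the two risk notions concrete, below is a minimal sketch (not from the paper; the halfspace concept c, hypothesis h, perturbation budget, and randomized \(\ell_2\)-ball search are all illustrative assumptions) that empirically estimates the error-region risk, which penalizes \(h(\widetilde{x})\neq c(\widetilde{x})\), alongside the corrupted-input risk, which penalizes \(h(\widetilde{x})\neq c(x)\), on isotropic Gaussian test points.

```python
import numpy as np

# Illustrative contrast between the two adversarial-risk definitions in the
# abstract. The concept c, hypothesis h, budget, and the crude randomized
# adversary are assumptions for this sketch, not the paper's construction.

rng = np.random.default_rng(0)
n = 2          # input dimension
budget = 0.5   # ell_2 perturbation budget

def c(x):      # ground-truth concept: a fixed halfspace (illustrative)
    return int(x[0] + x[1] > 0)

def h(x):      # learned hypothesis: a slightly different halfspace (illustrative)
    return int(1.2 * x[0] + 0.8 * x[1] > 0.1)

def search_perturbation(x, wants_error, trials=500):
    """Randomized search inside the ell_2 ball of radius `budget` for a point
    where `wants_error` holds; a crude stand-in for an optimal adversary."""
    for _ in range(trials):
        delta = rng.normal(size=x.shape)
        delta *= budget * rng.uniform() / np.linalg.norm(delta)
        if wants_error(x + delta):
            return x + delta
    return x  # no successful perturbation found within the trial budget

def empirical_risks(samples):
    err_region = corrupted = 0
    for x in samples:
        # Error-region adversary: wants h(x~) != c(x~), i.e., the label moves with x~.
        x_er = search_perturbation(x, lambda z: h(z) != c(z))
        err_region += h(x_er) != c(x_er)
        # Corrupted-input adversary: wants h(x~) != c(x), i.e., the label stays frozen at x.
        x_ci = search_perturbation(x, lambda z: h(z) != c(x))
        corrupted += h(x_ci) != c(x)
    m = len(samples)
    return err_region / m, corrupted / m

samples = rng.normal(size=(200, n))   # isotropic Gaussian test points
er, ci = empirical_risks(samples)
print(f"error-region adversarial risk   ~ {er:.2f}")
print(f"corrupted-input adversarial risk ~ {ci:.2f}")
```

          Because the corrupted-input adversary can "succeed" by pushing a point across the true decision boundary while its frozen label stays at \(c(x)\), the two empirical risks generally differ; they coincide only when perturbations never change the true label, as the abstract notes for natural distributions such as images.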


                Author and article information

                Preprint
                13 June 2019
                arXiv:1906.05815

                http://arxiv.org/licenses/nonexclusive-distrib/1.0/

                Custom metadata
                cs.LG cs.CC cs.CR stat.ML

                Theoretical computer science, Security & Cryptology, Machine learning, Artificial intelligence
