      Is Open Access

      Improved Adversarial Robustness by Reducing Open Space Risk via Tent Activations

      Preprint


          Abstract

          Adversarial examples contain small perturbations that can remain imperceptible to human observers but alter the behavior of even the best-performing deep learning models and yield incorrect outputs. Since their discovery, adversarial examples have drawn significant attention in machine learning: researchers try to reveal the reasons for their existence and to improve the robustness of machine learning models to adversarial perturbations. The state-of-the-art defense is adversarial training via projected gradient descent (PGD), which is computationally expensive and very time consuming. We hypothesize that adversarial attacks exploit the open space risk of classic monotonic activation functions. This paper introduces the tent activation function with bounded open space risk and shows that tents make deep learning models more robust to adversarial attacks. We demonstrate on the MNIST dataset that a classifier with tents yields an average accuracy of 91.8% against six white-box adversarial attacks, which is more than 15 percentage points above the state of the art. On the CIFAR-10 dataset, our approach improves the average accuracy against the six white-box adversarial attacks to 73.5%, up from the 41.8% achieved by adversarial training via PGD.
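          The abstract contrasts the tent activation's bounded open space risk with classic monotonic activations such as ReLU. A minimal NumPy sketch of the idea, assuming a tent of the form max(0, δ − |x|) with a width parameter δ (the paper's exact parameterization, e.g. learnable per-unit widths, may differ):

```python
import numpy as np

def tent(x, delta=1.0):
    """Tent-shaped activation: nonzero only on the bounded interval
    (-delta, delta), so the response vanishes far from the origin.
    A monotonic activation like ReLU instead grows without bound,
    leaving 'open space' that adversarial perturbations can exploit.
    The form max(0, delta - |x|) is an illustrative assumption here."""
    return np.maximum(0.0, delta - np.abs(x))

def relu(x):
    # Classic monotonic activation for comparison: unbounded for x > 0.
    return np.maximum(0.0, x)
```

          For example, `tent(0.0)` peaks at `delta`, while any input with `|x| >= delta` is mapped to zero; `relu(x)` keeps growing with `x`, so a large perturbation can always increase its output.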

          Related collections

          Most cited references: 2


          Towards Evaluating the Robustness of Neural Networks


            Open Set Fingerprint Spoof Detection Across Novel Fabrication Materials


              Author and article information

              Journal
              07 August 2019
              Article: 1908.02435
              0b28cd58-174f-48d5-8c11-217cc7140a07
              License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
              Custom metadata: cs.CV cs.LG
              Subjects: Computer vision & Pattern recognition, Artificial intelligence
