Open Access

      Robust Design of Deep Neural Networks against Adversarial Attacks based on Lyapunov Theory

      Preprint


          Abstract

Deep neural networks (DNNs) are vulnerable to subtle adversarial perturbations applied to the input. These perturbations, though imperceptible, can easily mislead a DNN. In this work, we take a control-theoretic approach to the problem of robustness in DNNs. We treat each individual layer of the DNN as a nonlinear dynamical system and use Lyapunov theory to prove stability and robustness locally; we then prove stability and robustness globally for the entire DNN. We develop empirically tight bounds on the response of the output layer, or of any hidden layer, to adversarial perturbations added to the input or to the inputs of hidden layers. Recent works have proposed spectral norm regularization as a solution for improving robustness against l2 adversarial attacks. Our results give new insight into how spectral norm regularization mitigates adversarial effects. Finally, we evaluate the power of our approach on a variety of data sets and network architectures, and against several well-known adversarial attacks.
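The spectral-norm connection mentioned in the abstract has a compact classical form: for a feed-forward network whose activations are 1-Lipschitz (e.g. ReLU), the l2 deviation of the output under an input perturbation d is at most the product of the layers' spectral norms times ||d||_2. The sketch below illustrates only this standard product bound, not the paper's Lyapunov-based analysis (whose bounds the abstract describes as empirically tight); the function names and the toy three-layer network are illustrative assumptions.

```python
# Minimal sketch of the classical layer-wise l2 bound (NOT the paper's
# Lyapunov-based bound): with 1-Lipschitz activations such as ReLU,
#   ||f(x + d) - f(x)||_2 <= (prod_i ||W_i||_2) * ||d||_2.
# Names and the toy network below are illustrative.
import numpy as np

def spectral_norm(W, n_iters=50):
    """Estimate ||W||_2 (largest singular value) by power iteration."""
    rng = np.random.default_rng(0)
    v = rng.standard_normal(W.shape[1])
    v /= np.linalg.norm(v)
    u = W @ v
    for _ in range(n_iters):
        u = W @ v
        u /= np.linalg.norm(u)
        v = W.T @ u
        v /= np.linalg.norm(v)
    return float(u @ (W @ v))

def l2_output_bound(weights, eps):
    """Bound the output deviation for any perturbation ||d||_2 <= eps."""
    lipschitz = 1.0
    for W in weights:
        lipschitz *= spectral_norm(W)  # global bound = product of layer norms
    return lipschitz * eps

# Toy 784 -> 128 -> 64 -> 10 network with random weights.
rng = np.random.default_rng(1)
weights = [rng.standard_normal((128, 784)) * 0.04,
           rng.standard_normal((64, 128)) * 0.09,
           rng.standard_normal((10, 64)) * 0.12]
print("worst-case output change for eps = 0.1:", l2_output_bound(weights, eps=0.1))
```

Spectral norm regularization penalizes exactly these per-layer norms during training, shrinking the product in the bound, which is consistent with the abstract's claim that it mitigates adversarial effects.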


                Author and article information

Published: 11 November 2019
arXiv ID: 1911.04636
Record ID: bb03d9e8-846f-41b0-add4-e071757c7e1d
License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Subject classes: cs.LG, cs.SY, eess.SY, stat.ML
Keywords: Performance, Systems & Control, Machine learning, Artificial intelligence
