      Unifying Adversarial Training Algorithms with Flexible Deep Data Gradient Regularization

      Preprint


          Abstract

          We present DataGrad, a general back-propagation-style training procedure for deep neural architectures that uses a deep Jacobian-based penalty as a regularizer. It can be viewed as a deep extension of the layerwise contractive auto-encoder penalty. More importantly, it unifies previous proposals for adversarial training of deep neural nets, including directly modifying the gradient, training on a mix of original and adversarial examples, using contractive penalties, and approximately optimizing constrained adversarial objective functions. In an experiment using a Deep Sparse Rectifier Network, we find that the deep Jacobian regularization of DataGrad (which comes in L1 and L2 flavors) outperforms traditional L1 and L2 regularization both on the original dataset and on adversarial examples.
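          The penalty described in the abstract is the norm of the gradient of the training loss taken with respect to the input data, added to the usual objective so that back-propagation runs through the network a second time. A minimal sketch of that idea follows, assuming a PyTorch setting; the function name datagrad_loss, the weight lam, and the choice of cross-entropy loss are illustrative assumptions, not the authors' implementation.

              import torch
              import torch.nn.functional as F

              def datagrad_loss(model, x, y, lam=0.01, p=2):
                  # Ordinary supervised loss on the clean inputs.
                  x = x.clone().requires_grad_(True)
                  loss = F.cross_entropy(model(x), y)
                  # The "data gradient": gradient of the loss w.r.t. the inputs.
                  # create_graph=True keeps this gradient differentiable, so
                  # minimizing the penalty also updates the model weights.
                  (data_grad,) = torch.autograd.grad(loss, x, create_graph=True)
                  # L1 or L2 flavor of the Jacobian-based penalty.
                  penalty = data_grad.abs().sum() if p == 1 else data_grad.pow(2).sum()
                  return loss + lam * penalty

          Setting p=1 or p=2 selects the L1 or L2 flavor mentioned in the abstract, and lam trades off fit on the training data against flatness of the loss surface around each training point.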


          Author and article information

          Journal: arXiv (preprint)
          arXiv ID: 1601.07213
          Record ID: 5b0d2bf1-b361-4737-b7ce-3b4d829f1301
          Open access: yes
          License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
          History: 2016-01-26; 2016-02-09
          Subject classes: cs.LG, cs.NE
          Keywords: Neural & Evolutionary Computing, Artificial Intelligence
