
      Adversarial Examples on Discrete Sequences for Beating Whole-Binary Malware Detection

      Preprint


          Abstract

          In recent years, deep learning has achieved performance breakthroughs in many applications, such as image detection, image segmentation, pose estimation, and speech recognition. It has also been applied successfully to malware detection. However, this comes with a major concern: deep networks have been found to be vulnerable to adversarial examples. Successful attacks have proved highly effective, especially in the domains of images and speech, where small perturbations to the input signal do not change how it is perceived by humans but greatly affect the classification of the model under attack. Our goal is to modify a malicious binary so that it is detected as benign while preserving its original functionality. In contrast to images or speech, small modifications to the bytes of a binary lead to significant changes in its functionality. We introduce a novel approach to generating adversarial examples for attacking a whole-binary malware detector: we append to the binary file a small section containing a selected sequence of bytes that steers the network's prediction from malicious to benign with high confidence. We applied this approach to a CNN-based malware detection model and achieved extremely high attack success rates.
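          The core idea of the attack — appending a crafted byte sequence that flips the detector's decision without touching the executable bytes — can be illustrated with a minimal sketch. This is not the paper's actual method (which optimizes the appended bytes against the CNN's gradients in embedding space); here `detector_score` is a hypothetical stand-in for the model, and a simple greedy black-box search chooses the appended bytes.

```python
def detector_score(data: bytes) -> float:
    # Hypothetical stand-in for a CNN malware detector: returns a
    # "maliciousness" score in [0, 1]. This toy heuristic scores the
    # fraction of high-valued bytes; a real attack queries the model.
    if not data:
        return 0.0
    return sum(1 for b in data if b >= 0x80) / len(data)

def append_adversarial_payload(binary: bytes, payload_len: int = 64,
                               threshold: float = 0.5) -> bytes:
    """Greedily choose appended bytes that lower the detector score.

    The payload is appended past the end of the original file and is
    never executed, so the program's functionality is preserved.
    """
    payload = bytearray([0x80] * payload_len)
    for i in range(payload_len):
        best_b = payload[i]
        best_s = detector_score(binary + bytes(payload))
        for b in (0x00, 0x20, 0x41, 0x7F):  # candidate byte values
            payload[i] = b
            s = detector_score(binary + bytes(payload))
            if s < best_s:
                best_b, best_s = b, s
        payload[i] = best_b
        if best_s < threshold:  # stop once classified benign
            break
    return binary + bytes(payload)

# Toy "malicious" binary: 100 bytes the stand-in detector flags.
malware = bytes([0x90] * 100)
adv = append_adversarial_payload(malware)
```

Note that `adv[:len(malware)]` is byte-for-byte identical to the original file, so only the appended section changes, and `detector_score(adv)` is strictly lower than `detector_score(malware)`.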


                Author and article information

                Journal: arXiv preprint
                Date: 13 February 2018
                Article: 1802.04528
                License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
                Subjects: cs.LG, cs.CR
