Open Access

      Explaining Vulnerabilities of Deep Learning to Adversarial Malware Binaries

      Preprint


          Abstract

          Recent work has shown that deep-learning algorithms for malware detection are also susceptible to adversarial examples, i.e., carefully-crafted perturbations of input malware that cause misclassification. Although this finding has called their suitability for this task into question, it is not yet clear why such algorithms are so easily fooled in this particular application domain. In this work, we take a first step towards answering this question by leveraging explainable machine-learning techniques developed to interpret the black-box decisions of deep neural networks. In particular, we use a technique known as feature attribution to identify the input features that contribute most to each decision, and we adapt it to provide meaningful explanations for the classification of malware binaries. We find that a recently-proposed convolutional neural network does not learn any meaningful characteristics for malware detection from the data and text sections of executable files, but rather tends to discriminate between benign and malicious samples based on characteristics of the file header. Based on this finding, we propose a novel attack algorithm that generates adversarial malware binaries by changing only a few tens of bytes in the file header. Compared with other state-of-the-art attack algorithms, our attack does not require injecting any padding bytes at the end of the file, and it is far more efficient, as it manipulates far fewer bytes.
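          To illustrate the feature-attribution idea mentioned in the abstract, the sketch below computes a gradient-times-input attribution for a toy linear scoring function. This is only a minimal, self-contained illustration under assumed inputs (random weights and features standing in for a trained network and a real binary), not the authors' exact method or model.

```python
import numpy as np

# Toy "detector": a linear score f(x) = w . x over byte-derived features.
# For a linear model the gradient of f w.r.t. x is w, so the
# gradient-times-input attribution of feature i is simply w[i] * x[i].

rng = np.random.default_rng(0)
n_features = 8
w = rng.normal(size=n_features)   # stand-in for learned model weights
x = rng.normal(size=n_features)   # feature vector of one sample

score = float(w @ x)              # classifier output
attribution = w * x               # per-feature relevance scores

# Completeness check (holds exactly in the linear case):
# the attributions sum to the model's output score.
assert np.isclose(attribution.sum(), score)

# The most influential features are those with the largest |attribution|;
# in the paper, relevance for the studied CNN concentrated in the header.
top = np.argsort(-np.abs(attribution))[:3]
print("top features:", top.tolist())
```

For a deep network the gradient is no longer constant, so attribution methods typically evaluate or average gradients around the input; the linear case above just makes the relevance-to-output relationship exact and easy to verify.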


                Author and article information

                Preprint, 11 January 2019
                arXiv: 1901.03583
                License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
                Subjects: cs.CR; Security & Cryptology
