
      Generating 3D Adversarial Point Clouds

      Preprint


          Abstract

Machine learning models, especially deep neural networks (DNNs), have been successfully applied to a wide variety of applications. However, DNNs are known to be vulnerable to adversarial examples: carefully crafted instances designed to cause a learning model to make incorrect predictions. Adversarial examples have recently been studied extensively for 2D image, natural language, and audio data, but the robustness of 3D models has not yet been explored. Given the safety-critical applications of 3D models, such as PointNet for Lidar data in autonomous driving, it is important to understand their vulnerability under various adversarial attacks. Because of the special format of point cloud data, generating adversarial examples in point cloud space is challenging. In this work, we propose novel algorithms to generate adversarial point clouds against PointNet, the most widely used model for point cloud data. We propose two types of attacks: unnoticeable adversarial point clouds, and manufacturable adversarial point clusters for physical attacks. For unnoticeable point clouds, we either shift existing points or add new points negligibly, so that the perturbation remains hard to perceive. For physical attacks, we generate a small number of explicit "manufacturable adversarial point clusters," which are noticeable but form meaningful clusters; the goal is to 3D-print the synthesized objects and attach them to the original object. In addition, we propose 7 perturbation measurement metrics tailored to the different attacks and conduct extensive experiments evaluating the proposed algorithms on the ModelNet40 dataset. Overall, our attack algorithms achieve about a 100% success rate for all targeted attacks.
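          The point-shifting attack described above can be viewed as a constrained optimization: find small per-point offsets that push the classifier's prediction toward a chosen target class while keeping a perturbation metric small. As a rough illustration only (not the authors' implementation), here is a minimal PyTorch sketch of such a targeted attack; the model interface, the trade-off weight `lam`, and all hyperparameters are assumptions.

          ```python
          import torch
          import torch.nn.functional as F

          def targeted_point_shift_attack(model, points, target_class,
                                          steps=200, lr=0.01, lam=1.0):
              """Sketch of a targeted point-perturbation attack.

              Learns per-point offsets `delta` for an (N, 3) point cloud,
              trading off misclassification toward `target_class` against
              the L2 norm of the perturbation (a stand-in for the paper's
              perturbation metrics). Assumes `model` maps a (B, N, 3)
              batch of point clouds to (B, num_classes) logits.
              """
              delta = torch.zeros_like(points, requires_grad=True)
              optimizer = torch.optim.Adam([delta], lr=lr)
              target = torch.tensor([target_class])

              for _ in range(steps):
                  logits = model((points + delta).unsqueeze(0))
                  # Encourage the target class; penalize large point shifts.
                  loss = F.cross_entropy(logits, target) + lam * delta.norm()
                  optimizer.zero_grad()
                  loss.backward()
                  optimizer.step()

              return (points + delta).detach()
          ```

          A Carlini-Wagner-style variant would replace the cross-entropy term with a margin loss over the logits and binary-search the trade-off weight; the structure of the optimization loop stays the same.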


                Author and article information

                Journal: arXiv (preprint)
                Published: 19 September 2018
                Article ID: arXiv:1809.07016
                License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
                Subject classes: cs.CR, cs.CV, cs.LG
                Keywords: Computer vision & pattern recognition; Security & cryptology; Artificial intelligence
