
      A Defense Framework for Privacy Risks in Remote Machine Learning Service

      Security and Communication Networks
      Hindawi Limited

          Abstract

In recent years, machine learning approaches have been widely adopted for many applications, including classification. Machine learning models that deal with collected sensitive data are usually trained on a remote public cloud server, for instance, in a machine learning as a service (MLaaS) system. In this setting, users upload their local data and utilize the server's computation capability to train models, or users directly access models trained by the MLaaS provider. Unfortunately, recent works reveal that both the curious server (which trains the model with users' sensitive local data and is curious to learn information about individuals) and the malicious MLaaS user (who abuses queries to the MLaaS system) pose privacy risks. The adversarial method, as one typical mitigation, has been studied in several recent works. However, most of them focus on privacy preservation against the malicious user; in other words, they commonly treat the data owner and the model provider as one role. Under this assumption, the privacy leakage risks posed by the curious server are neglected. Differential privacy methods can defend against privacy threats from both the curious server and the malicious MLaaS user by directly adding noise to the training data. Nonetheless, differential privacy heavily decreases the classification accuracy of the target model. In this work, we propose a generic privacy-preserving framework based on the adversarial method that defends against both the curious server and the malicious MLaaS user. The framework can be instantiated with several adversarial algorithms to generate adversarial examples directly from data owners' original data, thereby hiding sensitive information in the original data. We then explore the constraint conditions of this framework, which help us find the balance between privacy protection and model utility. Experimental results show that our defense framework with the AdvGAN method is effective against membership inference attacks (MIA), and our defense framework with the FGSM (fast gradient sign method) can protect sensitive data from direct content exposure attacks. In addition, our method achieves a better privacy-utility balance than the existing method.

Most cited references (5)

          A guide to deep learning in healthcare

          Here we present deep-learning techniques for healthcare, centering our discussion on deep learning in computer vision, natural language processing, reinforcement learning, and generalized methods. We describe how these computational techniques can impact a few key areas of medicine and explore how to build end-to-end systems. Our discussion of computer vision focuses largely on medical imaging, and we describe the application of natural language processing to domains such as electronic health record data. Similarly, reinforcement learning is discussed in the context of robotic-assisted surgery, and generalized deep-learning methods for genomics are reviewed.

            Our Data, Ourselves: Privacy Via Distributed Noise Generation

Dropout: A simple way to prevent neural networks from overfitting

                Author and article information

Journal
Security and Communication Networks
Hindawi Limited
ISSN (electronic): 1939-0122
ISSN (print): 1939-0114
Published: June 18, 2021
Volume: 2021
Pages: 1-13
Affiliations
[1] School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
[2] No. 30 Institute of CETC, Chengdu, China
                Article
DOI: 10.1155/2021/9924684
© 2021
License: https://creativecommons.org/licenses/by/4.0/ (CC BY 4.0)
