

      "The Norm Culture" advocates for the introduction of a security layer in continuously learning AI models to protect against data and label poisoning attacks

      Preprint
      In review
      research-article
      ScienceOpen Preprints
      ScienceOpen
      Norm Culture Method, Classification, AI Attacks, training data poisoning, label poisoning

            Abstract

            This paper presents a method to protect continuously learning AI models against data and label poisoning attacks. The Norm Culture method posits that each class in an image classification problem possesses an inherent structure that serves as a primary defense against attacks, such as data or label poisoning, that can corrupt new training and testing samples during the parameter-update phase of an AI predictive model within a conventional deep learning framework. The method requires calculating three elements from the essential training and testing samples: first, the flattened matrix representing the class image; second, the class alpha, a scalar representing the weight norm of the class; and third, the most recently validated AI predictive model. Experimental results on a binary image classification dataset from the health domain indicate that the proposed method effectively identifies training and testing sample images compromised by either type of attack (data poisoning or label poisoning). There is also potential for enhancing the method within the mathematical functions of the AI framework.
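
            The abstract names the three stored elements but not how they are combined, so the following is only a minimal, hypothetical Python sketch of how such a screening step could work. The function names, the use of the mean flattened class image as the "flattened matrix", the choice of the L2 norm for the class alpha, and the tolerance-based acceptance rule are all illustrative assumptions, not the paper's exact formulation.

            # Hypothetical sketch of a Norm Culture-style screening step.
            # Assumptions (not from the paper): the class "flattened matrix" is the
            # mean flattened trusted image of the class, the class alpha is the L2
            # norm of that mean, and a candidate is flagged when its own norm
            # deviates from alpha by more than a tolerance (data-poisoning check)
            # or when the latest validated model disagrees with the supplied
            # label (label-poisoning check).
            import numpy as np

            def class_elements(images):
                """images: (n, h, w) array of one class's trusted samples."""
                flat = images.reshape(len(images), -1)   # flattened matrices
                class_matrix = flat.mean(axis=0)         # class representative
                alpha = np.linalg.norm(class_matrix)     # class alpha (scalar)
                return class_matrix, alpha

            def screen_sample(image, label, elements, validated_model, tol=0.15):
                """Screen a candidate (image, label) pair before it enters training."""
                class_matrix, alpha = elements[label]
                sample_norm = np.linalg.norm(image.ravel())
                norm_ok = abs(sample_norm - alpha) / alpha <= tol   # data check
                label_ok = validated_model(image) == label          # label check
                return norm_ok and label_ok

            # Usage with toy data and a stand-in "most recently validated model":
            rng = np.random.default_rng(0)
            trusted = {0: rng.random((20, 8, 8)), 1: rng.random((20, 8, 8)) + 1.0}
            elements = {c: class_elements(x) for c, x in trusted.items()}
            model = lambda img: int(img.mean() > 1.0)    # placeholder classifier
            candidate = rng.random((8, 8)) + 1.0
            print(screen_sample(candidate, 1, elements, model))  # True if it passes

            The tolerance and the norm-based test stand in for whatever acceptance rule the paper actually defines; the sketch only illustrates that the three stored elements suffice to screen a candidate sample before a parameter update.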

            Content

            Author and article information

            Journal
            ScienceOpen Preprints
            ScienceOpen
            7 June 2024
            Affiliations
            [1] Independent Researcher, Istanbul, Türkiye
            Author notes
            Author information
            https://orcid.org/0009-0004-4602-4101
            Article
            10.14293/PR2199.000907.v1
            a78ece37-7f93-4085-bfb3-b690bf2ece04

            This work has been published open access under Creative Commons Attribution License CC BY 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Conditions, terms of use and publishing policy can be found at www.scienceopen.com.

            History: 7 June 2024
            Data availability
            The datasets generated and/or analysed during the current study are available in the repository: https://www.kaggle.com/c/rsna-pneumonia-detection-challenge/overview/description

            Categories
            Computer vision & Pattern recognition, Applied computer science, Security & Cryptology, Artificial intelligence
