
      Vulnerabilities of Connectionist AI Applications: Evaluation and Defense

      Review article


          Abstract

          This article addresses the IT security of connectionist artificial intelligence (AI) applications, focusing on threats to integrity, one of the three IT security goals. Such threats are particularly relevant for prominent AI applications in computer vision. To present a holistic view of the IT security goal of integrity, many additional aspects, such as interpretability, robustness and documentation, are taken into account. A comprehensive list of threats and possible mitigations is presented by reviewing the state-of-the-art literature. AI-specific vulnerabilities, such as adversarial attacks and poisoning attacks, are discussed in detail, together with the key factors underlying them. Additionally, and in contrast to previous reviews, the whole AI life cycle is analyzed with respect to vulnerabilities, including the planning, data acquisition, training, evaluation and operation phases. The discussion of mitigations is likewise not restricted to the level of the AI system itself, but advocates viewing AI systems in the context of their life cycles and their embedding in larger IT infrastructures and hardware devices. Based on this, and on the observation that adaptive attackers may circumvent any single published AI-specific defense to date, the article concludes that single protective measures are not sufficient; rather, multiple measures on different levels must be combined to achieve a minimum level of IT security for AI applications.
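          The adversarial attacks surveyed in the article can be illustrated with a minimal sketch of a fast-gradient-sign (FGSM-style) perturbation against a toy logistic-regression classifier. The model, its weights, and all numeric values below are illustrative assumptions chosen for this sketch; they are not taken from the article, which analyzes such attacks in the context of deep neural networks.

```python
import numpy as np

# Toy linear classifier: p(y=1 | x) = sigmoid(w.x + b)
# Weights are arbitrary illustrative values, not from the article.
w = np.array([2.0, -3.0, 1.0])
b = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    """Model confidence that x belongs to class 1."""
    return sigmoid(w @ x + b)

# A clean input the model confidently assigns to class 1.
x = np.array([1.0, -0.5, 0.2])
y = 1  # true label

# For logistic regression with cross-entropy loss, the gradient of the
# loss with respect to the input has the closed form (p - y) * w.
p = predict(x)
grad_x = (p - y) * w

# FGSM perturbation: a small step in the direction of the gradient's sign,
# bounded in the max-norm by eps (illustrative magnitude).
eps = 1.0
x_adv = x + eps * np.sign(grad_x)

print(predict(x))      # high confidence on the clean input
print(predict(x_adv))  # confidence collapses on the perturbed input
```

The attack exploits that even a small, sign-only step along the loss gradient moves the input across the decision boundary, which is one of the key integrity threats the article discusses.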

          Most cited references: 80

          • Deep Residual Learning for Image Recognition
          • Gradient-based learning applied to document recognition
          • Multilayer feedforward networks are universal approximators

                Author and article information

                Journal: Frontiers in Big Data (Front. Big Data)
                Publisher: Frontiers Media S.A.
                ISSN: 2624-909X
                Published: 22 July 2020
                Volume: 3, Article: 23
                Affiliations: Federal Office for Information Security, Bonn, Germany
                Author notes
                Author notes

                Edited by: Xue Lin, Northeastern University, United States

                Reviewed by: Ping Yang, Binghamton University, United States; Fuxun Yu, George Mason University, United States

                *Correspondence: Christian Berghoff christian.berghoff@bsi.bund.de

                This article was submitted to Cybersecurity and Privacy, a section of the journal Frontiers in Big Data

                †These authors have contributed equally to this work

                Article
                DOI: 10.3389/fdata.2020.00023
                PMCID: 7931957
                Copyright © 2020 Berghoff, Neu and von Twickel.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

                History
                Received: 20 March 2020
                Accepted: 10 June 2020
                Page count
                Figures: 8, Tables: 1, Equations: 0, References: 113, Pages: 18, Words: 15260
                Categories
                Big Data
                Review

                artificial intelligence, neural network, IT security, interpretability, certification, adversarial attack, poisoning attack
