
      Robust SAR Automatic Target Recognition Based on Transferred MS-CNN with L2-Regularization

      research-article

          Abstract

          Although Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR) via Convolutional Neural Networks (CNNs) has made great progress with deep learning, some key issues remain unsolved due to the lack of sufficient samples and robust models. In this paper, we propose an efficient transferred Max-Slice CNN (MS-CNN) with L2-regularization for SAR ATR, which enriches the features and recognizes targets with superior performance. Firstly, a data amplification method is presented to reduce computational time and enrich the raw features of SAR targets. Secondly, the proposed MS-CNN framework with L2-regularization is trained to extract robust features, where the L2-regularization is incorporated to avoid overfitting and further optimize the proposed model. Thirdly, transfer learning is introduced to enhance feature representation and discrimination, which boosts the performance and robustness of the proposed model on small samples. Finally, various activation functions and dropout strategies are evaluated to further improve recognition performance. Extensive experiments demonstrate that the proposed method not only outperforms other state-of-the-art methods on the public and extended MSTAR datasets but also obtains good performance on random small datasets.
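          The L2-regularization the abstract describes amounts to adding a penalty on the squared magnitude of the network weights to the training loss. A minimal sketch of that objective (the function names and toy weights here are illustrative, not the authors' code):

```python
import numpy as np

def l2_penalty(weights, lam):
    """L2 (weight-decay) penalty: lam times the sum of squared weights."""
    return lam * sum(float(np.sum(w ** 2)) for w in weights)

def regularized_loss(data_loss, weights, lam=1e-3):
    # Total training objective: empirical loss plus the L2 penalty,
    # which discourages large weights and hence overfitting.
    return data_loss + l2_penalty(weights, lam)

# Toy weight matrices standing in for the layers of a CNN.
weights = [np.array([[1.0, -2.0], [0.5, 0.0]]), np.array([3.0])]
print(regularized_loss(0.25, weights, lam=0.1))  # 0.25 + 0.1 * 14.25 = 1.675
```

          In gradient-based training this penalty simply adds 2 * lam * w to each weight's gradient, shrinking weights toward zero at every step.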

          Related collections

          Most cited references (44)


          One-shot learning of object categories.

          Learning visual models of object categories notoriously requires hundreds or thousands of training examples. We show that it is possible to learn much information about a category from just one, or a handful, of images. The key insight is that, rather than learning from scratch, one can take advantage of knowledge coming from previously learned categories, no matter how different these categories might be. We explore a Bayesian implementation of this idea. Object categories are represented by probabilistic models. Prior knowledge is represented as a probability density function on the parameters of these models. The posterior model for an object category is obtained by updating the prior in the light of one or more observations. We test a simple implementation of our algorithm on a database of 101 diverse object categories. We compare category models learned by an implementation of our Bayesian approach to models learned by Maximum Likelihood (ML) and Maximum A Posteriori (MAP) methods. We find that on a database of more than 100 categories, the Bayesian approach produces informative models when the number of training examples is too small for other methods to operate successfully.
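            The prior-to-posterior update this abstract describes can be illustrated with the simplest conjugate case, a Beta prior over a Bernoulli parameter (this toy example is ours, not the paper's model, which uses densities over the parameters of probabilistic object models):

```python
def beta_posterior(prior_a, prior_b, successes, failures):
    # Conjugate Beta-Bernoulli update: the prior Beta(a, b) encodes
    # knowledge carried over from previously learned categories; each
    # new observation shifts the posterior toward the data.
    return prior_a + successes, prior_b + failures

def posterior_mean(a, b):
    return a / (a + b)

# Informative prior, as if borrowed from related categories ...
a, b = 8.0, 2.0
# ... updated with a single positive example of the new category.
a, b = beta_posterior(a, b, successes=1, failures=0)
print(posterior_mean(a, b))  # 9/11, about 0.818
```

            With an informative prior, one observation is enough to yield a usable model, which is the essence of the one-shot argument.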

            Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning

            Very deep convolutional networks have been central to the largest advances in image recognition performance in recent years. One example is the Inception architecture, which has been shown to achieve very good performance at relatively low computational cost. Recently, the introduction of residual connections in conjunction with a more traditional architecture has yielded state-of-the-art performance in the 2015 ILSVRC challenge; its performance was similar to the latest generation Inception-v3 network. This raises the question of whether there is any benefit in combining the Inception architecture with residual connections. Here we give clear empirical evidence that training with residual connections accelerates the training of Inception networks significantly. There is also some evidence of residual Inception networks outperforming similarly expensive Inception networks without residual connections by a thin margin. We also present several new streamlined architectures for both residual and non-residual Inception networks. These variations improve the single-frame recognition performance on the ILSVRC 2012 classification task significantly. We further demonstrate how proper activation scaling stabilizes the training of very wide residual Inception networks. With an ensemble of three residual networks and one Inception-v4, we achieve 3.08 percent top-5 error on the test set of the ImageNet classification (CLS) challenge.
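            The residual connection and the activation scaling this abstract mentions fit in a few lines; a toy NumPy sketch (function names and the scale value are illustrative, not the paper's implementation):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2, scale=0.1):
    # Shortcut connection: output = x + scale * F(x), where F is a small
    # two-layer transform. Down-scaling the residual branch before the
    # addition is the activation scaling credited with stabilizing the
    # training of very wide residual networks.
    fx = relu(x @ w1) @ w2
    return x + scale * fx

x = np.array([1.0, 2.0])
eye = np.eye(2)
print(residual_block(x, eye, eye))  # [1.1 2.2]
```

            Because the identity path bypasses F entirely, gradients flow through the addition unchanged, which is why residual connections accelerate the training of deep stacks.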

              Introduction to multi-layer feed-forward neural networks


                Author and article information

                Contributors
                Journal
                Comput Intell Neurosci
                CIN
                Computational Intelligence and Neuroscience
                Hindawi
                1687-5265
                1687-5273
                2019
                15 November 2019
                Volume 2019
                Article ID 9140167
                Affiliations
                1Department of Intelligent Manufacturing, Wuyi University, Jiangmen 529020, China
                2School of Electronics and Information Engineering, Beihang University, Beijing 100191, China
                3Dipartimento di Informatica, Universita' Degli Studi di Milano, Via Celoria 18, 20133 Milan, Italy
                Author notes

                Academic Editor: Amparo Alonso-Betanzos

                Author information
                https://orcid.org/0000-0003-0154-9743
                https://orcid.org/0000-0003-2038-5844
                https://orcid.org/0000-0002-7707-0045
                https://orcid.org/0000-0002-6418-7316
                https://orcid.org/0000-0002-7559-0637
                Article
                10.1155/2019/9140167
                6930780
                58ad956d-0448-40d0-84cf-be933b15a6e4
                Copyright © 2019 Yikui Zhai et al.

                This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

                History
                Received: 13 June 2019
                Accepted: 21 August 2019
                Funding
                Funded by: National Natural Science Foundation of China
                Award ID: 61771347
                Funded by: Characteristic Innovation Project of Guangdong Province
                Award ID: 2017KTSCX181
                Funded by: Young Innovative Talents Project of Guangdong Province
                Award ID: 2017KQNCX206
                Funded by: Jiangmen Science and Technology Project
                Award ID: 268
                Funded by: Guangdong Science and Technology Plan Project
                Award ID: 2017A010101019
                Funded by: Wuyi University
                Award ID: 2015zk11
                Funded by: Opening Project of GuangDong Province Key Laboratory of Information Security Technology
                Award ID: 2017B030314131
                Funded by: 2018 Opening Project of GuangDong Province Key Laboratory of Digital Signal and Image Processing
                Categories
                Research Article

                Neurosciences
