      Is Open Access

      Classification of Digital Modulated COVID-19 Images in the Presence of Channel Noise Using 2D Convolutional Neural Networks

      Wireless Communications and Mobile Computing
      Hindawi Limited


          Abstract

          The wireless environment poses a significant challenge to signal propagation. Effects such as multipath scattering, noise, degradation, distortion, attenuation, and fading adversely affect the transmitted signal. Deep learning techniques can be used to differentiate among differently modulated signals for reliable detection in a communication system. This study aims to distinguish COVID-19 disease images that have been modulated by different digital modulation schemes, passed through different noise channels, and then classified using deep learning models. We present a comprehensive evaluation of 2D Convolutional Neural Network (CNN) architectures for multiclass (24-class) classification of modulated images in the presence of noise and fading. The models differentiate between images modulated with Binary Phase Shift Keying, Quadrature Phase Shift Keying, and 16- and 64-Quadrature Amplitude Modulation and passed through Additive White Gaussian Noise, Rayleigh, and Rician channels. We obtained mixed results under different settings, such as data augmentation, disharmony between batch normalization (BN) and dropout (DO), and the absence of BN in the network. The best-performing model is a 2D-CNN exploiting the disharmony between BN and DO, trained with 10-fold cross-validation (CV) and data augmentation, using a small DO rate before the softmax layer and after every convolutional and fully connected layer alongside BN layers; the worst-performing model is the 2D-CNN trained with 5-fold CV without augmentation.
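          As a rough illustration of the architecture described above, the following is a minimal PyTorch sketch of a 2D-CNN that applies a small dropout after every convolutional and fully connected layer, together with batch normalization, and a small dropout before the final softmax. The layer sizes, input resolution, and dropout rate are illustrative assumptions, not the authors' exact configuration.

          # Hypothetical sketch: 2D-CNN with BN and a small dropout (DO) after every
          # convolutional and fully connected layer, plus a small DO before softmax.
          # Layer widths, 64x64 input size, and dropout rate are assumptions.
          import torch
          import torch.nn as nn

          class ModulationCNN(nn.Module):
              def __init__(self, num_classes: int = 24, dropout_p: float = 0.1):
                  super().__init__()
                  self.features = nn.Sequential(
                      nn.Conv2d(3, 32, kernel_size=3, padding=1),
                      nn.BatchNorm2d(32),
                      nn.ReLU(inplace=True),
                      nn.Dropout2d(dropout_p),      # small DO after conv + BN
                      nn.MaxPool2d(2),
                      nn.Conv2d(32, 64, kernel_size=3, padding=1),
                      nn.BatchNorm2d(64),
                      nn.ReLU(inplace=True),
                      nn.Dropout2d(dropout_p),
                      nn.MaxPool2d(2),
                  )
                  self.classifier = nn.Sequential(
                      nn.Flatten(),
                      nn.Linear(64 * 16 * 16, 128),  # assumes 64x64 input images
                      nn.BatchNorm1d(128),
                      nn.ReLU(inplace=True),
                      nn.Dropout(dropout_p),         # small DO before the softmax layer
                      nn.Linear(128, num_classes),
                  )

              def forward(self, x: torch.Tensor) -> torch.Tensor:
                  return self.classifier(self.features(x))

          # Example: a batch of 8 RGB images of size 64x64 -> logits over 24 classes.
          model = ModulationCNN()
          logits = model(torch.randn(8, 3, 64, 64))
          print(logits.shape)  # torch.Size([8, 24])

          The softmax itself would be applied implicitly through a cross-entropy loss during training; the 10-fold cross-validation and data augmentation mentioned in the abstract would wrap this model in a training loop not shown here.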

          Related collections

          Most cited references (35)


          Deep learning.

          Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.

            ImageNet classification with deep convolutional neural networks


              Representation learning: a review and new perspectives.

              The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks. This motivates longer term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation, and manifold learning.

                Author and article information

                Journal
                Wireless Communications and Mobile Computing
                Hindawi Limited
                ISSN: 1530-8677, 1530-8669
                July 10 2021
                2021: 1-15
                Affiliations
                [1 ]School of Electronics and Information Engineering, Harbin Institute of Technology, Harbin 150001, China
                [2 ]Department of Electrical and Computer Engineering, COMSATS University Islamabad, Sahiwal Campus, Sahiwal, Pakistan
                [3 ]CISTER Research Centre, ISEP, Politécnico do Porto, Portugal
                Article
                10.1155/2021/5539907
                © 2021

                https://creativecommons.org/licenses/by/4.0/
