      Optimization Algorithm of Moving Object Detection Using Multiscale Pyramid Convolutional Neural Networks

Research article
Zhe Yang et al.
Computational Intelligence and Neuroscience (Hindawi)


          Abstract

Object detection and recognition is an important topic with significant research value. To address the issues of insufficient positioning information and low detection accuracy, this research develops an optimised moving-target detection model based on a convolutional neural network (CNN). Target classification information and semantic location information are obtained by fusing a target detection model with a deep semantic segmentation model. The classification and localisation branches of the detection model are fed by fusing image features carrying different kinds of information into a pyramid of multiscale image features, so that the matched fused features allow the model to detect targets of various sizes and shapes. Experimental results show that the method reaches an accuracy of 0.941, which is 0.189 higher than that of the LSTM-NMS algorithm. Through transfer of the CNN and learning of context information, the technique is highly robust and improves both the scene adaptability of feature extraction and the accuracy of moving-target position detection.
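As a rough illustration of the multiscale pyramid fusion the abstract describes, the sketch below shows one common way to wire such a fusion in PyTorch: lateral 1x1 projections bring backbone feature maps at several scales to a common channel width, and a top-down pathway adds upsampled coarse features into finer ones before classification/localisation heads consume them. The class name PyramidFusion, the channel widths, and the three-level setup are assumptions chosen for readability, not the authors' implementation.

```python
# Minimal sketch (illustrative, not the paper's released code): FPN-style
# multiscale pyramid fusion of backbone feature maps.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PyramidFusion(nn.Module):
    def __init__(self, in_channels=(256, 512, 1024), out_channels=256):
        super().__init__()
        # 1x1 lateral convs project each backbone level to a common width.
        self.lateral = nn.ModuleList(
            nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels
        )
        # 3x3 convs smooth the fused maps before the detection heads.
        self.smooth = nn.ModuleList(
            nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)
            for _ in in_channels
        )

    def forward(self, features):
        # `features` is ordered from high resolution (shallow) to low resolution (deep).
        laterals = [lat(f) for lat, f in zip(self.lateral, features)]
        # Top-down pathway: upsample the coarser map and add it to the finer one.
        for i in range(len(laterals) - 2, -1, -1):
            laterals[i] = laterals[i] + F.interpolate(
                laterals[i + 1], size=laterals[i].shape[-2:], mode="nearest"
            )
        return [smooth(x) for smooth, x in zip(self.smooth, laterals)]


if __name__ == "__main__":
    # Dummy backbone outputs at three scales of a 256x256 input.
    feats = [torch.randn(1, 256, 64, 64),
             torch.randn(1, 512, 32, 32),
             torch.randn(1, 1024, 16, 16)]
    fused = PyramidFusion()(feats)
    print([f.shape for f in fused])  # every level now carries 256 channels
```

Injecting coarse, semantically rich features into the high-resolution maps is what lets a single detector handle targets of widely varying size, which is the property the abstract attributes to its pyramid structure.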


                Author and article information

Journal: Computational Intelligence and Neuroscience (Comput Intell Neurosci)
Publisher: Hindawi
ISSN: 1687-5265 (print), 1687-5273 (electronic)
Published: 10 March 2023
Volume: 2023, Article ID 3320547
Affiliations:
1. School of Computer Science and Technology, Soochow University, Suzhou 215006, China
2. Provincial Key Laboratory for Computer Information Processing Technology, Suzhou 215006, China
Academic Editor: Zhao Kaifa

Author ORCID: https://orcid.org/0000-0003-4980-0343
DOI: 10.1155/2023/3320547
PMC: 10024622
PMID: 36941949
                Copyright © 2023 Zhe Yang et al.

                This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

History: 21 July 2022; 10 August 2022; 12 August 2022
Funding: National Natural Science Foundation of China (Award ID 62002253); Natural Science Research of Jiangsu Higher Education Institutions of China; Priority Academic Program Development of Jiangsu Higher Education Institutions
Categories: Research Article; Neurosciences
