
      Bearing Fault Reconstruction Diagnosis Method Based on ResNet-152 with Multi-Scale Stacked Receptive Field

      Sensors
      MDPI AG


          Abstract

          The axle box in the bogie system of a subway train is a key component connecting the primary damper and the axle. To extract deep features and large-scale fault features for rapid diagnosis, a novel fault reconstruction characteristic classification method based on a deep residual network with a multi-scale stacked receptive field (ResNet-152-MSRF) is proposed for the rolling bearings of a subway train axle box. First, multi-layer stacked convolutional kernels and methods for inserting them into ultra-deep residual networks are developed. Then, the acquired vibration signals of four fault types are reconstructed with a Gramian angular summation field (GASF), yielding trainable large-scale 2D time-series images. Finally, the experimental results show that ResNet-152-MSRF has a simple network structure, fewer trainable parameters than general convolutional neural networks, and no significant increase in parameter count or computation time after the multi-layer stacked convolutional kernels are embedded. Moreover, accuracy improves significantly compared with shallower networks, and slightly compared with networks without the embedded multi-layer stacked convolutional kernels.
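          As a rough illustration of the GASF reconstruction step mentioned above, the sketch below encodes a 1-D vibration window as a 2-D image in plain NumPy. The window length, the synthetic signal, and the helper name gasf_image are illustrative assumptions, not the authors' code.

import numpy as np

def gasf_image(signal):
    """Encode a 1-D signal as a Gramian angular summation field (GASF) image."""
    # Rescale the signal into [-1, 1] so each sample can be treated as cos(phi).
    x = np.asarray(signal, dtype=float)
    x = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0
    # Polar encoding: angle phi = arccos(x).
    phi = np.arccos(np.clip(x, -1.0, 1.0))
    # GASF(i, j) = cos(phi_i + phi_j).
    return np.cos(phi[:, None] + phi[None, :])

# Illustrative use: a 224-sample window becomes a 224 x 224 image
# that a ResNet-style network can take as input.
window = np.sin(np.linspace(0, 20 * np.pi, 224)) + 0.1 * np.random.randn(224)
image = gasf_image(window)
print(image.shape)  # (224, 224)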

          Related collections

          Most cited references (25)


          Artificial intelligence for fault diagnosis of rotating machinery: A review


            Res2Net: A New Multi-scale Backbone Architecture

            Representing features at multiple scales is of great importance for numerous vision tasks. Recent advances in backbone convolutional neural networks (CNNs) continually demonstrate stronger multi-scale representation ability, leading to consistent performance gains on a wide range of applications. However, most existing methods represent the multi-scale features in a layer-wise manner. In this paper, we propose a novel building block for CNNs, namely Res2Net, by constructing hierarchical residual-like connections within one single residual block. The Res2Net represents multi-scale features at a granular level and increases the range of receptive fields for each network layer. The proposed Res2Net block can be plugged into the state-of-the-art backbone CNN models, e.g., ResNet, ResNeXt, and DLA. We evaluate the Res2Net block on all these models and demonstrate consistent performance gains over baseline models on widely-used datasets, e.g., CIFAR-100 and ImageNet. Further ablation studies and experimental results on representative computer vision tasks, i.e., object detection, class activation mapping, and salient object detection, further verify the superiority of the Res2Net over the state-of-the-art baseline methods. The source code and trained models are available on https://mmcheng.net/res2net/.
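            To make the hierarchical split-and-connect idea from the Res2Net abstract concrete, here is a minimal PyTorch-style sketch of a single multi-scale split module. The class name, channel count, and scale value are illustrative assumptions; this is not the reference implementation from the linked repository.

import torch
import torch.nn as nn

class Res2NetSplit(nn.Module):
    """Hierarchical residual-like connections inside one block (Res2Net idea)."""
    def __init__(self, channels, scale=4):
        super().__init__()
        assert channels % scale == 0
        self.scale = scale
        width = channels // scale
        # One 3x3 conv per split except the first, which is passed through unchanged.
        self.convs = nn.ModuleList(
            nn.Conv2d(width, width, kernel_size=3, padding=1) for _ in range(scale - 1)
        )

    def forward(self, x):
        splits = torch.chunk(x, self.scale, dim=1)
        out = [splits[0]]                 # first split: identity
        prev = None
        for i, conv in enumerate(self.convs):
            inp = splits[i + 1] if prev is None else splits[i + 1] + prev
            prev = conv(inp)
            out.append(prev)              # each split sees the outputs of earlier splits
        return torch.cat(out, dim=1)      # multi-scale features, same channel count

# Illustrative use inside a residual block:
x = torch.randn(1, 64, 56, 56)
y = Res2NetSplit(64, scale=4)(x)
print(y.shape)  # torch.Size([1, 64, 56, 56])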

              Deep learning for classification and localization of COVID-19 markers in point-of-care lung ultrasound

              Deep learning (DL) has proved successful in medical imaging and, in the wake of the recent COVID-19 pandemic, some works have started to investigate DL-based solutions for the assisted diagnosis of lung diseases. While existing works focus on CT scans, this paper studies the application of DL techniques for the analysis of lung ultrasonography (LUS) images. Specifically, we present a novel fully-annotated dataset of LUS images collected from several Italian hospitals, with labels indicating the degree of disease severity at a frame-level, video-level, and pixel-level (segmentation masks). Leveraging these data, we introduce several deep models that address relevant tasks for the automatic analysis of LUS images. In particular, we present a novel deep network, derived from Spatial Transformer Networks, which simultaneously predicts the disease severity score associated to a input frame and provides localization of pathological artefacts in a weakly-supervised way. Furthermore, we introduce a new method based on uninorms for effective frame score aggregation at a video-level. Finally, we benchmark state of the art deep models for estimating pixel-level segmentations of COVID-19 imaging biomarkers. Experiments on the proposed dataset demonstrate satisfactory results on all the considered tasks, paving the way to future research on DL for the assisted diagnosis of COVID-19 from LUS data.
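              The abstract above mentions aggregating frame-level scores into a video-level score with uninorms. The sketch below uses one standard representable uninorm (neutral element 0.5, sometimes called the 3-Pi operator) purely to illustrate how such an aggregation behaves; it is an assumption for illustration, not the paper's actual rule.

import numpy as np

def uninorm(a, b, eps=1e-6):
    """Representable uninorm with neutral element 0.5 (the '3-Pi' operator)."""
    a = np.clip(a, eps, 1 - eps)
    b = np.clip(b, eps, 1 - eps)
    return (a * b) / (a * b + (1 - a) * (1 - b))

def aggregate_frame_scores(scores):
    """Fold per-frame probabilities into one video-level score with a uninorm."""
    video_score = 0.5                      # neutral element: no evidence yet
    for s in scores:
        video_score = uninorm(video_score, s)
    return video_score

# Illustrative use: frames above 0.5 push the video score up, frames below 0.5 pull it down.
print(aggregate_frame_scores([0.6, 0.7, 0.4]))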

                Author and article information

                Journal: Sensors (SENSC9), MDPI AG, ISSN 1424-8220
                Issue: March 2022; published 22 February 2022
                Volume 22, Issue 5, Article 1705
                DOI: 10.3390/s22051705
                © 2022
                License: https://creativecommons.org/licenses/by/4.0/

