
      Enhancing the Visibility of Delamination during Pulsed Thermography of Carbon Fiber-Reinforced Plates Using a Stacked Autoencoder

      research-article


          Abstract

          The effectiveness of pulsed thermography (PT) for detecting delamination in carbon fiber-reinforced polymer (CFRP) plates has been widely verified. However, delaminations often show weak visibility because of the influence of inspection factors, and delaminations with weak visibility are easily missed in real inspections. In this study, by introducing a deep learning algorithm, the stacked autoencoder (SAE), into PT, we propose a novel approach (SAE-PT) to enhance the visibility of delaminations. Exploiting the ability of the SAE to learn features from data without supervision, the thermal features of delaminations are extracted from the raw thermograms. The extracted features are then used to construct SAE images, in which the visibility of delaminations is expected to be enhanced. To test the performance of SAE-PT, we inspected CFRP plates with prefabricated delaminations. When SAE-PT was applied to the raw inspection data, the delaminations were indicated more clearly in the constructed SAE images. We also compared SAE-PT with the widely used principal component thermography (PCT) method to further verify the validity of the proposed approach. The results reveal that, compared with PCT, SAE-PT shows delaminations in CFRP with higher contrast. By effectively enhancing delamination visibility, SAE-PT thus has the potential to improve the inspection accuracy of PT for non-destructive testing (NDT) of CFRP.
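          The record does not include code, but as a rough illustration of the workflow described in the abstract, the sketch below shows how a stacked autoencoder could be trained, without supervision, on per-pixel temperature-time curves from a thermogram sequence, and how its learned features could be reshaped into "SAE images". It is a minimal sketch built on assumptions (PyTorch, the layer sizes, the synthetic 64 x 64 x 100 sequence), not the authors' implementation.

          # Minimal sketch (NOT the authors' code): per-pixel cooling curves from a
          # pulsed-thermography sequence are compressed by a small stacked autoencoder;
          # each learned feature, reshaped to the image grid, gives one "SAE image".
          import torch
          import torch.nn as nn

          H, W, T = 64, 64, 100            # image size and number of frames (assumed)
          frames = torch.rand(T, H, W)     # placeholder for the raw thermogram sequence
          x = frames.reshape(T, -1).T      # (H*W, T): one temperature-time curve per pixel

          class StackedAE(nn.Module):
              def __init__(self, n_in, n_hidden=(32, 8)):
                  super().__init__()
                  self.encoder = nn.Sequential(
                      nn.Linear(n_in, n_hidden[0]), nn.Sigmoid(),
                      nn.Linear(n_hidden[0], n_hidden[1]), nn.Sigmoid(),
                  )
                  self.decoder = nn.Sequential(
                      nn.Linear(n_hidden[1], n_hidden[0]), nn.Sigmoid(),
                      nn.Linear(n_hidden[0], n_in),
                  )

              def forward(self, x):
                  z = self.encoder(x)
                  return self.decoder(z), z

          model = StackedAE(T)
          opt = torch.optim.Adam(model.parameters(), lr=1e-3)
          loss_fn = nn.MSELoss()

          for epoch in range(200):                 # unsupervised reconstruction training
              opt.zero_grad()
              recon, _ = model(x)
              loss = loss_fn(recon, x)
              loss.backward()
              opt.step()

          with torch.no_grad():
              _, features = model(x)               # (H*W, 8) learned thermal features
          sae_images = features.T.reshape(-1, H, W)  # each feature map is one "SAE image"

          For the PCT baseline mentioned in the abstract, the autoencoder would be replaced by a principal component analysis of the same pixel-by-time matrix, with the leading components reshaped back into images.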


          Most cited references (39)


          Learning hierarchical features for scene labeling.

          Scene labeling consists of labeling each pixel in an image with the category of the object it belongs to. We propose a method that uses a multiscale convolutional network trained from raw pixels to extract dense feature vectors that encode regions of multiple sizes centered on each pixel. The method alleviates the need for engineered features, and produces a powerful representation that captures texture, shape, and contextual information. We report results using multiple postprocessing methods to produce the final labeling. Among those, we propose a technique to automatically retrieve, from a pool of segmentation components, an optimal set of components that best explain the scene; these components are arbitrary, for example, they can be taken from a segmentation tree or from any family of oversegmentations. The system yields record accuracies on the SIFT Flow dataset (33 classes) and the Barcelona dataset (170 classes) and near-record accuracy on the Stanford background dataset (eight classes), while being an order of magnitude faster than competing approaches, producing a 320 x 240 image labeling in less than a second, including feature extraction.
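          The cited paper's pipeline is more involved, but the core multiscale idea summarized above can be sketched briefly: the same small convolutional feature extractor is applied to an image pyramid, the resulting maps are upsampled back to full resolution and concatenated, and a per-pixel classifier assigns a label. The sketch below is only meant to make that idea concrete; the layer sizes, scales, and class count are assumptions, not the paper's architecture.

          # Rough sketch of multiscale per-pixel feature extraction and labeling
          # (not the cited paper's implementation).
          import torch
          import torch.nn as nn
          import torch.nn.functional as F

          class MultiScaleLabeler(nn.Module):
              def __init__(self, n_classes=8, scales=(1.0, 0.5, 0.25)):
                  super().__init__()
                  self.scales = scales
                  self.features = nn.Sequential(         # shared weights across scales
                      nn.Conv2d(3, 16, 7, padding=3), nn.ReLU(),
                      nn.Conv2d(16, 32, 7, padding=3), nn.ReLU(),
                  )
                  self.classifier = nn.Conv2d(32 * len(scales), n_classes, 1)

              def forward(self, img):                    # img: (N, 3, H, W)
                  h, w = img.shape[-2:]
                  maps = []
                  for s in self.scales:
                      scaled = img if s == 1.0 else F.interpolate(
                          img, scale_factor=s, mode='bilinear', align_corners=False)
                      f = self.features(scaled)          # features at this scale
                      maps.append(F.interpolate(f, size=(h, w), mode='bilinear',
                                                align_corners=False))
                  return self.classifier(torch.cat(maps, dim=1))  # per-pixel class scores

          logits = MultiScaleLabeler()(torch.rand(1, 3, 240, 320))
          labels = logits.argmax(dim=1)                  # (1, 240, 320) predicted labels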

            Global Contrast Based Salient Region Detection.

            Automatic estimation of salient object regions across images, without any prior assumption or knowledge of the contents of the corresponding scenes, enhances many computer vision and computer graphics applications. We introduce a regional contrast based salient object detection algorithm, which simultaneously evaluates global contrast differences and spatial weighted coherence scores. The proposed algorithm is simple, efficient, naturally multi-scale, and produces full-resolution, high-quality saliency maps. These saliency maps are further used to initialize a novel iterative version of GrabCut, namely SaliencyCut, for high quality unsupervised salient object segmentation. We extensively evaluated our algorithm using traditional salient object detection datasets, as well as a more challenging Internet image dataset. Our experimental results demonstrate that our algorithm consistently outperforms 15 existing salient object detection and segmentation methods, yielding higher precision and better recall rates. We also show that our algorithm can be used to efficiently extract salient object masks from Internet images, enabling effective sketch-based image retrieval (SBIR) via simple shape comparisons. Despite such noisy internet images, where the saliency regions are ambiguous, our saliency guided image retrieval achieves a superior retrieval rate compared with state-of-the-art SBIR methods, and additionally provides important target object region information.
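              As with the previous reference, the following is only a rough sketch of the global (histogram) contrast idea summarized above, not the paper's implementation: a pixel's saliency is taken as its colour distance to all other quantised colours, weighted by how often those colours occur. The quantisation level and the toy input are assumptions.

              # Histogram-based global contrast saliency, sketched with NumPy.
              import numpy as np

              def histogram_contrast_saliency(img, bins=8):
                  """img: float RGB array in [0, 1] with shape (H, W, 3)."""
                  h, w, _ = img.shape
                  q = np.minimum((img * bins).astype(int), bins - 1)      # quantise colours
                  idx = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]
                  counts = np.bincount(idx.ravel(), minlength=bins ** 3)
                  probs = counts / counts.sum()                           # colour frequencies

                  # representative colour of every histogram bin
                  centers = (np.stack(np.meshgrid(*[np.arange(bins)] * 3, indexing='ij'),
                                      axis=-1).reshape(-1, 3) + 0.5) / bins
                  # pairwise colour distances weighted by frequency -> per-bin saliency
                  dists = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
                  bin_saliency = dists @ probs

                  sal = bin_saliency[idx]                                 # map back to pixels
                  return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)

              saliency = histogram_contrast_saliency(np.random.rand(120, 160, 3))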

              Going deeper with convolutions


                Author and article information

                Journal
                Sensors (Basel, Switzerland)
                Publisher: MDPI
                ISSN: 1424-8220
                Published: 25 August 2018 (September 2018)
                Volume: 18
                Issue: 9
                Article number: 2809
                Affiliations
                [1] College of Mechanical and Electronic Engineering, China University of Petroleum, Qingdao 266580, China; xiejing@upc.edu.cn (J.X.); 18754280118@163.com (C.W.); gaolemeihx@163.com (L.G.); gmchen@upc.edu.cn (G.C.)
                [2] Department of Mechanical Engineering, University of Houston, Houston, TX 77004, USA
                Author notes
                [*] Correspondence: chxu@upc.edu.cn (C.X.); gsong@uh.edu (G.S.); Tel.: +86-532-8698-3503 (ext. 8707) (C.X.); +1-713-743-4525 (G.S.)
                Author information
                https://orcid.org/0000-0001-5135-5555
                Article
                sensors-18-02809
                DOI: 10.3390/s18092809
                PMCID: 6164668
                PMID: 30149654
                0cee18b6-43c3-4b3b-ba10-086c03121955
                © 2018 by the authors.

                Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

                History
                Received: 21 July 2018
                Accepted: 22 August 2018
                Categories
                Article

                Biomedical engineering
                stacked autoencoder (SAE), pulsed thermography (PT), delamination detection, carbon fiber-reinforced polymer
