
      A High-Performance Deep Learning Algorithm for the Automated Optical Inspection of Laser Welding

      Applied Sciences
      MDPI AG


          Abstract

          The battery industry has been growing fast because of strong demand from electric vehicle and power storage applications. Laser welding is a key process in battery manufacturing, and to control production quality the industry has a strong need for automated defect inspection of laser welds. Recently, Convolutional Neural Networks (CNNs) have been applied with great success to detection, recognition, and classification. In this paper, using transfer learning and a pre-training approach based on the Visual Geometry Group (VGG) model, we propose an optimized VGG model to improve the efficiency of defect classification. Our model was run on an industrial computer with images taken from a battery manufacturing production line and achieved a testing accuracy of 99.87%. The main contributions of this study are as follows: (1) We show that the optimized VGG model, pre-trained on a large image database, can be used for the defect classification of laser welding. (2) We demonstrate that the pre-trained VGG model has a smaller model size, a lower false positive rate, and shorter training and prediction times, making it more suitable for quality inspection in an industrial environment. Additionally, we visualized the convolutional and max-pooling layers to make the model easier to inspect and optimize.
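          The pre-training idea behind the abstract can be illustrated in miniature: keep the learned feature extractor frozen and train only a small classifier head on the target task. The sketch below is a library-free stand-in, not the authors' VGG pipeline; `frozen_features` and the toy two-class data are invented for illustration.

```python
import math

# Transfer learning in miniature: a frozen "feature extractor" stands in for
# the pre-trained VGG convolutional layers; only a small classifier head
# (logistic regression) is trained on the new task (toy 2-D points, 2 classes).

def frozen_features(x):
    # Pre-trained layers are kept fixed; they just map inputs to features.
    return [x[0], x[1], x[0] * x[1]]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_head(data, labels, lr=0.5, epochs=200):
    """Train only the classifier head on top of frozen features (SGD)."""
    w = [0.0, 0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            f = frozen_features(x)
            p = sigmoid(sum(wi * fi for wi, fi in zip(w, f)) + b)
            g = p - y  # gradient of the log-loss w.r.t. the logit
            w = [wi - lr * g * fi for wi, fi in zip(w, f)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    return sigmoid(sum(wi * fi for wi, fi in zip(w, frozen_features(x))) + b)

# Toy "good" vs "defect" samples: a defect only when both cues are present.
data = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
labels = [0, 0, 0, 1]
w, b = train_head(data, labels)
```

Only the head's weights change during training, which is why the paper's approach needs far less data and time than training a full network from scratch.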

          Most cited references (8)


          Focal Loss for Dense Object Detection

          The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors. Code is at: https://github.com/facebookresearch/Detectron.
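The reshaped cross entropy described above can be sketched in a few lines. This is a minimal binary form of the focal loss, FL(p_t) = -alpha_t (1 - p_t)^gamma log(p_t), with the paper's commonly cited defaults gamma = 2 and alpha = 0.25; it is an illustrative sketch, not the RetinaNet implementation.

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: -alpha_t * (1 - p_t)**gamma * log(p_t).

    p: predicted probability of the positive class; y: label in {0, 1}.
    The (1 - p_t)**gamma factor down-weights well-classified examples,
    so easy negatives cannot overwhelm training.
    """
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

# A well-classified positive (p = 0.9) contributes far less loss than a
# hard one (p = 0.1); with gamma = 0 and alpha = 1 the loss reduces to
# plain cross entropy.
easy = focal_loss(0.9, 1)
hard = focal_loss(0.1, 1)
```

With gamma = 2, an example at p_t = 0.9 is down-weighted by a factor of (0.1)^2 = 0.01 relative to unweighted cross entropy, which is exactly the "focusing" effect the abstract describes.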

            Large-Margin Softmax Loss for Convolutional Neural Networks

Cross-entropy loss together with softmax is arguably one of the most commonly used supervision components in convolutional neural networks (CNNs). Despite its simplicity, popularity, and excellent performance, the component does not explicitly encourage discriminative learning of features. In this paper, we propose a generalized large-margin softmax (L-Softmax) loss which explicitly encourages intra-class compactness and inter-class separability between learned features. Moreover, L-Softmax can not only adjust the desired margin but also help avoid overfitting. We also show that the L-Softmax loss can be optimized by typical stochastic gradient descent. Extensive experiments on four benchmark datasets demonstrate that the deeply learned features with L-Softmax loss become more discriminative, significantly boosting performance on a variety of visual classification and verification tasks.
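Concretely, writing the target-class logit as W_{y_i}^T x_i = ||W_{y_i}|| ||x_i|| cos(theta_{y_i}), the L-Softmax loss of the paper sharpens the required angle for the correct class by an integer margin m (m = 1 recovers the standard softmax loss):

```latex
L_i = -\log
  \frac{e^{\|W_{y_i}\|\,\|x_i\|\,\psi(\theta_{y_i})}}
       {e^{\|W_{y_i}\|\,\|x_i\|\,\psi(\theta_{y_i})}
        + \sum_{j \neq y_i} e^{\|W_j\|\,\|x_i\|\cos\theta_j}},
\qquad
\psi(\theta) = (-1)^k \cos(m\theta) - 2k,
\quad \theta \in \left[\tfrac{k\pi}{m}, \tfrac{(k+1)\pi}{m}\right],
\; k \in \{0, \dots, m-1\}.
```

Because psi(theta) <= cos(theta) for m > 1, the correct class must win by an angular margin, which is what produces the intra-class compactness and inter-class separability the abstract claims.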

              Empirical Evaluation of Activation Functions in Deep Convolution Neural Network for Facial Expression Recognition


                Author and article information

                Journal: Applied Sciences (MDPI AG)
                ISSN: 2076-3417
                Published: January 31 2020 (February 2020 issue)
                Volume 10, Issue 3, Article 933
                DOI: 10.3390/app10030933
                © 2020. Licensed under CC BY 4.0: https://creativecommons.org/licenses/by/4.0/
