
      Improvement of illumination-insensitive features for face recognition under complex illumination conditions


          Abstract

Complex illumination conditions are among the most challenging problems for practical face recognition. In this study, the authors propose a novel method to improve illumination invariants for addressing this challenge. First, a new method based on the Lambert reflectance model is proposed to extract an illumination invariant that is insensitive to complex illumination variations. Second, to repair defects introduced during illumination-invariant extraction, a fast mean filter is used for smoothing and noise removal. Finally, to increase the richness of information in the output image, a nonlinear normalisation transformation is proposed. Experimental results show that, compared with state-of-the-art methods, the proposed method extracts more robust illumination invariants; moreover, the processed images contain richer information and yield a superior face recognition rate.
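The abstract's three stages can be sketched in code. This is a minimal illustration under stated assumptions, not the authors' exact formulation: the invariant is taken here as the ratio of a pixel to its local mean (a common way the slowly varying illumination term L in the Lambert model I = R · L cancels), and the nonlinear normalisation is a hypothetical power law; the paper's actual invariant and transformation may differ.

```python
import numpy as np

def mean_filter3(img):
    """3x3 box (mean) filter via shifted sums over an edge-padded copy."""
    h, w = img.shape
    p = np.pad(img, 1, mode='edge')
    out = np.zeros((h, w), dtype=np.float64)
    for dy in range(3):
        for dx in range(3):
            out += p[dy:dy + h, dx:dx + w]
    return out / 9.0

def illumination_invariant(img, eps=1e-6, gamma=0.5):
    """Sketch of the abstract's three-stage pipeline:
    1) extract an illumination invariant under the Lambert model
       I = R * L -- the slowly varying L roughly cancels in the ratio
       of a pixel to its local mean (stand-in for the paper's invariant);
    2) smooth with a fast mean filter to repair extraction noise;
    3) apply a nonlinear normalisation (a power law, as a hypothetical
       choice) to spread the output intensity distribution."""
    img = img.astype(np.float64)
    invariant = img / (mean_filter3(img) + eps)   # stage 1: cancel L
    smoothed = mean_filter3(invariant)            # stage 2: mean filter
    s = smoothed - smoothed.min()                 # stage 3: map to [0, 1]
    s = s / (s.max() + eps)
    return s ** gamma
```

Because the ratio form divides out any global illumination scale, a uniformly brightened copy of the same face should map to nearly the same output.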

Most cited references (26)


          The CMU pose, illumination, and expression database


            Enhanced local texture feature sets for face recognition under difficult lighting conditions.

X. Tan, B. Triggs (2010)
            Making recognition more reliable under uncontrolled lighting conditions is one of the most important challenges for practical face recognition systems. We tackle this by combining the strengths of robust illumination normalization, local texture-based face representations, distance transform based matching, kernel-based feature extraction and multiple feature fusion. Specifically, we make three main contributions: 1) we present a simple and efficient preprocessing chain that eliminates most of the effects of changing illumination while still preserving the essential appearance details that are needed for recognition; 2) we introduce local ternary patterns (LTP), a generalization of the local binary pattern (LBP) local texture descriptor that is more discriminant and less sensitive to noise in uniform regions, and we show that replacing comparisons based on local spatial histograms with a distance transform based similarity metric further improves the performance of LBP/LTP based face recognition; and 3) we further improve robustness by adding Kernel principal component analysis (PCA) feature extraction and incorporating rich local appearance cues from two complementary sources--Gabor wavelets and LBP--showing that the combination is considerably more accurate than either feature set alone. The resulting method provides state-of-the-art performance on three data sets that are widely used for testing recognition under difficult illumination conditions: Extended Yale-B, CAS-PEAL-R1, and Face Recognition Grand Challenge version 2 experiment 4 (FRGC-204). For example, on the challenging FRGC-204 data set it halves the error rate relative to previously published methods, achieving a face verification rate of 88.1% at 0.1% false accept rate. Further experiments show that our preprocessing method outperforms several existing preprocessors for a range of feature sets, data sets and lighting conditions.
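The local ternary pattern described above can be sketched compactly. This is a generic 8-neighbour LTP coding with the usual split into two LBP-style codes; the threshold value and neighbourhood layout here are illustrative assumptions, not necessarily those used in the paper.

```python
import numpy as np

def ltp_codes(img, t=5):
    """Local ternary pattern over the 8-neighbourhood, split into
    'upper' and 'lower' LBP-style codes. Each neighbour is coded
    +1 if >= centre + t, -1 if <= centre - t, else 0; the dead zone
    of width 2t gives the noise resistance in uniform regions that
    the abstract describes."""
    img = img.astype(np.int32)
    h, w = img.shape
    c = img[1:h - 1, 1:w - 1]                    # centre pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]  # clockwise neighbours
    upper = np.zeros_like(c)                      # bits for the +1 codes
    lower = np.zeros_like(c)                      # bits for the -1 codes
    for bit, (dy, dx) in enumerate(offsets):
        n = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        upper |= (n >= c + t).astype(np.int32) << bit
        lower |= (n <= c - t).astype(np.int32) << bit
    return upper, lower
```

A uniform patch produces all-zero codes in both channels (where plain LBP would flip bits on any noise), while a strong local contrast sets bits in exactly one of the two codes.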

              A multiscale retinex for bridging the gap between color images and the human observation of scenes.

              Direct observation and recorded color images of the same scenes are often strikingly different because human visual perception computes the conscious representation with vivid color and detail in shadows, and with resistance to spectral shifts in the scene illuminant. A computation for color images that approaches fidelity to scene observation must combine dynamic range compression, color consistency-a computational analog for human vision color constancy-and color and lightness tonal rendition. In this paper, we extend a previously designed single-scale center/surround retinex to a multiscale version that achieves simultaneous dynamic range compression/color consistency/lightness rendition. This extension fails to produce good color rendition for a class of images that contain violations of the gray-world assumption implicit to the theoretical foundation of the retinex. Therefore, we define a method of color restoration that corrects for this deficiency at the cost of a modest dilution in color consistency. Extensive testing of the multiscale retinex with color restoration on several test scenes and over a hundred images did not reveal any pathological behaviour.
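The multiscale extension described above is the average of several single-scale centre/surround retinexes, each of which subtracts the log of a Gaussian-blurred surround from the log of the image. A minimal sketch follows; the surround scales are the commonly quoted defaults, and the colour-restoration step the paper adds for gray-world violations is omitted.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur built from two 1-D convolutions
    over an edge-padded copy of the image."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()                                  # kernel sums to 1
    pad = np.pad(img, radius, mode='edge')
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode='valid'), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='valid'), 0, rows)

def multiscale_retinex(img, sigmas=(15, 80, 250)):
    """Multiscale retinex: average of log(I) - log(G_sigma * I) over
    several surround scales sigma. Small scales give dynamic range
    compression; large scales preserve tonal rendition. The default
    scales are typical published choices, not the paper's exact ones."""
    img = img.astype(np.float64) + 1.0            # avoid log(0)
    out = np.zeros_like(img)
    for s in sigmas:
        out += np.log(img) - np.log(gaussian_blur(img, s))
    return out / len(sigmas)
```

On a perfectly uniform image the surround equals the centre at every scale, so the retinex output is zero everywhere — the operator responds only to spatial contrast.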

                Author and article information

Journal: The Journal of Engineering (JOE; J. Eng.)
Publisher: The Institution of Engineering and Technology
ISSN: 2051-3305
Dates: 26 October 2018; 10 December 2018; December 2018
Volume: 2018, Issue: 12, Pages: 1947-1953
Affiliations
[1] School of Communication and Information Engineering, Shanghai University, Shanghai, People's Republic of China
[2] Faculty of Electronic and Information Engineering, Huaiyin Institute of Technology, Huaian 223003, Jiangsu, People's Republic of China
[3] Key Laboratory of Advanced Displays and System Application, Ministry of Education, Shanghai 200444, People's Republic of China
Article ID: JOE.2018.5055
DOI: 10.1049/joe.2018.5055

This is an open access article published by the IET under the Creative Commons Attribution-NonCommercial-NoDerivs License (http://creativecommons.org/licenses/by-nc-nd/3.0/).

Funding
Funded by: National Natural Science Foundation of China (Award IDs: 11176016, 60872117)
                Categories
                ee-sip
                Research Article
