
      Temporal phase unwrapping using deep learning

      research-article


          Abstract

          The multi-frequency temporal phase unwrapping (MF-TPU) method, a classical phase unwrapping algorithm for fringe projection techniques, can eliminate phase ambiguities even when measuring spatially isolated scenes or objects with discontinuous surfaces. In the simplest and most efficient MF-TPU configuration, two groups of phase-shifting fringe patterns with different frequencies are used: the high-frequency group is applied for 3D reconstruction of the tested object, and the unit-frequency group assists unwrapping of the high-frequency wrapped phase. The final measurement precision or sensitivity is determined by the number of fringes within the high-frequency pattern, provided that its absolute phase can be recovered without any fringe-order errors. However, owing to non-negligible noise and other error sources in actual measurements, the frequency of the high-frequency fringes is generally restricted to about 16, limiting the measurement accuracy. On the other hand, using additional intermediate sets of fringe patterns allows phases of higher frequency to be unwrapped, but at the expense of a prolonged pattern sequence. Building on recent advances in machine learning for computer vision and computational imaging, this work demonstrates that deep learning can realize TPU automatically through supervised learning, an approach termed deep learning-based temporal phase unwrapping (DL-TPU). DL-TPU substantially improves unwrapping reliability compared with MF-TPU, even under different types of error sources, e.g., intensity noise, low fringe modulation, projector nonlinearity, and motion artifacts. Furthermore, to the best of our knowledge, we demonstrate experimentally that a high-frequency phase with 64 periods can be directly and reliably unwrapped from a single unit-frequency phase using DL-TPU. These results highlight that challenging problems in optical metrology can potentially be overcome through machine learning, opening new avenues for designing powerful and highly accurate high-speed 3D imaging systems that are now ubiquitous in science, industry, and multimedia.
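          In the noise-free case, the two-frequency scheme described above reduces to a one-line fringe-order computation: the ambiguity-free unit-frequency phase is scaled up to the high frequency, and rounding the residual yields the integer fringe order. A minimal NumPy sketch of this classical MF-TPU step (the function name and interface are illustrative, not taken from the paper):

```python
import numpy as np

def mf_tpu_unwrap(phi_high, phi_unit, freq_high):
    """Classical MF-TPU fringe-order estimation (illustrative sketch).

    phi_high  -- wrapped high-frequency phase, in (-pi, pi]
    phi_unit  -- absolute unit-frequency phase (a single fringe covers the
                 whole field, so it carries no ambiguity), in [0, 2*pi)
    freq_high -- number of fringe periods in the high-frequency pattern
    """
    # Scale the unit-frequency phase to the high frequency, then round
    # the residual to obtain the integer fringe order k.
    k = np.round((freq_high * phi_unit - phi_high) / (2.0 * np.pi))
    # Absolute (unwrapped) high-frequency phase.
    return phi_high + 2.0 * np.pi * k
```

          Note that any error in phi_unit is amplified by freq_high before rounding, which is why the rounding step starts to produce fringe-order errors under realistic noise once the frequency grows beyond roughly 16, as the abstract explains.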


                Author and article information

                Contributors
                chenqian@njust.edu.cn
                zuochao@njust.edu.cn
                Journal
                Scientific Reports (Sci Rep), Nature Publishing Group UK (London)
                ISSN: 2045-2322
                Published: 27 December 2019
                Volume: 9, Article number: 20175
                Affiliations
                [1] School of Electronic and Optical Engineering, Nanjing University of Science and Technology, No. 200 Xiaolingwei Street, Nanjing, Jiangsu Province 210094, China
                [2] Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing University of Science and Technology, Nanjing, Jiangsu Province 210094, China
                [3] Smart Computational Imaging (SCI) Laboratory, Nanjing University of Science and Technology, Nanjing, Jiangsu Province 210094, China
                [4] Brookhaven National Laboratory, NSLS II, 50 Rutherford Drive, Upton, New York 11973-5000, United States
                [5] Institute of Micromechanics and Photonics, Warsaw University of Technology, 8 Sw. A. Boboli Street, Warsaw 02-525, Poland
                [6] Centre for Optical and Laser Engineering (COLE), School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore 639798, Singapore
                Author information
                http://orcid.org/0000-0002-9148-3401
                http://orcid.org/0000-0002-5645-5173
                Article
                DOI: 10.1038/s41598-019-56222-3
                PMCID: PMC6934795
                PMID: 31882669
                © The Author(s) 2019

                Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

                History
                Received: 19 May 2019
                Accepted: 9 December 2019
                Funding
                Funded by: National Natural Science Foundation of China (61722506, 61705105, 11574152), National Key R&D Program of China (2017YFF0106403), Final Assembly "13th Five-Year Plan" Advanced Research Project of China (30102070102), Equipment Advanced Research Fund of China (61404150202), The Key Research and Development Program of Jiangsu Province (BE2017162), Outstanding Youth Foundation of Jiangsu Province (BK20170034), National Defense Science and Technology Foundation of China (0106173), "333 Engineering" Research Project of Jiangsu Province (BRA2016407), Fundamental Research Funds for the Central Universities (30917011204), China Postdoctoral Science Foundation (2017M621747), Jiangsu Planned Projects for Postdoctoral Research Funds (1701038A).
                Categories
                Article
                Keywords: optical sensors, imaging and sensing
