Three-Dimensional Mirror Surface Measurement Based on Local Blur Analysis of Phase Measuring Deflectometry System

Hongyu Sun, Le Wang, Zhan Song, Geng Chen

College of Electronic and Information Engineering, Shandong University of Science and Technology, Qingdao 266590, China

Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China

Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China

Corresponding Author Email: zhan.song@siat.ac.cn

Pages: 763-771 | DOI: https://doi.org/10.18280/ts.370508

Received: 21 May 2020 | Revised: 29 August 2020 | Accepted: 6 September 2020 | Available online: 25 November 2020

© 2020 IIETA. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

Abstract: 

Despite marked progress in recent years, structured light-based three-dimensional (3D) measurement techniques still have difficulty in capturing mirror surface reflection. The accuracy of 3D reconstruction for mirror objects must be further improved to cope with the high reflectivity and curvature of such objects. To improve the stripe definition and reconstruction accuracy of highly reflective mirror objects, this paper analyzes the local blur of defocused stripes in the phase measuring deflectometry (PMD) system, and presents a method that models and removes the spatially varying defocus blur with the aid of the 3D block matching (BM3D) algorithm, thereby bringing the defocused stripes back into focus. Experimental results show that the proposed method can achieve micron-level reconstruction accuracy on standard flat mirrors, and detect the defects on highly reflective mirror objects at a high precision.

Keywords: 

three-dimensional (3D) imaging, phase measuring deflectometry (PMD), local blur, integral reconstruction

1. Introduction

Optical three-dimensional (3D) imaging has obvious advantages in reconstructing objects with diffuse reflection, in terms of speed, accuracy, and stability. As a result, the technology has been widely used in industrial manufacturing, product inspection, reverse engineering, biomedicine, and other fields [1-3]. Early 3D imaging methods mostly adopted the contact measurement model: the coordinates of each point on the reflective surface are obtained by moving a stylus, and combined to obtain the 3D surface shape of the reflective object. Despite its high accuracy, this traditional approach imposes strict requirements on the measuring conditions, because the contact probe might damage the mirror surface.

Recently, non-contact 3D imaging has attracted wide attention, owing to its large measuring range and freedom from surface damage. Taking binary shifting stripes as the structured light pattern, Song et al. [4] presented a high dynamic range structured light method for the 3D measurement of specular surfaces, and experimentally proved that their method can precisely reconstruct specular targets of various shapes. Han et al. [5] proposed an accurate phase measuring deflectometry (PMD) method for the 3D reconstruction of mirror surfaces, and demonstrated its high precision through experiments. Song et al. [6] developed an advanced fusion strategy for the reconstruction of complex objects in micrometer-level 3D measurement, and adopted a novel scene-adaptive decoding algorithm based on a binary tree to improve the robustness of decoding and eliminate the effects of noise and occlusion on stripe detection.

PMD is a popular non-contact 3D imaging method [7-9]. In this technique, the surface phase of the target object is extracted from the deformed sine stripe pattern captured by a charge-coupled device (CCD) camera. The quality of the captured stripes directly bears on the phase calculation and thus on the reconstruction accuracy of the target object. However, the captured stripes are inevitably blurred by the limited depth of field of the CCD. Moreover, there are few reports on the acquisition of high-quality stripes, which is a key step in structured light 3D measurement of object surfaces.

After obtaining the stripes, researchers usually reduce measurement errors through algorithmic compensation. Considering the nonlinearity of the stripe projection system, various compensation methods have been proposed, namely, the structured light method of Ronchi grating encoding, the calibration of the nonlinear Gamma value, the look-up table (LUT) method, light intensity compensation, and the pre-distortion stripe method. These methods are also applicable to PMD. In addition, different compensation methods have emerged to tackle the phase error induced by the quantization of stripe intensity in stripe projection, namely, a multi-period phase shift measurement method, and a phase measurement method that removes the peak and valley pixels.

High-quality stripes are the prerequisite for high-precision measurement and reconstruction. To obtain such stripes, this paper analyzes the spatially varying local blur of the deformed stripes, establishes a defocus model of the scene, and brings the deformed stripes back into focus. High-precision stripes are then obtained efficiently by the 3D block matching (BM3D) algorithm, and the mirror object is reconstructed three-dimensionally at high precision, using the regional wavefront reconstruction algorithm based on the Southwell model. In this way, highly reflective mirror objects are reconstructed accurately.

The remainder of this paper is organized as follows: Section 2 presents the spatially varying defocusing and de-blurring analyses, and the integral reconstruction algorithm; Section 3 verifies the proposed method through experiments, and discusses the experimental results; Section 4 puts forward the conclusions, and looks forward to future research.

2. Principle and Key Algorithms of PMD

2.1 Construction and principle of PMD system

The hardware of the PMD system encompasses a liquid crystal display (LCD) and a CCD camera. The display is synchronized with the camera under the control of a computer. Figures 1 and 2 show a photo and the principle of the PMD system, respectively. The display presents the standard sine stripes, while the camera captures their reflection off the target mirror. In other words, the camera observes the stripe pattern on the display via the target mirror. The stripes reflected by the mirror carry the shape information of the surface; demodulating this stripe information yields the 3D surface topography of the target mirror. Figure 3 explains the workflow of 3D shape measurement of mirror objects by classical PMD.

Figure 1. The hardware of the PMD system

Figure 2. The principle of the PMD system

Figure 3. The workflow of 3D shape measurement of mirror objects by classical PMD

2.2 Calibration of PMD system

In 3D imaging detection, system calibration is the precondition for high-precision measurement. Here, the two-dimensional (2D) structured light is connected with the 3D shape of the target mirror, based on the mapping between the stripe phase determined by PMD and the surface gradient of the mirror.

The system calibration can be divided into two parts: the calibration of the CCD camera, and the calibration of the system geometry. The latter mainly concerns the reference plane and the LCD position. The main calibration parameters of the PMD system were determined by the calibration measurement system [10-12], namely, the focal length F, the principal point coordinates C of the image, the distortion parameter K, the rotation vector R, and the translation vector T. The derivation of these parameters is detailed in the works of Huang et al. [7], Knauer et al. [8] and Zuo et al. [9].

Table 1. The configuration of system calibration parameters

Checkerboard dimensions: 9 rows × 11 columns
CCD resolution: 2,080 × 1,552 (HIKVISION)
LCD resolution: 1,920 × 1,080 (HP 24es)
Checkerboard cell size: 40 LCD pixels
LCD pixel size: 0.271 mm × 0.271 mm
Cell side length: 10.84 mm
Number of captured images: 20

The first step of system calibration is to generate a checkerboard by computer for the calibration of the CCD camera. Here, the checkerboard has 9 rows and 11 columns. The size of each grid on the checkerboard was set to 40 pixels; given the unit pixel size of the LCD screen (0.271 mm), the side length of each grid is 40 × 0.271 mm = 10.84 mm.
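
As an illustration, the short Python sketch below generates such a checkerboard image for display on the LCD. The output file name and the use of OpenCV for saving are our own assumptions, not part of the original system.

import numpy as np
import cv2  # used here only for saving the image

# A 9 x 11 checkerboard with 40-pixel squares; at a pixel pitch of
# 0.271 mm the physical square size is 40 * 0.271 = 10.84 mm.
ROWS, COLS, CELL = 9, 11, 40

row_idx = np.arange(ROWS * CELL) // CELL
col_idx = np.arange(COLS * CELL) // CELL
pattern = (row_idx[:, None] + col_idx[None, :]) % 2   # alternating squares
board = (pattern * 255).astype(np.uint8)

cv2.imwrite("checkerboard.png", board)
print("square size: %.2f mm" % (CELL * 0.271))        # -> 10.84 mm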

Then, a high-precision mirror was placed on the reference plane, and rotated and moved in turn (Figure 4(a)). Meanwhile, the camera captured the checkerboard pattern at different positions (Figure 4(b)). After 20 images had been collected, Zhang's calibration method was applied to calibrate the internal parameters of the camera, including F, C, and K. On this basis, the relationship between the reference plane and the LCD screen was derived, and the external parameters were calculated.

Zhang's method strikes a balance between the traditional calibration method and the self-calibration method. Unlike the traditional method, it only requires a checkerboard, eliminating the need for a high-precision calibration target. Compared with the self-calibration method, it is accurate and easy to operate. That is why Zhang's method was selected in this research.
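
A minimal sketch of this intrinsic calibration step, using OpenCV's implementation of Zhang's method, is given below. The file-name pattern and the inner-corner count (8 × 10 for a board of 9 × 11 squares) are assumptions for illustration; the paper does not specify its implementation.

import glob
import cv2
import numpy as np

PATTERN = (10, 8)                       # inner corners: (cols-1, rows-1)
SQUARE = 10.84                          # square side length in mm

objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

obj_pts, img_pts = [], []
for fname in glob.glob("calib_*.png"):  # the 20 captured poses
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)

# Returns the camera matrix (focal length F, principal point C),
# distortion coefficients K, and per-view rotation/translation vectors.
rms, cam_mat, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print("reprojection RMS:", rms)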

Figure 4. The calibration of the CCD camera

The geometric calibration focuses on the positions of the reference plane and the LCD screen, aiming to establish the correspondence between the two. First, a checkerboard was placed on the high-precision reflecting surface, treating the latter as the reference surface (Figure 5). Then, the image of the checkerboard was captured by the calibrated camera, and presented on the LCD. After that, the camera was used to capture the checkerboard image on the reference surface that reflects the display. Next, the physical coordinates and pixel coordinates of the corners in the two images were calculated under the pinhole imaging model of the camera. Finally, the rotation vector RV, rotation relationship RR, translation vector TV, and translation relationship TR of the display and the reference plane relative to the camera were derived from the internal parameters of the camera.

Figure 5. The calibration of reference surface and display

2.3 Analysis of wrapped phase and dephasing algorithm

The stripe reflection measurement was implemented in the following steps: obtain the wrapped and absolute phases from transverse and longitudinal stripes; solve the gradient based on the phase-gradient relationship; integrate the gradient to reconstruct the 3D shape of the mirror object.

To obtain the gradient information of the 3D surface, the phase information of the points on the LCD screen, the reflection points of the object surface, and the imaging points of the corresponding image must be acquired through phase unwrapping from the deformed stripe pattern modulated by the measured surface.

The phase shifting method was adopted to extract the required data [8, 9, 13]. This method is more accurate than the traditional Fourier transform in phase calculation. It can achieve high accuracy even when the measurement is complicated by noise, high reflectivity, and varied materials. The measuring accuracy of this method mainly depends on the number of phase-shifting gratings and the quality of the projected gratings (which is determined by the hardware). Assuming that the intensity of the stripe image obeys the standard sine distribution, the light intensity distribution can be described as:

$I(x, y)=I^{\prime}(x, y)+I^{\prime \prime}(x, y) \cos [\varphi(x, y)+\delta]$       (1)

where, I'(x, y) is the mean gray value of the stripe image; I''(x, y) is the gray level modulation of the stripe image; δ is the phase shift; φ(x, y) is the phase principal value of the target stripe pattern.

To compute the values of I'(x, y), I''(x, y) and φ(x, y), at least three stripe images are needed to obtain the phase principal value of the deformed stripe pattern. Considering the high accuracy and noise suppression of standard n-step phase-shifting, this paper applies the standard four-step phase-shifting method to extract the phase principal value. The four phase-shifted stripe images are:

$\begin{aligned} I_{1} &=a+b \cos (\varphi(x, y)) \\ I_{2} &=a+b \cos (\varphi(x, y)+\pi / 2) \\ I_{3} &=a+b \cos (\varphi(x, y)+\pi) \\ I_{4} &=a+b \cos (\varphi(x, y)+3 \pi / 2) \end{aligned}$        (2)

The phase principal value of the grating stripes image is calculated by:

$\varphi(x, y)=\arctan \left(\frac{I_{4}-I_{2}}{I_{1}-I_{3}}\right)$       (3)
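
Eq. (3) reduces to a single arctangent per pixel. The following minimal Python sketch (our illustration, with arbitrary synthetic values for the mean a and modulation b) computes the wrapped phase from the four images; using arctan2 rather than arctan resolves the correct quadrant and handles I1 = I3 safely.

import numpy as np

def wrapped_phase(I1, I2, I3, I4):
    # Phase principal value from four stripe images shifted by pi/2 (Eq. (3)).
    return np.arctan2(I4 - I2, I1 - I3)

# Self-check with synthetic stripes:
phi = np.linspace(-3.0, 3.0, 512)                 # inside (-pi, pi)
imgs = [100 + 80 * np.cos(phi + k * np.pi / 2) for k in range(4)]
assert np.allclose(wrapped_phase(*imgs), phi)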

The traditional way of combining gray value coding and the phase-shift algorithm is simple and easy to implement. However, this approach is sensitive to the color of the object surface, and demands highly accurate image binarization. For example, if the object has rich or dark surface colors, white powder must be sprayed over the surface. Worse still, the projected coded image can only be used for phase unwrapping; gray value coding does not help the accuracy of phase calculation.

Figure 6. The phases corresponding to the multi-frequency heterodyne method

After comprehensive consideration, the multi-frequency heterodyne principle was chosen for phase calculation, thanks to its good stability and accuracy. This principle superimposes two phase functions with different frequencies into a phase function with a lower frequency, and combines the merits of three-frequency unwrapping and the heterodyne method, namely, high measuring efficiency and excellent phase unwrapping accuracy [14-16]. The synthetic period p12 can be calculated by:

$p_{12}=\frac{p_{1} * p_{2}}{p_{1}-p_{2}}$     (4)

where, p1 and p2 are the two fringe periods whose beat gives the synthetic period p12. The phases (PH12, PH23 and PH123) corresponding to the multi-frequency heterodyne method are shown in Figure 6. The results of three-band four-step horizontal unwrapping are displayed in Figure 7. It can be seen that the proposed method achieves the intended phase unwrapping.

Figure 7. The results of three-band four-step horizontal unwrapping
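
The heterodyne step itself is compact. The sketch below (our illustration; the function names are hypothetical, and the beat phase is assumed to lie within its unambiguous range of one synthetic period) forms the beat phase of two wrapped phases and uses it to recover the fringe order of the finer pattern.

import numpy as np

def heterodyne(phi1, phi2):
    # Beat phase of two wrapped phases, itself wrapped to [0, 2*pi);
    # its equivalent period is p12 = p1*p2/(p2 - p1) for p1 < p2.
    return (phi1 - phi2) % (2 * np.pi)

def unwrap_with_beat(phi1, phi_beat, p1, p12):
    # Rescale the beat phase to the fine pattern and round to the
    # nearest integer fringe order k, then add 2*pi*k to phi1.
    k = np.round((phi_beat * p12 / p1 - phi1) / (2 * np.pi))
    return phi1 + 2 * np.pi * k

# Quick check: x spans one beat period of p1 = 14, p2 = 16 -> p12 = 112.
p1, p2 = 14.0, 16.0
p12 = p1 * p2 / (p2 - p1)
x = np.linspace(0, p12, 1000, endpoint=False)
w1 = (2 * np.pi * x / p1) % (2 * np.pi)
w2 = (2 * np.pi * x / p2) % (2 * np.pi)
beat = heterodyne(w1, w2)
assert np.allclose(unwrap_with_beat(w1, beat, p1, p12), 2 * np.pi * x / p1)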

2.4 Local blur analysis and super pixel segmentation

In the 3D reconstruction of mirror objects, the camera in a traditional physical imaging system has a limited depth of field and is therefore sensitive to defocus. This sensitivity, coupled with the shallow focusing depth, blurs the sine stripes during the capture of the deformed patterns. Because the amount of defocus depends on the distance to the scene, the blur varies spatially with scene depth. In addition, the blur map provides important information for depth estimation.

In light of the above, it is particularly important to model and remove the spatially varying defocus blur from the blur of a single stripe pattern [17]. Our method of defocus blur modeling and removal is illustrated in Figure 8. First, the local and global blur maps are estimated from the blur detected on edge information. Then, the deformed blurred stripes are handled by super-pixel segmentation and BM3D deconvolution [18-20].

Figure 8. The workflow of defocusing blur establishment and removal

Based on the analysis of spatially varying local blur, the defocus blur degradation can be modeled as a convolution process. In general, defocusing reduces the edge sharpness and contrast of the image. Once the defocused image is re-blurred, the amount of high-frequency content changes significantly. An edge in a sharp image can be modeled as:

$f(x, y)=A u(x, y)+B$      (5)

where, u(x,y) is the step function; A and B are amplitude and offset, respectively. Based on A and B, the changes of edge information were recalculated, and used to estimate the edge sharpness:

$S=\frac{|\nabla I|-\left|\nabla I_{R}\right|}{|\nabla I|+\varepsilon}$      (6)

where, $|\nabla I|$ and $|\nabla I_R|$ are the gradient magnitudes of the blurred image and the re-blurred image, respectively; ε is a small regularization constant. In addition, we have:

$\left|\nabla I_{R}(x, y)\right|=\left|\nabla\left((A u(x, y)+B) \otimes k\left(x, y, \sigma_{0}\right)\right)\right|=\frac{A}{\sqrt{2 \pi\left(\sigma^{2}+\sigma_{0}^{2}\right)}} e^{-\frac{x^{2}+y^{2}}{2\left(\sigma^{2}+\sigma_{0}^{2}\right)}}$       (7)

where, $\sigma_{0}$ is the standard deviation of the known re-blurring kernel, and σ is that of the unknown defocus blur. The edges were detected by the Canny edge detection operator. The edge sharpness S can then be expressed as $S=1-\sqrt{\frac{\sigma^{2}}{\sigma^{2}+\sigma_{0}^{2}}}$. The blur amounts of all edges were combined into a sparse edge blur map (Figure 9).

Figure 9. The sparse edge blur map
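
A sketch of this edge-based blur estimation, following Eqs. (6)-(7), is given below: the image is re-blurred with a known Gaussian, the sharpness S is computed from the gradient-magnitude ratio, and the relation $S=1-\sqrt{\sigma^{2} /(\sigma^{2}+\sigma_{0}^{2})}$ is inverted for σ. The function name and the choice of Sobel gradients are our assumptions; any edge detector (e.g., skimage.feature.canny) can supply the edge mask.

import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def edge_blur_map(img, edges, sigma0=1.0, eps=1e-6):
    # img: float image; edges: boolean Canny edge mask of the same shape.
    def grad_mag(f):
        return np.hypot(sobel(f, axis=0), sobel(f, axis=1))

    g = grad_mag(img)
    g_r = grad_mag(gaussian_filter(img, sigma0))   # re-blurred image
    S = (g - g_r) / (g + eps)                      # Eq. (6), edge sharpness
    S = np.clip(S, eps, 1 - eps)                   # keep the inversion stable
    ratio = (1.0 - S) ** 2                         # = sigma^2/(sigma^2+sigma0^2)
    sigma = sigma0 * np.sqrt(ratio / (1.0 - ratio))
    sparse = np.zeros_like(img, dtype=float)
    sparse[edges] = sigma[edges]                   # blur amount on edges only
    return sparse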

After the blur amounts of all edges had been estimated, k-nearest neighbors (KNN) matting [21] was introduced to predict the blur amount of the unknown regions, creating a complete blur map. By the KNN, the non-local principle was applied to image matting, and the complete blur map represents the change of scene depth:

$E\left(m^{\prime}\right)=m^{\prime T}(L+\lambda D) m^{\prime}-2 \lambda r^{\prime T} m^{\prime}+\lambda\left|r^{\prime}\right|$       (8)

where, m' and r' are global and local blur maps, respectively; λ is the regularization parameter; L is the Laplacian matrix of sparse affinity matrix A. According to prior knowledge, matrix L can be expressed as:

$L(i, j)=D(i, j)-A(i, j)$      (9)

where, D = diag(r'). For each pixel, the k nearest neighbors were found by the non-local principle of KNN, and the entries of the affinity matrix A were given by the kernel function

$K(i, j)=1-\frac{\|X(i)-X(j)\|}{C}$      (10)

where, X(i) is the feature vector computed from the pixels around i; C is a constant that limits the kernel value within [0, 1].

By the preconditioned conjugate gradient (PCG) method, the global blur map was obtained on MATLAB as the optimal solution of $(L+\lambda D) m^{\prime}=\lambda r^{\prime}$.
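
An equivalent sketch in Python (our illustration of the same linear solve; the paper uses MATLAB's PCG): setting the gradient of Eq. (8) to zero gives the sparse system $(L+\lambda D) m^{\prime}=\lambda r^{\prime}$, solved here with conjugate gradients. Building the KNN-matting Laplacian L from image features is omitted; the function name and λ value are assumptions.

import scipy.sparse as sp
from scipy.sparse.linalg import cg

def propagate_blur(L, r_sparse, lam=1e-3):
    # L: sparse matting Laplacian (Eq. (9)), positive semi-definite;
    # r_sparse: sparse edge blur map flattened to a vector.
    D = sp.diags(r_sparse)                 # D = diag(r'), as defined above
    A = (L + lam * D).tocsr()
    m, info = cg(A, lam * r_sparse, maxiter=2000)  # pass M=... for true PCG
    assert info == 0, "conjugate gradients did not converge"
    return m                               # full (global) blur map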

As mentioned above, the blur amount is closely related to the scene depth: the greater the depth, the larger the blur amount. For most depth-varying images, the pixels within one object have similar depths. To uniformize the local depth and eliminate outliers, the full blur map was divided into several super-pixel modules. The mean blur of all pixels in each super-pixel was taken as the blur amount of that super-pixel:

$\sigma_{n}=\frac{1}{t} \sum_{j \in M_{n}} m_{j}, \quad n \in[1, l]$      (11)

where, n is the serial number of the super-pixel module; $\sigma_{n}$ is the blur amount of the nth super-pixel; $m_{j}$ is the blur amount of pixel j in module $M_{n}$; t is the number of pixels in the module. The number of super-pixels l can be self-selected. The blur kernel of the nth super-pixel module can be defined as:

$k_{n}\left(x, y, \sigma_{n}\right)=\frac{1}{\sqrt{2 \pi} \sigma_{n}} e^{-\frac{x^{2}+y^{2}}{2 \sigma_{n}^{2}}}, \quad n \in[1, l]$      (12)

In this way, spatially varying deblurring is transformed into a locally spatially invariant deblurring problem within each super-pixel. Each super-pixel was then restored separately, and the deblurred regions were combined to obtain a focused image. Figure 10 shows the super-pixel segmentation blur map.

After segmenting the full blur map, the local kernel of each super-pixel was obtained. The fully focused stripe pattern was formed by the deconvolution of each super-pixel module:

$L=\sum_{n=1}^{l} L_{n}^{\prime}(x, y)$      (13)

Figure 10. The super-pixel segmentation blur map
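
The per-super-pixel restoration can be sketched as follows (assuming a recent scikit-image): SLIC segments the image, the blur is averaged inside each module (Eq. (11)), and each region is deconvolved with its own Gaussian kernel (Eq. (12)) before the regions are stitched (Eq. (13)). Wiener deconvolution stands in here for the BM3D deconvolution used in the paper; the balance value and the segment count are assumptions.

import numpy as np
from skimage.segmentation import slic
from skimage.restoration import wiener

def gaussian_psf(sigma, radius=8):
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def deblur_by_superpixel(img, blur_map, n_segments=200):
    # img: grayscale float image in [0, 1]; blur_map: full blur map m'.
    labels = slic(img, n_segments=n_segments, channel_axis=None)
    out = np.zeros_like(img, dtype=float)
    for n in np.unique(labels):
        mask = labels == n
        sigma_n = blur_map[mask].mean()            # Eq. (11)
        restored = wiener(img, gaussian_psf(max(sigma_n, 0.3)), balance=0.1)
        out[mask] = restored[mask]                 # Eq. (13): stitch regions
    return out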

2.5 Integral reconstruction algorithm

In traditional structured light methods (e.g., phase measurement profilometry (PMP)), the height information is obtained directly from the phase information. In PMD, the two perpendicular gradients must be integrated before the height distribution of the object is obtained; the 3D topography then follows from the gradient integral [22]. Hence, the accuracy of the integration governs the quality of surface reconstruction. The relationship between gradient and height can be expressed as:

$g_{x}(x, y)=\frac{\partial z(x, y)}{\partial x}, \quad g_{y}(x, y)=\frac{\partial z(x, y)}{\partial y}$       (14)

Let $g_{x}^{r}(x, y)$ and $g_{y}^{r}(x, y)$ be the gradients of the measured surface along the Xr and Yr directions, respectively. For direct integration on the pixel plane, the gradients were converted into $g_{x}^{c}(x, y)$ and $g_{y}^{c}(x, y)$, and the camera coordinates of each pixel were obtained. For high-precision integration, the common approaches are the Fourier transform and the regional wavefront reconstruction algorithm [23, 24].

The Fourier transform is a typical global integration method. Its advantages include fast computation when reconstructing from massive gradient data, and highly accurate reconstruction of smooth surfaces with small local deformations. However, its successful implementation has a precondition: the boundary must conform to the periodic extension condition in both integration directions. Otherwise, the reconstructed edge will carry a large error, or the reconstruction will simply fail. Moreover, if the measured data have poor integrity, it is difficult to restore the 3D surface of the object in the presence of complex connected regions and non-equidistantly distributed data (Figure 11).

(a) Equal spacing in X and Y directions

(b) Equal spacing in X or Y direction

(c) Unequal spacing in X and Y directions

Figure 11. The distribution of gradient data points

In contrast, the regional wavefront reconstruction method [25] can effectively process the data of arbitrary shape and non-equidistant distribution gradient. This method not only suppresses high-frequency noise, but also achieves high reconstruction accuracy. The regional wavefront can be expressed as:

$\frac{z_{m, n+1}-z_{m, n}}{x_{m, n+1}-x_{m, n}} \approx f_{m, n+\frac{1}{2}}\left(g^{x}\right)=\frac{g_{m, n}^{x}+g_{m, n+1}^{x}}{2}$

$\frac{z_{m+1, n}-z_{m, n}}{y_{m+1, n}-y_{m, n}} \approx f_{m+\frac{1}{2}, n}\left(g^{y}\right)=\frac{g_{m, n}^{y}+g_{m+1, n}^{y}}{2}$       (15)

where, xm,n, ym,n and zm,n are the physical coordinates of pixel (m, n); gx and gy are the gradients at pixel (m, n); $f_{m, n+\frac{1}{2}}\left(g^{x}\right)$ and $f_{m+\frac{1}{2}, n}\left(g^{y}\right)$ are the values of gradient gx at point $\left(m, n+\frac{1}{2}\right)$ and gradient gy at point $\left(m+\frac{1}{2}, n\right)$, respectively.

(a) Equal spacing in X or Y direction

(b) Unequal spacing in X and Y directions

Figure 12. The neighboring pixels

In the case of Figure 12(b), the relationship between height and gradient should be reconsidered:

$\left\{\begin{array}{l}z_{m, n+1}-z_{m, n} \approx \Delta h_{m, n+\frac{1}{2}}^{x}+\Delta h_{m, n+\frac{1}{2}}^{y} \\ z_{m+1, n}-z_{m, n} \approx \Delta h_{m+\frac{1}{2}, n}^{x}+\Delta h_{m+\frac{1}{2}, n}^{y}\end{array}\right.$      (16)

where, $\Delta h_{m, n+\frac{1}{2}}^{x}$ and $\Delta h_{m, n+\frac{1}{2}}^{y}$ are the height increments from point a to point b along the x and y directions, respectively; $\Delta h_{m+\frac{1}{2}, n}^{x}$ and $\Delta h_{m+\frac{1}{2}, n}^{y}$ are the height increments from point a to point c along the x and y directions, respectively:

$\left\{\begin{array}{l}\Delta h_{m, n+\frac{1}{2}}^{x}=f_{m, n+\frac{1}{2}}\left(g^{x}\right)\left(x_{m, n+1}-x_{m, n}\right) \\ \Delta h_{m, n+\frac{1}{2}}^{y}=f_{m, n+\frac{1}{2}}\left(g^{y}\right)\left(y_{m, n+1}-y_{m, n}\right) \\ \Delta h_{m+\frac{1}{2}, n}^{x}=f_{m+\frac{1}{2}, n}\left(g^{x}\right)\left(x_{m+1, n}-x_{m, n}\right) \\ \Delta h_{m+\frac{1}{2}, n}^{y}=f_{m+\frac{1}{2}, n}\left(g^{y}\right)\left(y_{m+1, n}-y_{m, n}\right)\end{array}\right.$       (17)
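
For the equal-spacing case of Eq. (15), the Southwell integration reduces to a sparse linear least-squares problem: each pair of neighboring pixels contributes one equation relating their height difference to the averaged slope. The following sketch (our illustration with an assumed regular grid of pitch dx, dy) solves that system with scipy's LSQR; the height is recovered up to an additive constant.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def southwell(gx, gy, dx=1.0, dy=1.0):
    H, W = gx.shape
    idx = np.arange(H * W).reshape(H, W)
    rows, cols, vals, rhs = [], [], [], []

    def add_eq(i_a, i_b, slope):        # encodes z_b - z_a = slope
        r = len(rhs)
        rows += [r, r]; cols += [i_a, i_b]; vals += [-1.0, 1.0]
        rhs.append(slope)

    for m in range(H):
        for n in range(W - 1):          # horizontal neighbors, Eq. (15a)
            add_eq(idx[m, n], idx[m, n + 1],
                   0.5 * (gx[m, n] + gx[m, n + 1]) * dx)
    for m in range(H - 1):
        for n in range(W):              # vertical neighbors, Eq. (15b)
            add_eq(idx[m, n], idx[m + 1, n],
                   0.5 * (gy[m, n] + gy[m + 1, n]) * dy)

    A = sp.csr_matrix((vals, (rows, cols)), shape=(len(rhs), H * W))
    z = lsqr(A, np.asarray(rhs))[0]     # least-squares heights
    return (z - z.mean()).reshape(H, W) # remove the free constant offset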

3. Experiments and Results Analysis

For experimental verification, two targets were measured: a circular plane mirror with a diameter of 100 mm, and a rectangular high-precision mirror of 70 × 100 mm made of reinforced aluminum with chamfered corners. The results of mirror reconstruction by our method are shown in Figure 13.

(a) Horizontal grating stripes

(b) Vertical grating stripes

(c) Circular reconstruction results

(d) Rectangular reconstruction results

(e) Circular reconstruction error

(f) Rectangular reconstruction error

Figure 13. The results of mirror reconstruction

The left and right columns of Figure 13 are the reconstruction results and errors of the circular and rectangular surfaces, respectively. The fitting plane was obtained by optimizing over the discrete points in space, that is, minimizing the sum of the distances between these points and a candidate plane. Using the prior knowledge that the plane must pass through the mean of the scattered points, the normal vector of the fitting plane was found: through the singular value decomposition (SVD) of the covariance matrix, the singular vector corresponding to the minimum singular value was taken as the normal vector of the plane. The error maps (e) and (f) were obtained by subtracting the respective fitting planes from the reconstructed surfaces (c) and (d). The maximum plane error of the circular mirror was 22 μm, the minimum was -27 μm, and the standard deviation was 9.1 μm. The maximum error of the rectangular mirror was 11 μm, the minimum was -27 μm, and the standard deviation was 7.8 μm. The experimental results show that the precision of phase resolution can be improved by preprocessing the captured deformed stripes, and the reconstruction accuracy of mirror objects can be enhanced by the super-pixel segmentation module, leading to target reconstruction at micron-level precision.
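
The plane-fitting step described above can be sketched compactly (our illustration; the function name is hypothetical): the best-fit plane passes through the centroid of the reconstructed points, and its normal is the right singular vector of the centered point cloud associated with the smallest singular value.

import numpy as np

def plane_fit_error(points):
    # points: (N, 3) array of reconstructed surface points (mm).
    centroid = points.mean(axis=0)
    centered = points - centroid
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]                   # direction of smallest singular value
    dist = centered @ normal          # signed point-to-plane errors
    return dist.max(), dist.min(), dist.std()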

4. Conclusions

Under the framework of PMD, a 3D reconstruction system was developed for highly reflective mirrors. Building on a comparison of the key algorithms of the PMD system, a defocus blur model of the spatially varying deformed stripe pattern was proposed to improve the accuracy of phase measurement, facilitating the subsequent 3D target reconstruction. The proposed method was verified through experiments on the 3D reconstruction of two mirrors: a circular plane mirror with a diameter of 100 mm, and a rectangular reinforced aluminum high-precision mirror of 70 × 100 mm with chamfered corners. The experimental results show that our method achieved micron-level accuracy. Future research will probe deeper into integral reconstruction, and further improve the reconstruction speed and accuracy of our method.

Acknowledgment

This work was supported in part by the Grants from The National Key Research and Development Program of China (Grant No.: 2017YFB1103602), the National Natural Science Foundation of China (Grant No.: 61701284), the Key-Area Research and Development Program of Guangdong Province (Grant No.: 2019B010149002) and Dongguan City Core Technology Research Frontier Project (Grant No.: 2019622101001).  

References

[1] Song, Z., Chung, R., Zhang, X.T. (2013). An accurate and robust strip-edge-based structured light means for shiny surface micro-measurement in 3-d. IEEE Transactions on Industrial Electronics, 60(3): 1023-1032. https://doi.org/10.1109/TIE.2012.2188875

[2] Van der Jeught, S., Dirckx, J.J.J. (2016). Real-time structured light profilometry: A review. Optics and Lasers in Engineering, 87: 18-31. https://doi.org/10.1016/j.optlaseng.2016.01.011

[3] Zhang, S. (2010). Recent progresses on real-time 3d shape measurement using digital fringe projection techniques. Optics & Lasers in Engineering, 48(2): 149-158. https://doi.org/10.1016/j.optlaseng.2009.03.008

[4] Song, Z., Jiang, H., Lin, H., Tang, S. (2017). A high dynamic range structured light means for the 3D measurement of specular surface. Optics and Lasers in Engineering, 95: 8-16. https://doi.org/10.1016/j.optlaseng.2017.03.008

[5] Han, H., Wu, S., Song, Z., Zhao, J. (2019). An accurate phase measuring deflectometry method for 3D reconstruction of mirror-like specular surface. In 2019 2nd International Conference on Intelligent Autonomous Systems (ICoIAS), pp. 20-24. https://doi.org/10.1109/ICoIAS.2019.00010

[6] Song, Z., Song, Z., Zhao, J., Gu, F. (2020). Micrometer-level 3D measurement techniques in complex scenes based on stripe-structured light and photometric stereo. Optics Express, 28(22): 32978-33001. https://doi.org/10.1364/OE.401850

[7] Huang, L., Idir, M., Zuo, C., Asundi, A. (2018). Review of phase measuring deflectometry. Optics and Lasers in Engineering, 107: 247-257. https://doi.org/10.1016/j.optlaseng.2018.03.026

[8] Knauer, M.C., Kaminski, J., Hausler, G. (2004). Phase measuring deflectometry: A new approach to measure specular free-form surfaces. In Optical Metrology in Production Engineering, 5457: 366-376. https://doi.org/10.1117/12.545704

[9] Zuo, C., Huang, L., Zhang, M., Chen, Q., Asundi, A. (2016). Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review. Optics & Lasers in Engineering, 85: 84-103. https://doi.org/10.1016/j.optlaseng.2016.04.022

[10] Zhang, X., Li, C., Wang, W. (2019). Calibration method and system for phase deflection measurement system. China.

[11] Zhao, B.Y. (2015). Research on 3D measurement method of high precision surface structured light. Chengdu: University of Electronic Science and Technology of China.

[12] Han, H., Wu, S. (2019). An accurate calibration means for the phase measuring deflectometry system. Sensors, 19(24): 5377. https://doi.org/10.3390/s19245377

[13] Wu, D., Chen, T., Li, A. (2016). A high precision approach to calibrate a structured light vision sensor in a robot-based three-dimensional measurement system. Sensors, 16(9): 1388. https://doi.org/10.3390/s16091388

[14] Karaali, A., Jung, C.R. (2017). Edge-based defocus blur estimation with adaptive scale selection. IEEE Transactions on Image Processing, 27(3): 1126-1137. https://doi.org/10.1109/TIP.2017.2771563

[15] Zhang, X., Wang, R., Jiang, X., Wang, W., Gao, W. (2016). Spatially variant defocus blur map estimation and deblurring from a single image. Journal of Visual Communication & Image Representation, 35: 257-264. https://doi.org/10.1016/j.jvcir.2016.01.002

[16] Li, R., Feipeng, D. (2018). Local blur analysis and phase error correction method for fringe projection profilometry systems. Applied Optics, 57(15): 4267-4276. https://doi.org/10.1364/AO.57.004267

[17] Ri, S., Takimoto, T., Xia, P., Wang, Q., Tsuda, H., Ogihara, S. (2020). Accurate phase analysis of interferometric fringes by the spatiotemporal phase-shifting method. Journal of Optics, 22(10): 105703. https://doi.org/10.1088/2040-8986/abb1d1

[18] Feng, S., Chen, Q., Zuo, C., Asundi, A. (2017). Fast three-dimensional measurements for dynamic scenes with shiny surfaces. Optics Communications, 382: 18-27. https://doi.org/10.1016/j.optcom.2016.07.057

[19] Zhu, X., Cohen, S., Schiller, S., Milanfar, P. (2013). Estimating spatially varying defocus blur from a single image. IEEE Transactions on Image Processing, 22(12): 4879-4891. https://doi.org/10.1109/TIP.2013.2279316

[20] Liu, Z., Wang, S., Zhang, M. (2018). Improved sparse 3d transform-domain collaborative filter for screen content image denoising. International Journal of Pattern Recognition & Artificial Intelligence, 32(3): 1854006. https://doi.org/10.1142/S021800141854006X

[21] Chen, Q., Li, D., Tang, C.K. (2013). KNN matting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(9): 2175-2188. https://doi.org/10.1109/TPAMI.2013.18

[22] Kewei, E., Li, D., Yang, L., Guo, G., Li, M., Wang, X., Xiong, Z. (2017). Novel method for high accuracy figure measurement of optical flat. Optics and Lasers in Engineering, 88: 162-166. https://doi.org/10.1016/j.optlaseng.2016.07.011

[23] Li, M., Li, D., Jin, C., Kewei, E., Wang, Q. (2017). Improved zonal integration method for high accurate surface reconstruction in quantitative deflectometry. Applied Optics, 56(13): F144. https://doi.org/10.1364/AO.56.00F144

[24] Zuo, C., Feng, S., Huang, L., Tao, T., Yin, W., Chen, Q. (2018). Phase shifting algorithms for fringe projection profilometry: A review. Optics and Lasers in Engineering, 109: 23-59. https://doi.org/10.1016/j.optlaseng.2018.04.019

[25] Southwell, W.H. (1980). Wave-front estimation from wave-front slope measurements. Journal of the Optical Society of America, 70(8): 998-1006. https://doi.org/10.1364/JOSA.70.000998