
SUPERFAST 3D SHAPE MEASUREMENT WITH APPLICATION TO FLAPPING WING MECHANICS ANALYSIS

A Dissertation Submitted to the Faculty of Purdue University

by

Beiwen Li

In Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

August 2017

Purdue University
West Lafayette, Indiana

THE PURDUE UNIVERSITY GRADUATE SCHOOL
STATEMENT OF DISSERTATION APPROVAL

Dr. Song Zhang, Chair, School of Mechanical Engineering
Dr. Xinyan Deng, School of Mechanical Engineering
Dr. Rebecca Kramer, School of Mechanical Engineering
Dr. Nathan Hartman, Department of Computer Graphics Technology

Approved by:
Dr. Jay Gore, Head of the School of Mechanical Engineering Graduate Program

ACKNOWLEDGMENTS

I would like to take this opportunity to thank everyone who has helped me in my academic career. First and foremost, no words can express how grateful I am to my advisor, Prof. Song Zhang. Working with such an innovative and productive leader is the best thing one can imagine while pursuing this highest degree. Prof. Zhang is always the one who gives me insights when I am puzzled, backs me up when I face challenges, and cheers me up when I am down. Without his valuable instruction, it is difficult for me to imagine how I could have succeeded in becoming a tenure-track assistant professor upon graduation.

Moreover, I would like to thank my Ph.D. advisory committee members for their tremendous help: Prof. Xinyan Deng, Prof. Nathan Hartman and Prof. Rebecca Kramer. They have given me great insights throughout this dissertation research and have generously supported my career all the time. I just want to say thank you so much for all your generosity and willingness to help such a young fellow develop his career at an early stage.

Next, I would like to thank our kind and brilliant team members: Yatong An, Tyler Bell, Dr. Junfei Dai, Jae-Sang Hyun, Chufan Jiang, Ziping Liu, Bogdan Vlahov, Duo Wang, and Huitaek Yun. We always work as a big family, supporting each other whenever there is a need. Working with you is one of the best experiences I have ever had in my life!

Then, my special thanks to my former colleague Dr. Yajun Wang. Thank you very much for being kind and patient in giving me instructions when I was still a beginner. Without your tremendous help, it would have taken me much longer to be able to conduct independent research.

Finally, thank you mom and dad for everything! I love you!

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
ABSTRACT
1. INTRODUCTION
    Motivations
    Flapping Flight Study
    Manufacturing Inspection
    Related Works
    Passive Techniques
    Active Techniques
    Non-Contact 3D Microscopy
    Objectives
    Dissertation Organization
2. CALIBRATION METHODS FOR STRUCTURED LIGHT SYSTEM WITH AN OUT-OF-FOCUS PROJECTOR
    Introduction
    Principles
    Fundamentals of Fringe Projection
    Model of Defocused Imaging System
    Phase-Domain Invariant Mapping
    Generic System Calibration Procedures
    System Calibration Procedures Using Optimal Fringe Angle
    3D Reconstruction Based on Calibration
    Experiments
    System Setup
    3D Shape Measurement under Different Defocusing Levels
    Dynamic 3D Shape Measurement
    Measurements Using Optimal Angle in a Well-Designed System
    Conclusion
3. FLEXIBLE CALIBRATION METHOD FOR MICROSCOPIC STRUCTURED LIGHT SYSTEM USING TELECENTRIC LENS
    Introduction
    Principle
    Procedures
    Experiments
    Conclusion
4. SINGLE-SHOT ABSOLUTE 3D SHAPE MEASUREMENT WITH FOURIER TRANSFORM PROFILOMETRY
    Introduction
    Principles
    Fourier Transform Profilometry
    DFP System Model
    Absolute Phase Retrieval Using Geometric Constraints
    Experiment
    Discussion
    Conclusion
5. MOTION INDUCED ERROR REDUCTION COMBINING FOURIER TRANSFORM PROFILOMETRY WITH PHASE-SHIFTING PROFILOMETRY
    Introduction
    Principle
    Fourier Transform Profilometry (FTP)
    Phase Shifting Profilometry (PSP)
    Phase Unwrapping Using Geometric Constraint
    Motion-Induced Error in PSP
    Proposed Hybrid Absolute Phase Computational Framework
    Experiments
    Discussion
    Summary
6. NOVEL METHOD FOR MEASURING DENSE 3D STRAIN MAP OF ROBOTIC FLAPPING WINGS
    Introduction
    Results
    Superfast 3D Imaging of a Flapping Wing Robotic Bird
    Validation of Point Tracking
    Visualization of Strain Map
    Discussion
    Methods
    Superfast 3D Imaging
    Geodesic-Based Point Tracking
    Strain Computation
7. SUMMARY AND FUTURE PROSPECTS
    Summary of Contributions
    Future Prospects
    Flapping Wing Robotics Design
    Engine Inspection
    Flexible Assembly Operations
    Additive Manufacturing (AM)
REFERENCES
VITA
LIST OF PUBLICATIONS

LIST OF TABLES

3.1 Measurement result of two diagonals on the calibration board (in mm)
- Measurement result of a linearly translated calibration target point (in mm)
- Validation of our geometry-based point tracking by comparing with marker-based tracking (Diff is short for Difference)

LIST OF FIGURES

1.1 Schematic diagram of a stereo vision system
1.2 Principles of the structured light technique. (a) Schematic diagram of a structured light system (Reprinted with permission from [34], Elsevier Limited); (b) correspondence detection through finding the intersecting point between distorted phase line and epipolar line
1.3 Two different types of binary coding methods. (a) Illustration of three-bit simple coding; and (b) corresponding gray coding
1.4 1D correspondence detection through binary coding: one camera pixel maps to multiple lines of projector pixels sharing the same binary codeword
1.5 1D correspondence detection through DFP: one camera pixel has a unique absolute phase value, which maps to a unique phase line on the projector absolute phase, and a pixel line on the projector DMD plane
1.6 Example of generating sinusoidal profile by defocusing binary structured patterns (Reprinted with permission from [44], Elsevier Limited). (a) The fringe pattern when the projector is in focus; (b)-(f) gradual resultant fringe patterns when the projector's amount of defocusing increases
1.7 Schematic diagram of focus variation microscopy
1.8 Schematic diagram of confocal microscopy with Nipkow disk
1.9 Schematic diagram of wavefront interference
1.10 Illustration of white light interferometry. (a) PSI; (b) VSI
2.1 Illustration of a well-designed system. (a) System setup; (b) optimal fringe angle (vertical pattern); (c) worst fringe angle (horizontal pattern)
2.2 Example patterns of different fringe angle α with fringe period T = 30 pixels. (a) α = π/4 rad; (b) α = π/3 rad; (c) α = 2π/3 rad; (d) α = 3π/4 rad
2.3 Absolute phase retrieval using three-frequency phase-shifting algorithm. (a) Picture of a spherical object; (b) wrapped phase map obtained from patterns with fringe period T = 18 pixels; (c) wrapped phase map obtained from patterns with fringe period T = 21 pixels; (d) wrapped phase map obtained from patterns with fringe period T = 144 pixels; (e) unwrapped phase map by applying the three-frequency phase-shifting algorithm
2.4 Example of optimal fringe angle determination using a step-height object. (a) Photograph of the step-height object; (b) difference phase map Φ_hd obtained from horizontal patterns; (c) difference phase map Φ_vd obtained from vertical patterns; (d)-(e) cross sections of (b) and (c) respectively to visualize Φ_h and Φ_v
2.5 Pinhole model of the structured light system
2.6 Defocused optical system with the parameter ω
2.7 Illustration of the optical transfer function (OTF) for a defocusing system. (a) Example of OTF with defocusing parameter ω = 2λ; (b) a cross section of OTFs with different defocusing parameter ω
2.8 Illustration of the point spread function (PSF) for a defocusing system. (a) Example of normalized PSF with defocusing parameter ω = 2λ; (b) a cross section of normalized PSFs with different defocusing parameter ω
2.9 Model of a structured-light system with an out-of-focus projector
2.10 Design of calibration board
2.11 Example of captured images. (a) Example of one captured fringe image with horizontal pattern projection; (b) example of one captured fringe image with vertical pattern projection; (c) example of one captured fringe image with pure white image projection
2.12 Example of finding circle centers for the camera and the projector. (a) Example of one calibration pose; (b) circle centers extracted from (a); (c) mapped image for the projector from (b)
2.13 Example of captured images. (a) Example of one captured fringe image with pure white image projection; (b) example of one captured fringe image with a fringe angle of α_opt - π/4; (c) example of one captured fringe image with a fringe angle of α_opt + π/4
2.14 Illustration of coordinate system rotation
2.15 Illustration of three different defocusing degrees. (a) One captured fringe image under defocusing degree 1 (projector in focus); (b) one captured fringe image under defocusing degree 2 (projector slightly defocused); (c) one captured fringe image under defocusing degree 3 (projector greatly defocused); (d)-(f) corresponding cross sections of intensity of (a)-(c)
2.16 Measurement result of a spherical surface under three different defocusing degrees; the rms errors estimated on (d), (h) and (l) are ±71 µm, ±77 µm and ±73 µm respectively. (a) One captured fringe image under defocusing degree 1 (projector in focus); (b) reconstructed 3D result under defocusing degree 1 (projector in focus); (c) a cross section of the 3D result and the ideal circle under defocusing degree 1 (projector in focus); (d) the error estimated based on (b); (e)-(h) corresponding figures of (a)-(d) under defocusing degree 2 (projector slightly defocused); (i)-(l) corresponding figures of (a)-(d) under defocusing degree 3 (projector greatly defocused)
2.17 Real-time 3D shape measurement result. (a) One captured fringe image; (b)-(d) three frames of the video we recorded
2.18 Example of captured fringe images of the spherical object. (a) Original picture of the spherical object; (b) captured image using horizontal fringe pattern (i.e., α = 0); (c) captured image using vertical pattern (i.e., α = π/2); (d) captured image with pattern fringe angle of α = π/4; (e) captured image with pattern fringe angle of α = 3π/4
2.19 Comparison of measurement results of the spherical surface. (a) Reconstructed 3D result using horizontal and vertical patterns; (b)-(c) two orthogonal cross sections of the 3D result shown in (a) and the ideal circles; (d)-(e) the corresponding errors estimated based on (b)-(c), with RMS errors of µm and 86.0 µm respectively; (f)-(j) corresponding results of (a)-(e) using patterns with fringe angles of α_opt - π/4 and α_opt + π/4, the RMS errors estimated in (i) and (j) being 69.0 µm and 72.7 µm respectively
2.20 Measurement results of an object with complex geometry. (a) The original picture of the object; (b) reconstructed 3D result using horizontal and vertical patterns; (c) reconstructed 3D result using patterns with fringe angles of α_opt - π/4 and α_opt + π/4; (d)-(f) corresponding zoom-in views of (a)-(c) within the areas shown in the red bounding boxes
3.1 Model of telecentric camera imaging
3.2 Model of pinhole projector imaging
3.3 Illustration of calibration process. (a) Calibration target; (b) camera image with circle centers; (c) captured image with horizontal pattern projection; (d) captured image with vertical pattern projection; (e) mapped circle center image for projector; (f) estimated 3D position of target points
3.4 Reprojection error of the calibration approach. (a) Reprojection error for the camera (RMS: 1.8 µm); (b) reprojection error for the projector (RMS: 1.2 µm)
3.5 Experimental result of measuring a flat plane. (a) 2D error map, with an RMS error of 4.5 µm; (b) a cross section of (a)
3.6 Experimental result of measuring complex surface geometry. (a) Picture of a ball grid array; (b) reconstructed 3D geometry; (c) a cross section of (b); (d)-(f) corresponding figures for a flat surface with octagon grooves
4.1 Different band-pass filters used for FTP. (a) A smoothed circular window; (b) a Hanning window
4.2 A schematic diagram of a DFP system and the z_min plane
4.3 Illustration of generating continuous phase map assisted by the minimum phase map obtained from geometric constraints (Reprinted with permission from [147], Optical Society of America). (a) Phase maps on the camera space at different depth z (within red dashed window is at z_min; within solid blue window is at z > z_min); (b) corresponding phase maps Φ_min and Φ on the projector space; (c) cross sections of the original phase maps with 2π discontinuities, and continuous phase maps Φ_min and Φ
4.4 Fringe order K determination for different pattern periods with examples of having (a) three and (b) four pattern periods (Reprinted with permission from [147], Optical Society of America)
4.5 Illustration of unwrapping procedure with a real object measurement. (a) The original picture of the measured object; (b) captured single-shot fringe image; (c) wrapped phase map obtained from single-shot FTP; (d) minimum phase map Φ_min; (e) unwrapped phase map; (f) reconstructed 3D geometry
4.6 3D measurement results of a sculpture. (a) Using standard phase-shifting method plus simple binary coding; (b) using single-shot FTP method with smoothed circular shaped band-pass filter; (c) using modified FTP method with smoothed circular shaped band-pass filter; (d)-(e) corresponding results of (b)-(c) using Hanning-window-shaped band-pass filter
4.7 Cross sections of the 3D results corresponding to Fig. 4.6(a), Fig. 4.6(b) and Fig. 4.6(c)
4.8 Difference in geometries. (a) Between FTP and phase-shifting (mean: 0.16 mm, RMS: 1.48 mm); (b) between modified FTP and phase-shifting (mean: 0.20 mm, RMS: 0.64 mm)
4.9 3D measurement results of two objects. (a) Captured fringe image; (b) using standard phase-shifting method plus simple binary coding; (c) using single-shot FTP method with smoothed circular shaped band-pass filter; (d) using modified FTP method with smoothed circular shaped band-pass filter; (e)-(f) corresponding results of (c)-(d) using Hanning-window-shaped band-pass filter
5.1 Illustration of the geometric mapping between the camera image region and the corresponding region on the projector sensor if a virtual plane is positioned at z_min (Reprinted with permission from [147], Optical Society of America)
5.2 Concept of removing 2π discontinuities using the minimum phase map determined from geometric constraints (Reprinted with permission from [147], Optical Society of America). (a) Regions acquired by the camera at different depth z planes: red dashed windowed region where z = z_min and solid blue windowed region where z > z_min; (b) Φ_min and Φ defined on the projector; (c) cross sections of the wrapped phase maps, φ_1 and φ, and their correctly unwrapped phase maps Φ_min and Φ; (d) case for using fringe patterns with four periods
5.3 Simulation of motion-induced measurement error. (a)-(c) high-frequency phase-shifted patterns; (d)-(f) low-frequency phase-shifted patterns; (g) reconstructed 3D shape; (h) a cross section of (g) and the ideal sphere; (i) difference between the reconstructed sphere and the ideal sphere
5.4 The pipeline of the proposed hybrid absolute phase computational framework. The first step is to generate a continuous relative phase map Φ_r using single-shot FTP and spatial phase unwrapping; the second step is to generate an absolute phase map with error Φ_e through PSP and geometric constraint; the final step is to retrieve the absolute phase map by finding the rigid fringe order shift k_s
5.5 Photograph of our experimental system setup
5.6 A cycle of 6 continuously captured fringe images. (a) I_1^h; (b) I_2^h; (c) I_3^h; (d) I_1^l; (e) I_2^l; (f) I_3^l; (g)-(l) close-up views of the left ball in (a)-(f)
5.7 A sample frame of the result using the enhanced two-frequency PSP method [159]. (a) Retrieved absolute phase map; (b) reconstructed 3D geometries
5.8 Continuous relative phase map Φ_r extraction from single-shot FTP. (a) Captured fringe image I_3^h; (b) wrapped phase map obtained from (a) using FTP; (c) separately unwrapped phase map of each ball in (b) with spatial phase unwrapping
5.9 Extraction of absolute phase map with motion-induced error Φ_e from PSP. (a) One of the three phase-shifted fringe patterns; (b) extracted wrapped phase map from I_1^l - I_3^l with motion-induced error before applying geometric constraints; (c) unwrapped phase map Φ_e using geometric constraints; (d) difference fringe order map k_e obtained from the low-frequency phase map shown in Fig. 5.8(c) and the phase map shown in (c) using Eq. (5.13)
5.10 Histogram-based fringe order determination. (a) Histogram of Fig. 5.9(d) for the left ball; (b) histogram of Fig. 5.9(d) for the right ball
5.11 Absolute phase Φ_a retrieval and 3D reconstruction. (a) Retrieved final absolute phase map Φ_a; (b) reconstructed 3D geometries
5.12 Comparison between the proposed computational framework and the PSP-based approach. (a) 3D result from PSP approach; (b) residual error of (a) after sphere fitting (RMS error: 6.92 mm); (c) a cross section of sphere fitting; (d) a cross section of residual error; (e)-(f) corresponding plots from our proposed approach (RMS error: 0.26 mm)
5.13 3D shape measurement of multiple free-falling ping-pong balls. (a) A sample frame image; (b) 3D reconstructed geometry
5.14 Illustration of artifacts induced by texture variation. (a) Zoom-in view of the ball inside the red bounding box of Fig. 5.13(a); (b) zoom-in view of the 3D result inside the red bounding box of Fig. 5.13(b)
6.1 3D measurement results of a flying bird robot with markers and anchor points for validation of our proposed point tracking. These markers are used to compare our point tracking scheme with marker-based point tracking. (a)-(c) Three sample frames of 2D images; (d)-(f) three sample frames of 3D geometries
6.2 3D measurement results of a flying bird robot with anchor points only for strain computation. The markers are removed to reduce potential mechanics changes. (a)-(c) Three sample frames of 2D images; (d)-(f) three sample frames of 3D geometries
6.3 Visualization of tracking for marker point 4 of the left wing. (a)-(c) Overlay of the directly extracted marker points (red dashed lines) with tracked marker points (blue solid lines) using geodesic computation under the X, Y and Z coordinates; (d)-(f) the difference plots of (a)-(c) obtained by taking the difference of curves; the mean differences for X, Y and Z are 0.17 mm, 0.18 mm and 0.02 mm respectively; the RMS differences for X, Y and Z are 0.42 mm, 0.67 mm and 0.29 mm respectively
6.4 Visualization of tracking for marker point 5 of the left wing. (a)-(c) Overlay of the directly extracted marker points (red dashed lines) with tracked marker points (blue solid lines) using geodesic computation under the X, Y and Z coordinates; (d)-(f) the difference plots of (a)-(c) obtained by taking the difference of curves; the mean differences for X, Y and Z are 0.08 mm, 1.17 mm and 0.51 mm respectively; the RMS differences for X, Y and Z are 0.53 mm, 1.24 mm and 1.05 mm respectively
6.5 Two sample frames of strain measurement result
6.6 Finding the correspondence between a point p(t_0) on the initial surface configuration S(t_0) and a point p(t) on the current deformed configuration S(t); p(t) is identified by finding the intersecting point of the curves γ_1(x, y, z) with equal geodesic distance d_1 and γ_2(x, y, z) with equal geodesic distance d_2
6.7 An example of computing the shortest distance using Dijkstra's algorithm. The numbers between two different nodes denote the length of the path connecting the two nodes. (a)-(f) illustrate the computational procedure. Node 1 is the initial anchor point. Each unvisited vertex with the lowest distance becomes the new anchor point, and the old anchor points will not be visited again. Each visited node will have an updated distance value if it is smaller than the previously marked distance value
6.8 Optimization of Dijkstra's algorithm in accordance with our measured 3D data. Each grid point on the left figure denotes one 3D point corresponding to a camera pixel. For each currently visited point P_0, we pick its 7 × 7 neighborhood and search all possible marching directions as illustrated. For each searching path, we pick two more points in addition to the start and end points, and the distance is computed as the arc length of the interpolated cubic Bézier curve
6.9 Notations in differential geometry. (G_α, G_β) and (g_α, g_β) are the base vectors of the tangent planes of the initial configuration S(t_0) and the deformed surface S(t); G_3 and g_3 are the corresponding normal vectors; r(t_0) and r(t) are the position vectors; θ_1 and θ_2 denote the surface parametrization, which coincides with the world coordinates x and y in our research
7.1 An illustration of geometry-based modeling and design
7.2 The closed-loop control pipeline of geometry-based design
7.3 An example of measuring a complex mechanical part (original picture from [1]). (a) A photograph of the part; (b) one of the captured fringe patterns; (c) reconstructed 3D geometry
7.4 The closed-loop control of the initial alignment process of flexible assembly operations

ABSTRACT

Li, Beiwen Ph.D., Purdue University, August 2017. Superfast 3D Shape Measurement with Application to Flapping Wing Mechanics Analysis. Major Professor: Song Zhang, School of Mechanical Engineering.

The goal of measurement is to allow a person to perceive the three-dimensional (3D) world around us, to know about a substance, and to obtain or produce new knowledge. However, performing accurate measurements of dynamically deformable objects has always been a challenging task, one that in fact has huge potential for applications in areas such as manufacturing, robotics, and non-destructive evaluation. Over a decade of effort, scientists have made significant progress in this direction. In particular, the binary defocusing method, which performs fringe analysis on camera-captured, distorted 1-bit binary patterns projected by an out-of-focus projector, has reached unprecedented superfast measurement speeds (e.g., kHz) with high spatial resolution. Despite the speed breakthrough, there are still a number of challenges associated with such technology: (1) requiring an out-of-focus projector brings about difficulty in achieving high measurement accuracy; (2) motion-induced artifacts and errors are still present when measuring a scene with fast-moving objects; (3) it is difficult to perform subsequent analysis (e.g., deformation, mechanics) solely by interpreting those uncorrelated frames of acquired dynamic 3D data.

The first challenge is mainly caused by the difficulty of performing an accurate calibration for a camera-projector system with an out-of-focus projector. To deal with this problem, we have theoretically proved and experimentally validated that a camera pixel can be virtually mapped to a projector pixel in the phase domain even if the projector is substantially out of focus.

Based on this foundation, we developed novel calibration approaches that successfully achieve high accuracy at different scales: in a macro-scale measurement range [e.g., 150 mm (H) × 250 mm (W) × 200 mm (D)], we achieved an accuracy of up to 73 µm; in a medium-scale measurement range [e.g., 10 mm (H) × 8 mm (W) × 5 mm (D)], we achieved an accuracy of up to 10 µm.

The second challenge is quite common in dynamic measurements if the sampling rate is not high enough to keep up with the object motion. Although employing hardware with higher measurement speeds is always a potential solution, it is more desirable to innovate software algorithms to reduce the hardware cost. We developed two different software approaches to deal with the problems associated with object motion: (1) a single-shot absolute 3D recovery method to increase the sampling rate; and (2) a motion-induced error reduction framework. The first approach successfully overcame the difficulty of absolute 3D recovery for existing single-shot fringe analysis methods by taking advantage of the geometric constraints of a camera-projector system. The second approach successfully alleviated motion-induced errors and artifacts by hybridizing the merits of two commonly used fringe analysis techniques: Fourier transform and phase shifting.

Addressing both aforementioned challenges has enabled us to perform simultaneously superfast and high-accuracy 3D shape measurements with reduced motion-induced errors or artifacts. On such a platform, we seek to introduce these technologies to a different field and explore an application. Finally, a particular topic presented in this dissertation is our research on 3D strain analysis of robotic flapping wings. Measuring dense 3D strain maps of flapping wings could potentially produce new knowledge for the design of bio-inspired flapping wing robots. This topic, however, is not well documented so far owing to the lack of an appropriate technological platform to measure dense 3D strain maps of the wings. In this dissertation research, we first measured the dynamic 3D geometries of a robotic bird with rapidly flapping wings (e.g., 25 cycles/second) using a superfast image acquisition rate of 5,000 Hz. Then, we developed a novel geometry-based dense 3D strain measurement framework based on geodesic computation and Kirchhoff-Love shell theory.

Such an innovation could potentially benefit bio-inspired robotics designers by introducing a new method of geometric and mechanical analysis, which could be used for better design of robotic flapping wings in the future.

In summary, this dissertation research substantially advances the research of 3D shape measurement by achieving simultaneously superfast and high-accuracy measurements. Meanwhile, it demonstrates the potential of such technology by developing geometry-based 3D data analytics tools and exploring an application. Contributions of this research could potentially benefit a variety of different fields in both academic and industrial practice, where both speed and accuracy are major concerns and where subsequent mechanics analysis is necessary.

1. INTRODUCTION

Over the past decades, society has been embracing the advancements in 3D measurement technologies in many different areas. A simple example is the Hawk-Eye, which is now a widely used 3D technology in sports games to confirm or overturn a referee's decision. In fact, accurate 3D shape measurements of dynamically moving or deformable objects are of great importance to a variety of other applications as well, including but not limited to manufacturing, robotics and non-destructive evaluation. However, despite the progress that has been made over the past decade, some challenges still remain: 1) how to achieve higher measurement speeds without sacrificing too much accuracy; 2) how to address measurement problems (e.g., errors, artifacts) associated with object motion; 3) how to correlate independently captured 3D data frames in the time domain to generate insightful information (e.g., deformation, mechanics). This dissertation introduces my selected research works that aim at addressing some of the existing limitations associated with the aforementioned problems.

This chapter provides an overview of this dissertation research. The motivations of this research are introduced in Section 1.1. The relevant works are reviewed in Section 1.2. The objectives are elucidated in Section 1.3. The organization of this dissertation is introduced in Section 1.4. Part of this chapter was originally published in the International Journal of Intelligent Robotics and Applications [1] (also listed as journal article [14] in the LIST OF PUBLICATIONS).

1.1 Motivations

This section introduces the motivations of this dissertation research using two example applications: flapping flight study and manufacturing inspection.

1.1.1 Flapping Flight Study

Flapping flight study has been an interesting topic in different fields such as aerospace engineering and bio-inspired robotics design. Specifically, for bio-inspired designers, the deformation and mechanics of insect wings contain important information for reproducing the aerodynamics of their biological counterparts. Current wing morphology studies mainly use a technology called high-speed stereo videography [2], which requires two or three high-speed cameras to record a video sequence of a flapping flight process. Some sparsely arranged fiducial marker points are necessary to provide feature points in multi-view camera images for joint-based topological reconstruction. However, given the fact that only those sparse marker points are precisely measured, it is difficult to obtain full-field strain maps of the wings. To realize dense 3D strain measurement of flapping wings with high-speed motion, it would be of great help if one could develop technologies that perform 1) superfast, high-resolution 3D shape measurement; 2) high-accuracy 3D topological reconstruction; and 3) 3D geometric analysis for strain measurements.

1.1.2 Manufacturing Inspection

Dimensional inspection of manufactured workpieces plays an important role in industrial quality control [3, 4]. 3D inspection tools are particularly in demand where a complete 3D model of the inspected object surface needs to be analyzed. For some manufacturers, such as the automotive and aerospace industries, a dense 3D measurement is required for appropriate quality control of different manufactured parts [4]. Some parts (e.g., aircraft engines) require measurement accuracies at the micrometer (µm) level [5]. Coordinate measuring machines (CMMs), which have evolved from slow laboratory systems to automated factory inspection systems, have become the extensively adopted tools for highly accurate 3D measurements [5, 6], with accuracies at the sub-µm level (e.g., Zeiss XENOS).

However, for on-line inspection of products on modern production lines with fast moving speeds, the combination of speed and resolution requirements has hit the mechanical limits of conventional CMMs [5]. In this case, depending on the required measurement range, a high-speed 3D shape measurement system with µm accuracy could be of great help for industrial applications.

The aforementioned two example applications provide us with a glimpse of the significance of 3D shape measurement in addressing practical problems of different fields. For different applications, one needs to consider a combination of different factors, such as speed, resolution, accuracy and measurement range. The next section provides a summary of related works in 3D shape measurement.

1.2 Related Works

Considering the aforementioned speed limitation of contact 3D measurement techniques such as CMMs, it would be desirable to use non-contact optical techniques where speed is a major concern. This section reviews the works related to non-contact optical means of 3D shape measurement, including passive techniques, active techniques and non-contact 3D microscopy.

1.2.1 Passive Techniques

Depth from defocus (DFD): The fundamental principle of the DFD technique is based on the fact that an image, formed by a typical optical system, will be focused at a certain distance along the optical axis, while it gets out of focus at other distances [7]. In fact, the amount of defocusing blur gets more and more significant as the imaging plane moves further away from the focal plane. By quantifying the amount of defocusing and relating it to depth information, the surface profile of the imaged object can be recovered. To quantify the relationship between the amount of blur and the depth, most approaches require two or more images captured at different focus or aperture settings [8-12]. Some other approaches are able to retrieve a 3D profile from a single defocused image based on more sophisticated models [13-16].

Since the depth from defocus technique does not require any illumination device or active focusing control, it has the advantage of a simple system setup. The major drawback of this technology is its requirement of strong texture variation for blurring analysis, which usually results in a limited resolution [17].

Stereo vision: Stereo vision [18] is another image-based technique which basically imitates the human vision system. Figure 1.1 schematically shows a stereo vision system that captures 2D images of a real-world scene from two different perspectives. The geometric relationships between a real-world 3D point P and its projections P_L and P_R on the 2D camera image planes form a triangle, and thus triangulation is used for the 3D reconstruction. In order to use triangulation, one needs to know (1) the geometric properties of the two camera imaging systems, (2) the relationship between these two cameras, and (3) the precise correspondence between a point on one camera and a point on the other camera. The first two can be established through calibration (details are discussed in Chapter 2). The correspondence establishment is usually not easy solely from two camera images. To simplify the correspondence establishment problem, a geometric relationship, the so-called epipolar geometry, is often used [19, 20]. The epipolar geometry essentially constructs a single geometric system with the two known focal points O_L and O_R of the lenses to which all image points should converge. For a given point P in the 3D world, its image point P_L together with points O_L and O_R forms a plane, called the epipolar plane. The intersection line P_R E_R of this epipolar plane with the imaging plane of the right-hand camera is called the epipolar line (red line in Fig. 1.1). For point P_L, all possible corresponding points on the right-hand camera should lie on the epipolar line P_R E_R. By establishing the epipolar constraint, the correspondence point searching problem becomes 1D instead of 2D, and thus more efficient and potentially more accurate.
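
To make the 1D search concrete, the following minimal numpy sketch (not part of the original dissertation) computes the epipolar line in the right image for a given left-image pixel from a known fundamental matrix F and samples candidate pixel locations along that line; the function names, the made-up matrix F, and the assumption that F is already available from calibration are illustrative only.

```python
import numpy as np

def epipolar_line(F, p_left):
    """Epipolar line l' = F @ p in the right image for a left-image pixel p.

    F      : 3x3 fundamental matrix from stereo calibration (assumed known).
    p_left : (x, y) pixel coordinates in the left image.
    Returns (a, b, c) with the line defined by a*x + b*y + c = 0.
    """
    x, y = p_left
    a, b, c = F @ np.array([x, y, 1.0])
    return a, b, c

def candidates_on_line(a, b, c, width):
    """Integer-x sample positions along the epipolar line across the image."""
    xs = np.arange(width, dtype=float)
    ys = -(a * xs + c) / b          # assumes a non-vertical line (b != 0)
    return np.stack([xs, ys], axis=1)

# The correspondence search for p_left is then 1D: a small patch around
# p_left is compared only against patches centered on these candidates.
F = np.array([[0.0, -1e-6, 1e-3],
              [1e-6, 0.0, -2e-3],
              [-1e-3, 2e-3, 1.0]])   # made-up fundamental matrix, for illustration
print(candidates_on_line(*epipolar_line(F, (320, 240)), width=5))
```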

Yet, even with this epipolar constraint, it is often difficult to find a correspondence point if the object surface texture does not vary drastically locally and appear random globally. For example, if two cameras capture a polished metal plate, the images captured from the two different cameras do not provide enough cues to establish correspondence from one camera image to the other.

Figure 1.1. Schematic diagram of a stereo vision system.

1.2.2 Active Techniques

Time of flight (TOF): The TOF 3D shape measurement approach essentially emulates the mechanism of a bat's ultrasonic system [21, 22]. The laser source emits light pulses onto the object, and a laser range finder is used to detect the reflected pulse of light. The distance to a surface is calculated based on the round-trip traveling time of the emitted light pulse. Suppose the speed of light, c, and the round-trip time, t, are known; the distance is then equal to c t / 2. The majority of TOF technologies modulate the light source at a constant frequency and measure the phase difference before and after the round trip; the depth is then determined from the phase difference, which is proportional to the depth [23].
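
As a rough numerical illustration of the two TOF principles just described, the sketch below converts a round-trip time into distance for a pulsed system and converts a modulation phase lag into depth for a continuous-wave system; the 30 MHz modulation frequency and the function names are arbitrary example choices, not values taken from this dissertation.

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def tof_depth_pulsed(round_trip_time_s):
    """Pulsed TOF: distance = c * t / 2."""
    return C * round_trip_time_s / 2.0

def tof_depth_cw(delta_phi_rad, f_mod_hz):
    """Continuous-wave TOF: the reflected signal lags the emitted one by
    delta_phi = 2*pi*f_mod*(2d/c), so d = c*delta_phi / (4*pi*f_mod).
    The result is unambiguous only within half the modulation wavelength."""
    return C * delta_phi_rad / (4.0 * np.pi * f_mod_hz)

# Example: a phase lag of pi/2 at a 30 MHz modulation frequency
# corresponds to a depth of about 1.25 m.
print(tof_depth_cw(np.pi / 2, 30e6))
```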

The main advantage of the TOF scanner is its compact design, which is achievable because the light source and the sensor share the same viewing angle. Some recent advances have been devoted to improving its scanning efficiency for different applications [24-26]. There are also commercial products which have been made available for real-time 3D range scanning. The recent Microsoft Kinect 2 technology has achieved real-time measurement capabilities (30 Hz) with a resolution of 512 × 424 pixels. However, due to the extremely fast speed of light, it is difficult to achieve a high depth resolution (e.g., a timing sensor resolution of 3.3 picoseconds is required to resolve 1.00 mm in depth).

Laser triangulation: Laser triangulation is one of the commonly adopted 3D shape measurement technologies [27-31] due to its simplicity and robustness [32]. It typically includes a laser emitter, a detector and a camera. The laser light source shines a laser dot, or stripes, onto the object, and the reflected light is sensed by the detector. The depth information can be retrieved by triangulating the laser emitting point, the reflecting point on the object surface, and the imaged point on the camera. A laser triangulation 3D scanner is capable of measuring large-scale objects (e.g., a car body) with millimeter-scale accuracy [e.g., Mensi (Atlanta, GA) SOISIC]. However, the laser triangulation method is usually quite slow for measuring an area with complex surface geometry.

Structured light: Structured light techniques actually evolved from the aforementioned stereo vision method. Structured light techniques resolve the correspondence finding problem of stereo vision techniques by replacing one of the cameras with a projector [33]. Instead of relying on the natural texture of the object surface, the structured light method uses a projector to project pre-designed structured patterns onto the scanned object, and the correspondence is established by using the actively projected pattern information. Figure 1.2(a) illustrates a typical structured light system using a phase-based method, in which the projection unit (D), the image acquisition unit (E), and the object (B) form a triangulation base. The projector illuminates one-dimensionally varying encoded stripes onto the object. The object surface distorts the straight stripe lines into curved lines. A camera captures the distorted fringe images from another perspective.

Following the same epipolar geometry as shown in Fig. 1.2(b), for a given point P in the 3D world, its projector image point P_L lies on a unique straight stripe line on the projector sensor; on the camera image plane, the corresponding point P_R is found at the intersection of the captured curved stripe line with the epipolar line.

Figure 1.2. Principles of the structured light technique. (a) Schematic diagram of a structured light system (Reprinted with permission from [34], Elsevier Limited); (b) correspondence detection through finding the intersecting point between the distorted phase line and the epipolar line.

As discussed, in order to perform 3D reconstruction through triangulation, at least a one-dimensional mapping (or correspondence) is required. Namely, we need to map a point on the camera to a line (or a predefined curve) on the projector. There are different techniques to provide cues for this one-dimensional correspondence detection; some well-studied existing methods include the binary coding method, digital fringe projection (DFP) and the binary defocusing technique.

(1) Binary coding method

The binary coding method is a straightforward approach to provide cues for correspondence detection. Essentially, a unique value is assigned to each unit (e.g., a stripe or a line) that varies in one direction. The unique value here is often regarded as the codeword. The codeword can be represented by a sequence of black (intensity 0) or white (intensity 1) structured patterns through a certain coding strategy [33]. There are two commonly used binary coding methods: simple coding and gray coding.

Figure 1.3(a) illustrates a simple coding example. The combination of a sequence of three patterns, as shown on the left of Fig. 1.3(a), produces a unique codeword for each stripe made up of 1s and 0s (e.g., 000, 001, ...), as shown on the right of Fig. 1.3(a). The projector sequentially projects this set of patterns, and the camera captures the corresponding patterns distorted by the object. If these three captured patterns can be properly binarized (i.e., converting camera grayscale images to 0s and 1s), for each pixel the sequence of 0s and 1s from these three images forms the codeword that is defined in the projector space. Therefore, by using these images, the one-to-many mapping can be established and thus 3D reconstruction can be carried out.

Gray coding is another way of encoding information. Figure 1.3(b) illustrates an example of using three images to represent the same amount of information as simple coding. The major difference between gray coding and simple coding is that, at a given location, gray coding only allows one bit of codeword status change (e.g., a flip from 1 to 0 or 0 to 1 on one pattern), yet the simple coding method does not have such a requirement. For the example illustrated in the red bounding boxes of Fig. 1.3(a) and 1.3(b), simple binary coding has three bit changes while gray coding only has one. Fewer changes at a point means less chance of errors, and thus gray coding tends to be more robust for codeword recovery.
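
The one-bit-change property of Gray coding can be seen directly in a small pattern generator and decoder. The sketch below is illustrative only (the function names are mine, and the binarization of real camera images is assumed to have been done already): it builds simple binary and Gray-coded stripe patterns for the same set of codewords and recovers the stripe index from a stack of binarized images.

```python
import numpy as np

def stripe_codewords(num_bits, width):
    """Assign each projector column a stripe index (the codeword)."""
    return np.arange(width) * (2 ** num_bits) // width

def simple_binary_patterns(num_bits, width):
    """num_bits patterns; pattern b is bit b (MSB first) of each codeword."""
    code = stripe_codewords(num_bits, width)
    return np.array([(code >> (num_bits - 1 - b)) & 1 for b in range(num_bits)])

def gray_code_patterns(num_bits, width):
    """Same codewords, Gray-coded: adjacent stripes differ in exactly one bit."""
    code = stripe_codewords(num_bits, width)
    gray = code ^ (code >> 1)              # binary-reflected Gray code
    return np.array([(gray >> (num_bits - 1 - b)) & 1 for b in range(num_bits)])

def decode_gray(captured_bits):
    """Recover codewords from a stack of binarized camera images (0/1)."""
    num_bits = captured_bits.shape[0]
    gray = np.zeros(captured_bits.shape[1:], dtype=np.int64)
    for b in range(num_bits):              # reassemble the Gray integer, MSB first
        gray = (gray << 1) | captured_bits[b]
    binary, shift = gray.copy(), gray >> 1
    while shift.any():                     # Gray -> binary via prefix XOR
        binary ^= shift
        shift >>= 1
    return binary

# Round-trip check on ideal (noise-free) patterns.
bits = gray_code_patterns(7, 800)
assert np.array_equal(decode_gray(bits), stripe_codewords(7, 800))
```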

The binary coding methods are simple and rather robust since only two binary states are used for any given point, but the achievable spatial resolution is limited to be larger than a single camera or projector pixel. This is because the narrowest stripes must be larger than one pixel in the projector space to avoid sampling problems, and each captured stripe width also needs to be larger than one camera pixel to be able to properly determine the binary state from the captured image. Figure 1.4 illustrates that the decoded codewords are not continuous, but discrete with a stair width larger than one pixel. The smallest achievable resolution is the stair width, since no finer correspondence can be precisely established. The difficulty of achieving pixel-level spatial resolution limits the use of binary coding methods for high-resolution and high-accuracy measurement needs.

Figure 1.3. Two different types of binary coding methods. (a) Illustration of three-bit simple coding; and (b) corresponding gray coding.

Figure 1.4. 1D correspondence detection through binary coding: one camera pixel maps to multiple lines of projector pixels sharing the same binary codeword.

(2) Digital fringe projection

Digital fringe projection (DFP) methods resolve the limitation of the binary coding method and achieve camera-pixel spatial resolution by using continuously varying structured patterns instead of binary patterns. Specifically, sinusoidally varying structured patterns are used in the DFP system, and these sinusoidal patterns are often regarded as fringe patterns. Therefore, the DFP technique is a special kind of structured light technique that uses sinusoidal, or fringe, patterns.

The major difference of the DFP technique lies in the fact that it does not use intensity for coding but rather uses phase. One of the most popular methods to recover phase is the phase-shifting-based fringe analysis technique [35-37]. For example, a three-step phase-shifting algorithm with equal phase shifts can be mathematically formulated as

I_1(x, y) = I'(x, y) + I''(x, y) cos(φ - 2π/3),   (1.1)
I_2(x, y) = I'(x, y) + I''(x, y) cos(φ),   (1.2)
I_3(x, y) = I'(x, y) + I''(x, y) cos(φ + 2π/3).   (1.3)

Here I'(x, y) denotes the average intensity, I''(x, y) stands for the intensity modulation, and φ is the phase to be extracted. The phase can be computed by simultaneously solving Eqs. (1.1) - (1.3):

φ(x, y) = tan⁻¹ [ √3 (I_1 - I_3) / (2 I_2 - I_1 - I_3) ].   (1.4)

The extracted phase φ ranges from -π to +π with 2π discontinuities due to the nature of the arctangent function. To obtain the absolute phase without the 2π discontinuities, a temporal phase unwrapping algorithm [38-40] is necessary, which detects the 2π discontinuities and removes them by adding or subtracting an integer number k(x, y) of 2π, i.e.,

Φ(x, y) = φ(x, y) + k(x, y) × 2π.   (1.5)

The absolute phase map can be used as the codeword to establish the one-to-many mapping in the same way as in the binary coding methods. However, since the phase map obtained here is continuous, not discrete, the mapping (or correspondence) can be established at the camera-pixel level, as shown in Fig. 1.5, which greatly improves the measurement resolution compared to binary coding.

Figure 1.5. 1D correspondence detection through DFP: one camera pixel has a unique absolute phase value, which maps to a unique phase line on the projector absolute phase map, and thus to a pixel line on the projector DMD plane.
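
Equations (1.1) - (1.5) translate almost directly into code. The sketch below is a minimal numpy illustration, not the dissertation's implementation: it recovers the wrapped phase of Eq. (1.4) with arctan2 and applies the temporal unwrapping of Eq. (1.5), assuming the fringe order k(x, y) has been obtained elsewhere (e.g., from Gray-coded patterns); the synthetic phase ramp at the end is only a self-check.

```python
import numpy as np

def wrapped_phase(I1, I2, I3):
    """Wrapped phase from three fringe images with -2pi/3, 0, +2pi/3 shifts,
    following Eq. (1.4); result lies in (-pi, pi]."""
    return np.arctan2(np.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)

def unwrap_with_fringe_order(phi, k):
    """Temporal unwrapping, Eq. (1.5): Phi = phi + 2*pi*k, with integer k."""
    return phi + 2.0 * np.pi * k

# Synthetic check: a known absolute phase ramp is recovered exactly.
x = np.linspace(0, 6 * np.pi, 1000)          # absolute phase (3 fringe periods)
Ip, Ipp = 128.0, 100.0                       # average intensity and modulation
I1 = Ip + Ipp * np.cos(x - 2 * np.pi / 3)
I2 = Ip + Ipp * np.cos(x)
I3 = Ip + Ipp * np.cos(x + 2 * np.pi / 3)
phi = wrapped_phase(I1, I2, I3)
k = np.floor((x - phi) / (2 * np.pi) + 0.5)  # fringe order (known in this toy case)
assert np.allclose(unwrap_with_fringe_order(phi, k), x)
```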

29 11 Φ CCD images Absolute phase (camera) Absolute phase (projector) u p DMD plane Figure D correspondence detection through DFP: one camera pixel maps has a unique absolute phase value, which maps to a unique phase line on projector absolute phase, and a pixel line on the projector DMD plane. the maximum 8-bit image refreshing rate of the projector (typically 120 Hz) [34]. Moreover, commercial projectors tend to have nonlinear response to input grayscale values to accommodate human vision, which is usually called nonlinear gamma effect. This nonlinear gamma effect will result in error within the calculated phase map. (3) Binary defocusing technique It is always desirable to achieve higher-speed 3D image acquisition to reduce motion artifacts and to more rapidly capture changing scenes. Lei and Zhang [42] developed the 1-bit binary defocusing method to break the speed bottleneck of high-speed 3D imaging methods. Using 1-bit binary patterns reduces the data transfer rate and thus making it possible to achieve a 3D imaging rate faster than 120 Hz with the same DLP technology. For example, the DLP Discovery platform introduced by Texas Instruments can switch binary images at a rate up to over 30,000 Hz, and thus khz 3D imaging is feasible [43]. This method is based on the nature of defocusing: evenly squared binary patterns appear to be sinusoidal if the projector lens is properly defocused. Therefore, instead of directly projecting 8-bit sinusoidal patterns, we can approximate sinusoidal profiles through projecting 1-bit binary patterns and properly defocusing the projector. Figure 1.6 shows some captured fringe images with the projector at different defocusing levels. As one can see, when the projector is in focus, as shown in Fig. 1.6(a), it

Figure 1.6 shows some captured fringe images with the projector at different defocusing levels. As one can see, when the projector is in focus, as shown in Fig. 1.6(a), the pattern preserves apparent squared binary structures, but when the projector is properly defocused [see Fig. 1.6(c)], the squared binary structure appears to have an approximately sinusoidal profile. Without a doubt, the sinusoidal structures will gradually diminish if the projector is overly defocused, which results in low fringe quality. Once sinusoidal patterns are generated, a phase-shifting algorithm can be applied to compute the phase and thus the 3D geometry after system calibration (see details of calibration in Chapter 2).

Figure 1.6. Example of generating a sinusoidal profile by defocusing binary structured patterns (Reprinted with permission from [44], Elsevier Limited). (a) The fringe pattern when the projector is in focus; (b)-(f) gradual resultant fringe patterns as the projector's amount of defocusing increases.

1.2.3 Non-Contact 3D Microscopy

Focus variation microscopy: Focus variation microscopy [45] is a measurement technology that performs surface profilometry based on the variation of focus. An illustration of this technology is shown in Fig. 1.7. The light source is transmitted through a beam splitter and the objective to the sample surface; the light is reflected into different directions depending on the topology and the diffusing properties of the sample surface. Part of the reflected light is collected by the objective and directed onto the camera sensor. A driving unit is used to search for the best focusing location of the optical element pointing to the specimen. A depth map of the specimen can be generated by performing this process at different lateral positions [46].

Figure 1.7. Schematic diagram of focus variation microscopy.

This technology has enjoyed many applications within the cutting tool industry, the precision manufacturing field, etc. [47]. There are also commercial products available (e.g., BrassTrax-3D, Alicona Infinite Focus) which can achieve a depth resolution of 10 nm using this technique. Focus variation microscopy has a good capability of measuring steeply sloped surfaces [47]. However, the measurement speed of this technology can be slow since the specimen needs to be translated to analyze the focus variation.

Confocal microscopy: Confocal microscopy [48] is another focus-based scanning technology and has been frequently adopted in 3D surface analysis due to its good capability of depth discrimination [49]. A schematic diagram of confocal microscopy is shown in Fig. 1.8. For a single point scan, if the point source, the illuminated surface point and the point on the detector are conjugate points of the entire optical system, the detected light intensity reaches its maximum.

If the illuminated point is shifted along the optical axis, the detected light intensity is significantly reduced, and thus the axial position on the sampled surface can be determined. To realize parallel scanning, the light needs to be redirected onto different spatial points of the sampled surface with different kinds of spatial filters. Over the years, a variety of parallel scanning schemes have been developed, using either Nipkow disks [50, 51], micro-lens arrays [49, 52] or digital micromirror devices (DMD) [53].

Figure 1.8. Schematic diagram of confocal microscopy with Nipkow disk.

Confocal microscopy can achieve a very high depth resolution of several nm [e.g., NanoFocus (Oberhausen, Germany) µsurf series], while the speed of the measurement is constrained by how fast the mechanical moving part (e.g., the Nipkow disk) moves.

Interferometric microscopy: Interferometry is another popular non-contact technique used within industry, especially for measuring small displacements. Essentially, this technology follows the superposition principle of the interference of two coherent waves. As shown in Fig. 1.9, if two coherent light sources S_1 and S_2 with equal frequency interfere with each other, a resultant pattern can be generated on the interference plane whose phase is determined by the path length difference of the two interfering beams. This generated pattern can be used for deformation analysis. Over a long history, many different kinds of interferometers have been developed with different optical setups, including Michelson [54], Mirau [55], Fizeau [56], Mach-Zehnder [57, 58], Fabry-Pérot [59] and Twyman-Green [60] interferometers.

Figure 1.9. Schematic diagram of wavefront interference.

Among interferometric techniques, two widely used approaches are phase shifting interferometry (PSI) and vertical scanning interferometry (VSI) [61]. Figure 1.10(a) illustrates the basic principle of PSI, in which a translation unit [e.g., a piezoelectric transducer (PZT) stage] is used to translate the reference mirror such that the path length difference between the two interfering light beams is changed to produce phase-shifted fringes. The phase-shifted fringes can be used to analyze the phase map and thus the depth variation on the scanned surface. Figure 1.10(b) shows the basic principle of VSI, in which the sample is translated by a translation stage (e.g., a PZT stage). The surface heights are determined by recording the position of the peak contrast or peak fit along the vertical axis [62].

Currently, there are many interferometers that use a laser as the light source. This is owing to the fact that a laser has a long coherence length and thus makes it easy to obtain the interference fringes [63]. PSI is the commonly adopted technique in laser-based interferometry systems. Over the years, numerous approaches have been developed which mainly generate phase shifts by modulating different kinds of mechanical moving parts [64-66]. This method is widely adopted in 3D displacement measurement for strain and stress analysis [67-70].

Figure 1.10. Illustration of white light interferometry. (a) PSI; (b) VSI.

However, the ease of producing fringes could also be a drawback for laser interferometry, as any stray reflection could result in spurious fringes [63]. Meanwhile, laser speckle noise could reduce the accuracy of the measurement. Thus, scientists have also drawn attention to white light interferometry. Although making the optical paths match could be a challenging task, this requirement also makes the white light interference pattern have high contrast [63]. Over the years, this technology has been incorporated with different types of classical interferometers, including Michelson-type [71-74], Mirau-type [75, 76] and Twyman-Green [77] interferometers. Moreover, there are currently commercial surface profilers using white light interferometry with both VSI (e.g., Nikon BW-series) and PSI (e.g., Zygo NexView) available. Both of them can reach a very high depth resolution of around 0.1 nm with a field of view of several mm. In general, PSI performs better than VSI when analyzing smooth surfaces, yet VSI performs better in analyzing surfaces with step-height discontinuities [78, 79].
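
To illustrate the VSI idea of locating the peak of the interference signal along the vertical scan, the sketch below assigns each pixel the scan position at which a crude contrast measure peaks and refines it with a three-point parabolic fit. It is a conceptual toy, not a description of any commercial profiler, and the envelope estimate in particular is deliberately simplistic.

```python
import numpy as np

def vsi_height_map(stack, z_positions):
    """Vertical scanning interferometry, minimal sketch.

    stack       : (num_steps, H, W) intensity images captured while the sample
                  is translated along the optical axis.
    z_positions : (num_steps,) axial position of each frame (uniform steps assumed).
    Returns an (H, W) height map located where the local fringe signal peaks,
    refined by a parabolic fit around the discrete peak.
    """
    z_positions = np.asarray(z_positions, dtype=float)
    # crude per-frame contrast proxy: squared deviation from the through-focus mean
    envelope = (stack - stack.mean(axis=0)) ** 2
    peak = envelope.argmax(axis=0)                      # integer peak index per pixel
    peak = np.clip(peak, 1, len(z_positions) - 2)       # keep room for the 3-point fit

    rows, cols = np.indices(peak.shape)
    y0 = envelope[peak - 1, rows, cols]
    y1 = envelope[peak, rows, cols]
    y2 = envelope[peak + 1, rows, cols]
    denom = y0 - 2 * y1 + y2
    offset = np.where(denom != 0, 0.5 * (y0 - y2) / denom, 0.0)  # sub-step refinement

    dz = z_positions[1] - z_positions[0]
    return z_positions[peak] + offset * dz
```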

Structured light microscopy: The above-introduced microscopic 3D measurement technologies can all achieve very high depth resolutions of several nm, yet their field of view (FOV) is typically within several mm for a single scan. Therefore, scientists have been attempting to transport the structured light technology into microscopic 3D measurements to increase the FOV of a single scan. Structured light microscopy has the advantage of fast measurement capabilities with field sizes from 1 mm² to several cm² [80] for a single scan. Owing to the aforementioned merits of structured light technology, it has the potential to achieve high accuracy while maintaining a relatively large lateral FOV. To develop structured light microscopy, scientists first attempted to insert different types of projection units, such as digital-micromirror-device (DMD), liquid-crystal-display (LCD) or liquid-crystal-on-silicon (LCoS) chips, into one channel of a stereo microscope [80-84]. Another direction is to modify the optics of a regular structured light system with small-FOV and long-working-distance (LWD) lenses [85-89]. All of these technologies that use non-telecentric lenses have the advantage that well-developed calibration strategies exist. However, the depth of focus (DOF) of the measurement is typically limited to the order of sub-millimeters [90]. Recently, incorporating telecentric lenses within a structured light system has been regarded as an alternative approach to realize high-accuracy measurement owing to their unique properties of orthographic projection, low distortion, magnification constancy and large DOF (e.g., several mm) [90, 91]. However, the calibration of such lenses becomes a challenging task since the telecentricity results in an insensitivity to depth variation along the optical axis. Existing research works, though achieving good accuracies, either required an expensive precision linear translation stage [91] or required strong constraints on system parameters [90, 92]. Therefore, a simple, flexible and accurate calibration method for telecentric lenses is still in demand.
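
The depth insensitivity that makes telecentric lenses hard to calibrate can be seen by contrasting the two projection models in a few lines of code. The sketch below, with made-up intrinsic parameters and magnification, shows that moving a point along the optical axis changes its pinhole image but leaves its telecentric (orthographic) image essentially unchanged.

```python
import numpy as np

def pinhole_project(K, R, t, X):
    """Pinhole model: x ~ K [R | t] X; image coordinates scale with 1/Z."""
    Xc = R @ X + t
    x = K @ Xc
    return x[:2] / x[2]

def telecentric_project(m, R, t, X):
    """Object-space telecentric model: orthographic projection with a fixed
    magnification m; image coordinates are insensitive to depth Z."""
    Xc = R @ X + t
    return m * Xc[:2]

# Made-up parameters for illustration only.
K = np.array([[2400.0, 0.0, 320.0], [0.0, 2400.0, 240.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 500.0])
P = np.array([5.0, 3.0, 0.0])
for dz in (0.0, 2.0):                      # translate the point along the optical axis
    Q = P + np.array([0.0, 0.0, dz])
    print(pinhole_project(K, R, t, Q), telecentric_project(8.0, R, t, Q))
```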

1.3 Objectives

In particular, the major focuses of this dissertation research are the following:

Develop simultaneous superfast (e.g., kHz) and high-accuracy 3D shape measurement. Among all the aforementioned 3D shape measurement technologies, the binary defocusing technology has an apparent speed advantage in that it can realize superfast measurement speeds over kHz. However, as previously introduced, the binary defocusing technology requires the projector to be substantially out of focus, which brings about challenges in achieving accurate system calibration and thus accurate 3D measurements. In this dissertation research, we aim at addressing the accuracy limitation of the superfast binary defocusing technique by developing novel calibration frameworks for an out-of-focus projector. The details of this research will be introduced in Chapter 2.

Develop flexible and accurate calibration for medium-scale 3D shape measurement. Compared to other 3D shape measurement technologies, the structured light technology is most suitable for performing medium-scale (e.g., a measurement volume of several cm³) 3D shape measurement with a µm-level depth resolution. As aforementioned, current structured light microscopic technologies can be divided into non-telecentric and telecentric methods. For non-telecentric methods, the DOF is limited to the sub-millimeter level; yet for telecentric methods, it is challenging to perform flexible and accurate telecentric lens calibration. In this dissertation research, we developed a novel telecentric calibration approach with the assistance of a pinhole lens, which simultaneously realizes flexible and accurate calibration for a telecentric lens. The details of this research will be introduced in Chapter 3.

However, for 3D shape measurements of extremely high-speed motions (e.g., rapid flapping wings with tens of cycles per second), motion-introduced errors or artifacts could still be present even at kHz measurement speeds. To overcome this challenge, there are two general approaches: (1) using more expensive hardware for speed enhancement, or (2) developing more advanced software algorithms. In this dissertation research, instead of investing in more expensive hardware, we develop software approaches to (1) perform absolute 3D shape measurements with an increased temporal sampling rate, and (2) reduce motion-introduced errors or artifacts. The details of this research will be introduced in Chapters 4 and 5.

Develop geometry-based mechanics analysis for robotic flapping wings. All of the above objectives deal with existing challenges or limitations of superfast 3D shape measurement, and together they contribute to a platform that can precisely measure the dynamic motions and deformations of fast-moving objects. To produce insightful knowledge from the precisely measured 3D data, we also develop advanced 3D data analytics tools and explore interdisciplinary research directions. The specific topic that this dissertation research focuses on is 3D strain measurement of robotic flapping wings. As aforementioned, for bio-inspired robotics designers, 3D strain analysis of flapping wings contains important information for reproducing the desired mechanical or aerodynamic properties of the biological counterparts. However, full-field dense 3D strain measurement of flapping wings is still not well documented. Our research innovations shed light on this topic by providing dense, high-accuracy 3D shape measurements at superfast kHz speeds. On top of that, we further developed a geometry-based mechanics analysis framework to perform dense 3D strain computations. The details of this research will be introduced in Chapter 6.

1.4 Dissertation Organization

Chapters 2 and 3 introduce our newly developed calibration techniques for superfast 3D shape measurement systems at the macro-scale [e.g., 150(H) mm × 250(W) mm × 200(D) mm] and medium-scale [10(H) mm × 8(W) mm × 5(D) mm] levels, respectively. Chapter 4 introduces a novel single-shot absolute 3D shape reconstruction method that can potentially deal with high-speed motion. Chapter 5 introduces a computational framework for motion-induced error reduction. Chapter 6 introduces a novel dense 3D strain measurement technology for robotic flapping wings, assisted by the developed high-accuracy, superfast 3D shape measurement platform technologies. Chapter 7 summarizes the research contributions and looks into future research directions.

2. CALIBRATION METHODS FOR STRUCTURED LIGHT SYSTEM WITH AN OUT-OF-FOCUS PROJECTOR

The previous chapter reviewed the related works and provided an overview of this dissertation research. As previously introduced, our first objective is to develop simultaneous superfast (e.g., kHz) and high-accuracy 3D shape measurement. In particular, the problem to be addressed is the calibration difficulty associated with an out-of-focus projector in the superfast (e.g., kHz) binary defocusing technology. This chapter introduces the details of our novel calibration approaches for a structured light system with an out-of-focus projector. The major content of this chapter was originally published in Applied Optics [93, 94] (also listed as journal articles [5] and [7] in LIST OF PUBLICATIONS).

2.1 Introduction

Three-dimensional (3D) shape measurement is an extensively studied field that enjoys wide applications in, for example, biomedical science, entertainment, and the manufacturing industry [34]. Researchers have been making great efforts to achieve 3D shape measurements with higher speed, higher resolution, and wider range. One of the crucial elements is to accurately calibrate each device (e.g., camera, projector) used in such a system.

The calibration of the camera has been extensively studied over a long period of time. Camera calibration was first performed with 3D calibration targets [95, 96] that required high-precision manufacturing and high-accuracy measurements of the targets, which are usually not easy to obtain. To simplify the calibration process, Tsai [97] proved that two-dimensional (2D) calibration targets with rigid out-of-plane shifts are sufficient to achieve high-accuracy calibration without requiring complex 3D calibration targets.

Zhang [98] proposed a flexible camera calibration method that further simplified the calibration process by allowing the use of a flat 2D target with arbitrary poses and orientations, albeit still requiring knowledge of the target geometry and the preselection of the corner points. Some recent advances in calibration techniques further improved the flexibility and accuracy of calibration by using a not-measured or imperfect calibration target [99-102], or by using active targets [103, 104].

The structured-light system calibration is more complicated since it involves the use of a projector. Over the years, researchers have developed a variety of approaches to calibrate the structured-light system. Attempts were first made to calibrate the system by obtaining the exact system parameters (position, orientation) of both devices (camera, projector) [ ]. Then, to save the effort of the complex system setup required by those methods, some other methods [ ] improved the flexibility by establishing equations that estimate the relationship between the depth and the phase value. Another popular calibration approach was to treat the projector as a device with the inverse optics of a camera, for example using the Levenberg-Marquardt method [112], and thus the projector calibration can be as simple as a camera calibration. The enabling technology was developed by Zhang and Huang [113], which allows the projector to capture images like a camera by projecting a sequence of fringe patterns to establish a one-to-one mapping between the projector and the camera. Following Zhang and Huang's work, researchers have tried to improve the calibration accuracy by linear interpolation [114], bundle adjustment [115], or residual error compensation with planar constraints [116]. All the aforementioned techniques have proven successful in calibrating the structured-light system, but they all require the projector to be at least nearly focused. Therefore, they cannot be directly applied to calibrate a structured-light system with an out-of-focus projector.

Our recent efforts have been focused on advancing the binary defocusing technology [42] because it has the merits of high speed [117], being gamma-calibration free, and having no rigid requirement for precise synchronization.

However, as aforementioned, none of the existing calibration methods can be directly applied to accurately calibrate our structured-light system, in which the projector is substantially defocused. One attempt to calibrate a structured-light system with an out-of-focus projector was carried out by Merner et al. [118]. The method proposed by Merner et al. was able to achieve high depth accuracy (±50 µm), but the spatial (along x or y) accuracy was limited (i.e., a few millimeters). For measurement conditions requiring only high depth accuracy, that method works well. However, for generic 3D shape measurement, the x and y calibration accuracy is equally important.

Another problem of current calibration techniques is that a majority of them require the use of horizontal and vertical patterns. According to Wang and Zhang [119], once the system is set up, there exists an optimal fringe angle for pattern projection that is most sensitive to depth variation, while the orthogonal fringe direction is regarded as the worst angle, which has almost no response to depth variation. In practical experiments, either horizontal or vertical patterns are used in most cases. Therefore, for a well-designed system, the optimal fringe angle should be close to either the horizontal or the vertical direction. Figure 2.1 shows an example of a well-designed system. In this case, if the vertical pattern happens to be close to the optimal angle, as illustrated in Fig. 2.1(b), the other fringe direction will be the worst angle (see Fig. 2.1(c); the pattern has no distortion), and vice versa. This could introduce a problem since the mapping between the camera points and projector points is performed in the phase domain; if the patterns are not sensitive to depth variation, the phase obtained from different spatial locations could lead to inaccurate mapping and thus result in inaccurate calibration.

In this chapter, we present our innovations in two different aspects: (1) calibration for a generic structured light system with an out-of-focus projector; and (2) calibration for a well-designed system based on the optimal fringe angle. First of all, with the projector being out of focus, no one-to-one mapping between the camera pixel and the projector pixel can be established as in the prior study [113].

Figure 2.1. Illustration of a well-designed system. (a) System setup; (b) optimal fringe angle (vertical pattern); (c) worst fringe angle (horizontal pattern).

To overcome this challenge, we present the idea of virtually creating the one-to-one mapping between the camera pixel and the center point of the projector pixel in the phase domain. Meanwhile, to improve the calibration performance in a well-designed system, we also present our calibration framework based on the optimal fringe angle. Experiments will demonstrate that for a generic system, our calibration approach can reach an accuracy of up to 73 µm within a calibration volume of 150(H) mm × 250(W) mm × 200(D) mm; for a well-designed system, the use of the optimal fringe angle can improve the measurement accuracy by up to 38% compared to the generic horizontal and vertical approach within a volume of 300(H) mm × 250(W) mm × 500(D) mm.

2.2 Principles

2.2.1 Fundamentals of Fringe Projection

(1) Pattern generation with arbitrary fringe angle

A sinusoidal fringe pattern P(i, j) with an arbitrary fringe angle α can be represented as

P(i, j) = (1/2) {1 + cos[(i cos α + j sin α) · 2π/T]},  0 ≤ α < π,  (2.1)

where P(i, j) denotes the intensity of the pixel in the i-th row and j-th column, and T is the fringe period. Figure 2.2 shows some example patterns with different fringe angles α and a fringe period of T = 30 pixels. In reality, the fringe pattern is slightly different, since it may be modified by its initial phase δ(t) as

P(i, j, t) = (1/2) {1 + cos[(i cos α + j sin α) · 2π/T + δ(t)]},  0 ≤ α < π.  (2.2)

By properly modulating the initial phase δ(t), phase-shifting algorithms can be applied for phase retrieval in 3D shape measurement.

Figure 2.2. Example patterns of different fringe angle α with fringe period T = 30 pixels. (a) α = π/4 rad; (b) α = π/3 rad; (c) α = 2π/3 rad; (d) α = 3π/4 rad.
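Equations (2.1) - (2.2) translate directly into a few lines of NumPy. The following is a minimal sketch, not the exact pattern-generation code used in this research; the image resolution in the example is an illustrative assumption.

    import numpy as np

    def fringe_pattern(rows, cols, alpha, T, delta=0.0):
        """Sinusoidal fringe pattern of Eq. (2.2), intensity normalized to [0, 1].

        alpha : fringe angle in radians (0 <= alpha < pi)
        T     : fringe period in pixels
        delta : initial phase delta(t)
        """
        i, j = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
        return 0.5 * (1.0 + np.cos((i * np.cos(alpha) + j * np.sin(alpha)) * 2 * np.pi / T + delta))

    # Example: N = 9 phase-shifted patterns with T = 18 pixels at alpha = pi/2
    N, T = 9, 18
    patterns = [fringe_pattern(768, 1024, np.pi / 2, T, delta=2 * k * np.pi / N) for k in range(1, N + 1)]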

(2) Least squares phase-shifting algorithm

Phase-shifting algorithms have been extensively employed in 3D shape measurement owing to their high speed and accuracy [37]. Different kinds of phase-shifting algorithms have been developed, including three-step, four-step, least squares, and so forth. In general, the more steps used, the better the measurement accuracy that can be achieved. For the least squares phase-shifting algorithm, the k-th projected fringe image can be modeled as

I_k(x, y) = I'(x, y) + I''(x, y) cos(φ + 2kπ/N),  (2.3)

where I'(x, y) represents the average intensity, I''(x, y) the intensity modulation, and φ(x, y) the phase to be solved for,

φ(x, y) = tan⁻¹ [ Σ_{k=1}^{N} I_k sin(2kπ/N) / Σ_{k=1}^{N} I_k cos(2kπ/N) ].  (2.4)

This equation provides the wrapped phase ranging within [-π, +π). To remove the 2π discontinuities and obtain an absolute phase, a temporal phase unwrapping algorithm is needed. In this research, we adopted the three-frequency phase-shifting algorithm introduced in [40] for absolute phase retrieval. For all experiments, including the calibration and the 3D reconstruction (introduced in Sections 2.2.4 and 2.2.6, respectively), we used a set of nine-step (N = 9) phase-shifted patterns with a fringe period of T = 18 pixels, and two additional sets of three-step (N = 3) phase-shifted patterns with fringe periods of T = 21 and T = 144 pixels. In total, 15 fringe images are needed to retrieve the absolute phase. An example of absolute phase retrieval using the three-frequency phase-shifting algorithm is shown in Fig. 2.3.

Figure 2.3. Absolute phase retrieval using the three-frequency phase-shifting algorithm. (a) Picture of a spherical object; (b) wrapped phase map obtained from patterns with fringe period T = 18 pixels; (c) wrapped phase map obtained from patterns with fringe period T = 21 pixels; (d) wrapped phase map obtained from patterns with fringe period T = 144 pixels; (e) unwrapped phase map obtained by applying the three-frequency phase-shifting algorithm.
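As a concrete illustration of Eq. (2.4), the wrapped phase can be computed from a stack of N captured fringe images with a few array operations. This is a minimal sketch, assuming the images have already been loaded as floating-point arrays:

    import numpy as np

    def wrapped_phase(images):
        """Least-squares wrapped phase from N phase-shifted images, Eq. (2.4).

        images : sequence of N fringe images I_k, k = 1..N, all of the same shape.
        Returns the wrapped phase in [-pi, +pi).
        """
        N = len(images)
        k = np.arange(1, N + 1).reshape(-1, 1, 1)
        I = np.asarray(images, dtype=np.float64)
        num = np.sum(I * np.sin(2 * k * np.pi / N), axis=0)
        den = np.sum(I * np.cos(2 * k * np.pi / N), axis=0)
        return np.arctan2(num, den)  # four-quadrant arctangent

The 2π discontinuities of this wrapped phase are then removed by the three-frequency temporal unwrapping described above.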

(3) Determination of optimal fringe angle

Figure 2.4 shows an example of determining the optimal fringe angle for a particular system setup. As introduced in [119], the optimal fringe angle of a particular system setup can be determined by measuring a step-height object, shown in Fig. 2.4(a). A sequence of horizontal and vertical patterns is first projected on a reference plane, and then on the step-height object. After that, using the principle introduced in Section 2.2.1, four different absolute phase maps Φ_hr, Φ_vr, Φ_ho, and Φ_vo can be obtained, where Φ_hr and Φ_vr are the absolute phases of the reference plane obtained, respectively, from horizontal and vertical patterns, and Φ_ho and Φ_vo are the corresponding absolute phases of the object. The difference phase maps Φ_hd and Φ_vd, shown in Figs. 2.4(b) - 2.4(c), can then be obtained by

Φ_hd = Φ_ho - Φ_hr,  (2.5)
Φ_vd = Φ_vo - Φ_vr.  (2.6)

Once the difference phase maps are obtained, the phase differences ΔΦ_h and ΔΦ_v between the top and the bottom surfaces of the step-height object on the difference phase maps are needed; they can be visualized in the corresponding cross sections shown in Figs. 2.4(d) - 2.4(e). Finally, the optimal fringe angle α_opt is determined by

α_opt = arctan(ΔΦ_v / ΔΦ_h).  (2.7)

Its orthogonal direction is the worst fringe angle.

Figure 2.4. Example of optimal fringe angle determination using a step-height object. (a) Photograph of the step-height object; (b) difference phase map Φ_hd obtained from horizontal patterns; (c) difference phase map Φ_vd obtained from vertical patterns; (d) - (e) cross sections of (b) and (c), respectively, visualizing ΔΦ_h and ΔΦ_v.
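Given the two difference phase maps, Eq. (2.7) amounts to a single arctangent. The sketch below assumes the top and bottom surfaces of the step-height object have already been segmented into boolean masks; the masks top and bot are hypothetical helpers introduced only for illustration.

    import numpy as np

    def optimal_fringe_angle(Phi_hd, Phi_vd, top, bot):
        """Optimal fringe angle from the difference phase maps, Eq. (2.7)."""
        dPhi_h = np.mean(Phi_hd[top]) - np.mean(Phi_hd[bot])
        dPhi_v = np.mean(Phi_vd[top]) - np.mean(Phi_vd[bot])
        return np.arctan(dPhi_v / dPhi_h)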

(4) Pinhole model of the structured light system

In this research, we adopted the standard pinhole model, shown in Fig. 2.5, for our structured light system, where the projector is regarded as an inverse camera. Here, (o^w; x^w, y^w, z^w) denotes the world coordinate system; (o^c; x^c, y^c, z^c) and (o^p; x^p, y^p, z^p), respectively, represent the camera and the projector coordinate systems, while (o_0^c; u^c, v^c) and (o_0^p; u^p, v^p) are their corresponding image coordinate systems. f^c and f^p stand for the focal lengths of the camera and the projector.

Figure 2.5. Pinhole model of the structured light system.

The model of the whole system can be described using the following equations:

s^c [u^c, v^c, 1]^T = A^c [R^c, t^c] [x^w, y^w, z^w, 1]^T,  (2.8)
s^p [u^p, v^p, 1]^T = A^p [R^p, t^p] [x^w, y^w, z^w, 1]^T.  (2.9)

Here, [R^c, t^c] and [R^p, t^p] are the camera and the projector extrinsic matrices, which describe the rotation (i.e., R^c, R^p) and translation (i.e., t^c, t^p) from the world coordinate system to their corresponding coordinate systems. A^c and A^p are the camera and the projector intrinsic matrices, which can be expressed as

A^c = [α^c, γ^c, u_0^c; 0, β^c, v_0^c; 0, 0, 1],  (2.10)
A^p = [α^p, γ^p, u_0^p; 0, β^p, v_0^p; 0, 0, 1],  (2.11)

where α^c, α^p, β^c, and β^p are elements related to the focal lengths along the u^c, u^p, v^c, and v^p axes; γ^c is the skew factor of the u^c and v^c axes, and γ^p is the skew factor of the u^p and v^p axes.

In practice, the camera (or projector) lens can have nonlinear lens distortion, which is mainly composed of radial and tangential components. The nonlinear distortion can be described as a vector of five elements:

Dist = [k_1, k_2, p_1, p_2, k_3]^T,  (2.12)

where k_1, k_2, and k_3 represent the radial distortion coefficients, and p_1 and p_2 the tangential distortion coefficients. Radial distortion can be corrected using the following formulas:

u' = u (1 + k_1 r² + k_2 r⁴ + k_3 r⁶),  (2.13)
v' = v (1 + k_1 r² + k_2 r⁴ + k_3 r⁶).  (2.14)

Here, (u, v) and (u', v'), respectively, stand for the camera (or projector) point coordinates before and after correction, and r = √(u² + v²) denotes the Euclidean distance between the camera (or projector) point and the origin. Similarly, tangential distortion can be corrected using the following formulas:

u' = u + [2 p_1 u v + p_2 (r² + 2u²)],  (2.15)
v' = v + [p_1 (r² + 2v²) + 2 p_2 u v].  (2.16)

We aligned the world coordinate system with the camera coordinate system to simplify the system model as follows:

R^c = E_3, t^c = 0,  (2.17)
R^p = R, t^p = t,  (2.18)

where E_3 is a 3 × 3 identity matrix and 0 is a 3 × 1 zero vector. [R, t] describes the rotation and translation from the camera (or world) coordinate system to the projector coordinate system, which are the only extrinsic parameters that we have to estimate.
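To make the pinhole model of Eqs. (2.8) - (2.16) concrete, the sketch below projects one 3D world point into camera pixel coordinates. It follows the common convention of applying the distortion model to the normalized image coordinates, and any numerical values a user passes in are placeholders rather than calibration results of this system.

    import numpy as np

    def project_point(Xw, A, R, t, dist):
        """Pinhole projection of a 3D world point with lens distortion.

        Xw   : 3-vector world point
        A    : 3x3 intrinsic matrix [[alpha, gamma, u0], [0, beta, v0], [0, 0, 1]]
        R, t : 3x3 rotation and 3-vector translation (extrinsic parameters)
        dist : (k1, k2, p1, p2, k3) distortion vector, Eq. (2.12)
        """
        k1, k2, p1, p2, k3 = dist
        Xc = R @ Xw + t                          # world -> camera coordinates
        u, v = Xc[0] / Xc[2], Xc[1] / Xc[2]      # normalized image coordinates
        r2 = u**2 + v**2
        radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3           # Eqs. (2.13)-(2.14)
        ud = u * radial + 2 * p1 * u * v + p2 * (r2 + 2 * u**2)  # Eq. (2.15)
        vd = v * radial + p1 * (r2 + 2 * v**2) + 2 * p2 * u * v  # Eq. (2.16)
        uvw = A @ np.array([ud, vd, 1.0])        # apply intrinsic matrix, Eq. (2.8)
        return uvw[:2]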

2.2.2 Model of Defocused Imaging System

To perform system calibration, we need to calibrate both the camera and the projector. As aforementioned, the projector has the inverse optics of a camera, since it projects images rather than capturing them. To enable the projector to follow a calibration procedure similar to that of the camera, we need to create captured images for the projector by establishing the correspondence between the camera coordinate and the projector coordinate. However, our system takes advantage of the binary defocusing technology [42], and a defocused projector makes the calibration procedure quite challenging. This is because the model for calibrating the projector essentially follows the model for calibrating the camera, and there is no technique for calibrating a defocused camera, let alone an out-of-focus projector. In this section, we introduce the model of a defocused imaging system and our solution to the calibration of an out-of-focus projector.

The model of an imaging system in general can be described as follows. According to Ellenberger [120], suppose that o(x, y) is the intensity distribution of a known object; its image i(x, y) after passing through an imaging system can be described by the convolution of the object intensity o(x, y) and the point spread function (PSF) psf(x, y) of the imaging system:

i(x, y) = o(x, y) ∗ psf(x, y).  (2.19)

Here, psf(x, y) is determined by the pupil function f(u, v) of the optical system:

psf(x, y) = | (1/2π) ∫∫ f(u, v) e^{i(xu + yv)} du dv |² = |F(x, y)|²,  (2.20)

where F(x, y) denotes the Fourier transform of the pupil function f(u, v). In general, the pupil function f(u, v) is described as

f(u, v) = t(u, v) e^{j (2π/λ) ω(u, v)}  for u² + v² ≤ 1,  and  f(u, v) = 0  for u² + v² > 1,  (2.21)

where t(u, v) represents the transmittance of the pupil, and ω(u, v) describes all sources of aberration. Describing the system in the Fourier domain by applying the convolution theorem, we obtain

I(s_0, t_0) = O(s_0, t_0) · OTF'(s_0, t_0).  (2.22)

Here, I(s_0, t_0) and O(s_0, t_0) represent the Fourier transforms of the functions denoted by the corresponding lowercase letters, and OTF'(s_0, t_0) is the Fourier transform of the PSF.

The optical transfer function (OTF) is defined as the normalized form

OTF(s_0, t_0) = OTF'(s_0, t_0) / OTF'(0, 0).  (2.23)

Specifically, if the system is circularly symmetric and aberration free, with the only defect being defocusing, the pupil function can be simplified as

f(u, v) = e^{j (2π/λ) ω (u² + v²)}  for u² + v² ≤ 1,  and  f(u, v) = 0  for u² + v² > 1,  (2.24)

where ω is a circularly symmetric function that describes the amount of defocusing, which can also be represented by the maximal optical distance between the emergent wavefront S and the reference sphere S_r, as shown in Fig. 2.6.

Figure 2.6. Defocused optical system with the defocusing parameter ω.

Meanwhile, the OTF degenerates to [121]

OTF(s) = ∫∫ f(u + s, v) f*(u - s, v) du dv / ∫∫ |f(u, v)|² du dv,  (2.25)

with s determined by the normalized spatial frequencies (s_0, t_0). The expression of the exact OTF is very complicated and almost impossible to compute efficiently. However, according to Hopkins [122], if we neglect the diffraction properties of the light and approximate the OTF based on geometric optics, the OTF can be simplified as

OTF(s) = 2 J_1(a) / a,  with a = (4π/λ) ω s,  (2.26)

where J_1 is the Bessel function of the first kind of order one. A visualization of the simplified OTF is shown in Fig. 2.7. Figure 2.7(a) shows an example of the OTF with defocusing parameter ω = 2λ, and Fig. 2.7(b) shows cross sections of OTFs with different defocusing parameters ω. When there is no defect of defocusing (ω = 0), the OTF has a uniform amplitude. However, when a defect of defocusing exists, the OTF follows an Airy-rings-like profile whose cut-off frequency decreases as the defocusing degree ω increases.

Figure 2.7. Illustration of the optical transfer function (OTF) of a defocused system. (a) Example OTF with defocusing parameter ω = 2λ; (b) cross sections of OTFs with different defocusing parameters (ω = 0, 0.5λ, λ, 2λ).
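The geometric-optics approximation of Eq. (2.26) is straightforward to evaluate numerically; the sketch below computes the kind of OTF curves plotted in Fig. 2.7(b), with the sampling of the normalized frequency s being an arbitrary illustrative choice.

    import numpy as np
    from scipy.special import j1  # Bessel function of the first kind, order 1

    def otf_defocus(s, omega, wavelength=1.0):
        """Geometric-optics OTF of a defocused system, Eq. (2.26)."""
        a = 4 * np.pi * omega * s / wavelength
        otf = np.ones_like(a)            # limit of 2*J1(a)/a as a -> 0 (in-focus case)
        nz = np.abs(a) > 1e-12
        otf[nz] = 2 * j1(a[nz]) / a[nz]
        return otf

    s = np.linspace(0.0, 1.0, 500)       # normalized spatial frequency
    curves = {w: otf_defocus(s, w) for w in (0.0, 0.5, 1.0, 2.0)}  # omega in units of lambda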

A more intuitive understanding of how the defect of defocusing influences the resultant image can be obtained by looking at the PSF, illustrated in Fig. 2.8, which is the inverse Fourier transform of the corresponding OTF. Figure 2.8(a) shows an example of the normalized PSF with defocusing parameter ω = 2λ, while Fig. 2.8(b) illustrates cross sections of the normalized PSFs with different defocusing parameters. The normalized PSF indicates that when the optical system is in focus (ω = 0), the PSF becomes a unit impulse function centered at the origin, which means that a point on the object still maps to a point on the image after passing through the optical system, since the resultant image is simply a convolution between the object intensity distribution and the PSF. However, as the optical system becomes more and more defocused, the PSF expands into a blurred circular disk, which means that a point on the object no longer maps to a single pixel on the image plane, but rather spreads to the nearby region.

Figure 2.8. Illustration of the point spread function (PSF) of a defocused system. (a) Example normalized PSF with defocusing parameter ω = 2λ; (b) cross sections of normalized PSFs with different defocusing parameters (ω = 0, 0.5λ, λ, 2λ).

2.2.3 Phase-Domain Invariant Mapping

Subsection 2.2.2 showed that if the optical system is defocused, a point on the object no longer converges to a point on the image plane, but rather to a blurred circular disk. For a structured-light system with an out-of-focus projector, as illustrated in Fig. 2.9, a projector pixel does not correspond to one single camera pixel, but rather spreads over its nearby region, as shown in the dashed area A. However, considering the light ray geometry of the optical system, the center of the projector pixel still corresponds to the center of a camera pixel, regardless of the amount of defocusing, if the two pixels indeed correspond to each other. Therefore, if the center of the pixel can be found, a one-to-one mapping between the projector and the camera can still be virtually established. From our previous discussion, the center point corresponds to the peak value of the circular disk, whose phase value is maintained regardless of the amount of defocusing.

Therefore, a one-to-one mapping between the projector pixel center (u^p, v^p), which is actually the pixel itself, and the camera pixel center (u^c, v^c) can be established in the phase domain using the phase-shifting algorithm, albeit it is impractical to generate the mapped projector images as proposed in [113].

Figure 2.9. Model of a structured-light system with an out-of-focus projector.

Theoretically, the mapping in the phase domain is invariant between the central points of a projector pixel and a camera pixel, which can be seen from the system model in the frequency domain. Based on the aforementioned model of the imaging system, as shown in Eq. (2.22), the Fourier transform I^p(u^p, v^p) of the projector image i^p(u, v) at the pixel center (u^p, v^p) can be related to the camera side by

I^p(u^p, v^p) = I^c(u^c, v^c) / [OTF'_p(0, 0) · OTF'_c(0, 0)],  (2.27)

where I^c(u^c, v^c) is the Fourier transform of the corresponding camera image i^c(u, v) at the pixel center (u^c, v^c), OTF'_p(0, 0) is the unnormalized OTF of the projector optical system at the center pixel, and OTF'_c(0, 0) is the unnormalized OTF of the camera optical system at the center pixel.

From Eq. (2.26), the OTF is a circularly symmetric, real-valued function that does not contribute to the phase information. In other words, the phase of a point (u^p, v^p) on the projector image is not altered after passing through the two optical systems, and has the same value as the phase of the point (u^c, v^c) on the camera sensor. Therefore, we can indeed establish a one-to-one correspondence between the central points of a camera pixel and a projector pixel using the phase information.

The basic principle of the mapping can be described as follows. Without loss of generality, if horizontal patterns are projected onto the calibration board and the absolute phase φ_va in the vertical gradient direction is retrieved, the camera point can be mapped to a horizontal line of the projector using the constraint of equal phase values,

φ^c_va(u^c, v^c) = φ^p_va(v^p) = φ_va.  (2.28)

Similarly, if vertical patterns are projected and the absolute phase φ_ha in the horizontal gradient direction is extracted, another constraint can be established as

φ^c_ha(u^c, v^c) = φ^p_ha(u^p) = φ_ha,  (2.29)

which corresponds one camera pixel to a vertical line on the projector image plane. The intersection of these two lines on the projector image plane, (u^p, v^p), is the unique mapping point of the camera pixel (u^c, v^c) in (and only in) the phase domain.

2.2.4 Generic System Calibration Procedures

Essentially, the system calibration is to estimate the intrinsic and extrinsic matrices of the camera and the projector. The camera can be calibrated using circle pattern images at different orientations with the standard OpenCV camera calibration toolbox. An example circle pattern used in this research is shown in Fig. 2.10, in which the circle centers were extracted as feature points. However, it is not straightforward to do so for the projector, since the projector cannot capture images by itself. The previous section has shown that by creating a one-to-one mapping in the phase domain between points on the camera and projector sensors, the projector is able to capture images like a camera, and then similar calibration procedures can be applied to the projector, as in calibrating a camera.

In this section, we introduce the detailed calibration procedures.

Figure 2.10. Design of the calibration board.

Specifically, the system calibration requires the following major steps:

Step 1: Image capture. The images required to calibrate our system include both fringe images and the actual circle pattern image for each pose of the calibration target. The fringe images were captured by projecting a sequence of horizontal and vertical phase-shifted fringe patterns for absolute phase recovery using the phase-shifting algorithm discussed in Section 2.2.1. The circle board image was captured by projecting a uniform white image onto the board. Figure 2.11 shows examples of the captured images with horizontal pattern projection, vertical pattern projection, and pure white image projection.

Step 2: Camera intrinsic calibration. The circle board images were used to find the circle centers and then to estimate the intrinsic parameters and lens distortion parameters of the camera. Both circle center finding and the intrinsic calibration were performed with the OpenCV camera calibration toolbox. Figure 2.12(a) shows one of the circle board images, and Fig. 2.12(b) shows the circle centers detected with the OpenCV circle center finding algorithm. The detected circle centers were stored for further analysis.
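A sketch of Step 2 with OpenCV is given below; the board dimensions, circle spacing, and file name are illustrative assumptions rather than the exact target used in this research.

    import cv2
    import numpy as np

    # Detect circle centers on one white-illuminated image of the calibration board
    img = cv2.imread("pose00_white.png", cv2.IMREAD_GRAYSCALE)
    found, centers = cv2.findCirclesGrid(img, (9, 9), flags=cv2.CALIB_CB_SYMMETRIC_GRID)

    # Nominal board coordinates of the circle centers (z = 0); the spacing in mm is assumed
    spacing = 10.0
    objp = np.zeros((9 * 9, 3), np.float32)
    objp[:, :2] = np.mgrid[0:9, 0:9].T.reshape(-1, 2) * spacing

    # Repeating this for every pose and collecting (objp, centers) pairs, the camera
    # intrinsics and distortion then follow from cv2.calibrateCamera(...)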

Figure 2.11. Example of captured images. (a) Captured fringe image with horizontal pattern projection; (b) captured fringe image with vertical pattern projection; (c) captured image with pure white image projection.

Step 3: Projector circle center determination. For each calibration pose, we obtained the absolute phase maps with horizontal and vertical gradients (i.e., φ^c_ha and φ^c_va) using the phase-shifting algorithm. For each circle center (u^c, v^c) found in Step 2 for this pose, the corresponding mapping point on the projector (u^p, v^p) was determined by

v^p = φ^c_va(u^c, v^c) · T/(2π),  (2.30)
u^p = φ^c_ha(u^c, v^c) · T/(2π),  (2.31)

where T is the fringe period of the narrowest fringe pattern (18 pixels in our example). These equations simply convert phase into projector pixels. The circle center phase values were obtained by bilinear interpolation, because the circle centers are detected on the camera image with sub-pixel accuracy. Figure 2.12(c) shows the mapped circle centers for the projector.

From Eqs. (2.30) - (2.31), we can deduce that the mapping accuracy is not affected by the accuracy of the camera parameters. However, the mapping accuracy could be influenced by the accuracy of circle center extraction and by the phase quality. Since the camera circle centers were extracted by the standard OpenCV toolbox, we could obtain the coordinates of the circle centers with high accuracy. For high-quality phase generation, in general, the narrower the fringe patterns used, the better the phase accuracy, and the more fringe patterns used, the lower the noise effect. In our research, we reduced the phase error by using a nine-step phase-shifting algorithm and narrow fringe patterns (fringe period of T = 18 pixels).
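A minimal sketch of Step 3, assuming the two absolute phase maps and the sub-pixel camera circle centers are already available; the bilinear sampling helper below is introduced only for illustration.

    import numpy as np

    def bilinear_sample(phase, u, v):
        """Sample a phase map at the sub-pixel location (u, v) by bilinear interpolation."""
        u0, v0 = int(np.floor(u)), int(np.floor(v))
        du, dv = u - u0, v - v0
        return ((1 - du) * (1 - dv) * phase[v0, u0] + du * (1 - dv) * phase[v0, u0 + 1]
                + (1 - du) * dv * phase[v0 + 1, u0] + du * dv * phase[v0 + 1, u0 + 1])

    def map_to_projector(phi_ha, phi_va, centers_cam, T=18):
        """Map camera circle centers to projector pixels, Eqs. (2.30)-(2.31)."""
        centers_proj = []
        for (uc, vc) in centers_cam:
            up = bilinear_sample(phi_ha, uc, vc) * T / (2 * np.pi)
            vp = bilinear_sample(phi_va, uc, vc) * T / (2 * np.pi)
            centers_proj.append((up, vp))
        return np.array(centers_proj, dtype=np.float32)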

57 39 accuracy that will be obtained; the more fringe patterns used, the lower the noise effect. In our research, we reduced the phase error by using a nine-step phase-shifting algorithm and the narrow fringe patterns (fringe period of T = 18 pixels). V (pixel) V (pixel) (a) U (pixel) (b) U (pixel) (c) Figure Example of finding circle centers for the camera and the projector. (a) Example of one calibration pose; (b) Circle centers extracted from (a); (c) Mapped image for the projector from (b). Step 4: Projector intrinsic calibration. Once the circle centers for the projector were found from Step 3, the same software algorithms for camera calibration were used to estimate the projector s intrinsic parameters. Again, the OpenCV camera calibration toolbox is used in this research. Our experiments found that it was not necessary to consider the lens distortion for the projector, and thus we used a linear model for the projector calibration. Step 5: Extrinsic calibration. As discussed in Section 2.2.1, the world coordinate system coincides with the camera coordinate system, which means that we only have to estimate the rotation R and the translation t from camera (or world) coordinate to projector coordinate. Therefore, the extrinsic matrix [R, t] can be estimated using the OpenCV stereo calibration toolbox together with the intrinsic parameters obtained in previous steps.

2.2.5 System Calibration Procedures Using Optimal Fringe Angle

The calibration approach just introduced works well in the majority of cases. However, as aforementioned, when the system is well designed, the previously introduced approach may result in an inaccurate calibration because of the significant difference in depth sensitivity between horizontal and vertical patterns. In this section, we introduce our novel calibration procedure based on the optimal fringe angle. The major steps of this proposed calibration approach are:

Step 1: Optimal fringe angle determination. A three-frequency phase-shifting method, as described in Section 2.2.1, is used to retrieve the horizontal and vertical absolute phases of both the step-height object and the reference plane. Then, following the approach described in Section 2.2.1, the optimal fringe angle α_opt can be obtained.

Step 2: Pattern generation. After the optimal fringe angle α_opt is obtained, patterns with the two orthogonal directions α_opt - π/4 and α_opt + π/4 are used for calibration. The reason for choosing these two angles is that the system is then equally sensitive to depth variation in both directions, reducing the bias error in the projector mapping generation. Following the methods introduced in Section 2.2.1, we can generate the three-frequency phase-shifted patterns in these two fringe directions (i.e., α_opt - π/4 and α_opt + π/4).

Step 3: Image capture. To calibrate the structured light system, both the actual circle pattern image and the fringe images are captured for each orientation of the calibration target. To start with, a uniform white image as well as a sequence of orthogonal fringe patterns with fringe angles of α_opt - π/4 and α_opt + π/4 is generated. The circle pattern image is obtained by projecting the uniform white image onto the calibration board.

The fringe images are obtained by projecting the orthogonal fringe patterns onto the calibration board. As introduced in Section 2.2.1, 15 fringe images are required for absolute phase recovery. Therefore, a total of 31 images, including the circle pattern image and the fringe images from both pattern directions, are recorded for further analysis. Figure 2.13 shows an example of image capture when the optimal fringe angle is close to π/2: Fig. 2.13(a) shows the captured image with pure white image projection, and Figs. 2.13(b) and 2.13(c) respectively show the fringe images with fringe angles of α_opt - π/4 and α_opt + π/4.

Figure 2.13. Example of captured images. (a) Captured image with pure white image projection; (b) captured fringe image with a fringe angle of α_opt - π/4; (c) captured fringe image with a fringe angle of α_opt + π/4.

Step 4: Camera intrinsic calibration. The same as Step 2 in Section 2.2.4.

Step 5: Projector circle center determination. For each circle board orientation, we obtain the absolute phases from patterns with orthogonal fringe angles (i.e., α_opt - π/4 and α_opt + π/4). Suppose the absolute phases obtained from fringe angles α_opt - π/4 and α_opt + π/4 are, respectively, Φ_1 and Φ_2; their corresponding gradient directions are u^p′ and v^p′ (see Fig. 2.14).

For each circle center (u^c, v^c) found in the previous step for this orientation, the corresponding mapping point A(u^p′, v^p′) in the u^p′-o-v^p′ coordinate system was determined by

u^p′ = Φ_1(u^c, v^c) · T/(2π),  (2.32)
v^p′ = Φ_2(u^c, v^c) · T/(2π),  (2.33)

where T is the narrowest fringe period of the patterns used to retrieve the absolute phase (18 pixels in this case). The phase values of the circle centers were obtained through bilinear interpolation because of the sub-pixel accuracy of the circle center detection algorithm. Equations (2.32) - (2.33) convert phase to projector pixels. However, to reflect the real projector pixel geometry, we need to transform the mapping point A into a new coordinate system u^p-o-v^p whose axes are in the horizontal and vertical directions. This transformation is a rotation of the coordinate system through an angle of 3π/4 - α_opt counterclockwise, as shown in Fig. 2.14, which can be described by

[u^p; v^p] = [cos(3π/4 - α_opt), sin(3π/4 - α_opt); -sin(3π/4 - α_opt), cos(3π/4 - α_opt)] [u^p′; v^p′].  (2.34)

After this coordinate transformation, the projector circle center point A(u^p, v^p) can be uniquely determined from the camera circle center point (u^c, v^c).

Figure 2.14. Illustration of the coordinate system rotation.

Step 6: Projector intrinsic calibration. The same as Step 4 in Section 2.2.4.

Step 7: Extrinsic calibration. The same as Step 5 in Section 2.2.4.

2.2.6 3D Reconstruction Based on Calibration

Equations (2.17) and (2.18) describe the system model. These equations can be further simplified as

M^c = A^c [E_3, 0],  (2.35)
M^p = A^p [R, t],  (2.36)

where M^c and M^p are the camera and the projector matrices, respectively, which combine their corresponding intrinsic and extrinsic parameters. These matrices are uniquely determined once the system is calibrated. From Eqs. (2.8) - (2.9) and Eqs. (2.35) - (2.36), we can deduce that

[x^w; y^w; z^w] = (HᵀH)⁻¹ Hᵀ [u^c m^c_34 - m^c_14; v^c m^c_34 - m^c_24; u^p m^p_34 - m^p_14; v^p m^p_34 - m^p_24],  (2.37)

where

H = [m^c_11 - u^c m^c_31, m^c_12 - u^c m^c_32, m^c_13 - u^c m^c_33;
     m^c_21 - v^c m^c_31, m^c_22 - v^c m^c_32, m^c_23 - v^c m^c_33;
     m^p_11 - u^p m^p_31, m^p_12 - u^p m^p_32, m^p_13 - u^p m^p_33;
     m^p_21 - v^p m^p_31, m^p_22 - v^p m^p_32, m^p_23 - v^p m^p_33].  (2.38)

Here, m^c_ij and m^p_ij are the elements of the camera and the projector matrices in the i-th row and j-th column, respectively. Using Eqs. (2.37) - (2.38), the 3D geometry in the world coordinate system can be reconstructed based on the calibration.
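A direct implementation of the least-squares reconstruction of Eqs. (2.37) - (2.38) for a single camera-projector correspondence is sketched below; Mc and Mp are the 3 × 4 matrices of Eqs. (2.35) - (2.36).

    import numpy as np

    def reconstruct_point(uc, vc, up, vp, Mc, Mp):
        """Least-squares 3D point from a camera pixel (uc, vc) and its projector
        correspondence (up, vp), following Eqs. (2.37)-(2.38)."""
        H = np.array([
            [Mc[0, 0] - uc * Mc[2, 0], Mc[0, 1] - uc * Mc[2, 1], Mc[0, 2] - uc * Mc[2, 2]],
            [Mc[1, 0] - vc * Mc[2, 0], Mc[1, 1] - vc * Mc[2, 1], Mc[1, 2] - vc * Mc[2, 2]],
            [Mp[0, 0] - up * Mp[2, 0], Mp[0, 1] - up * Mp[2, 1], Mp[0, 2] - up * Mp[2, 2]],
            [Mp[1, 0] - vp * Mp[2, 0], Mp[1, 1] - vp * Mp[2, 1], Mp[1, 2] - vp * Mp[2, 2]],
        ])
        b = np.array([uc * Mc[2, 3] - Mc[0, 3],
                      vc * Mc[2, 3] - Mc[1, 3],
                      up * Mp[2, 3] - Mp[0, 3],
                      vp * Mp[2, 3] - Mp[1, 3]])
        xyz, *_ = np.linalg.lstsq(H, b, rcond=None)
        return xyz

In practice, the projector coordinate at each camera pixel is obtained from the absolute phase map, so this computation is typically vectorized over the whole image.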

2.3 Experiments

2.3.1 System Setup

We set up two different test systems in this research: System 1 is used to validate our calibration approach that deals with projector defocusing, and System 2 is used to validate our calibration approach for a well-designed system based on the optimal fringe angle.

In System 1, we used a digital light processing (DLP) projector (Model: LightCrafter 3000) with a micromirror pitch of 7.6 µm, and a CMOS camera (Model: PointGrey FL3-U3-13Y3M-C) with a pixel size of 4.8 µm × 4.8 µm. The lens used for the camera is a Computar M0814-MP2 lens with a focal length of 8 mm at f/1.4 to f/16.

In System 2, we used a DLP projector (Dell M109S) and a digital CCD camera (Jai Pulnix TM-6740CL). The projector lens is F/2.0, and the digital micromirror device (DMD) used in the projector is a 0.45 in. Type Y chip. The camera uses a 12 mm focal-length megapixel lens (Computar M1214-MP2) at F/1.4 to 16. The camera has a maximum frame rate of 200 frames/sec, and the camera pixel size is 7.4 µm × 7.4 µm.

2.3.2 3D Shape Measurement under Different Defocusing Levels

To verify the performance of the proposed system calibration approach, we measured the spherical object shown in Fig. 2.3(a) with System 1 under three different defocusing degrees: (1) the projector is in focus; (2) the projector is slightly defocused; and (3) the projector is greatly defocused. Figure 2.15 shows the captured fringe images under the three defocusing degrees and their corresponding cross sections of intensity.

The figure demonstrates that when the projector is in focus, the pattern in the distorted fringe image has a clear binary structure, as shown in Fig. 2.15(d). However, as the projector becomes more and more defocused, the pattern becomes increasingly smoothed and approximates a sinusoidal structure, as shown in Figs. 2.16(d) - (h).

The measurement results under the three defocusing degrees are shown in Fig. 2.16. Figures 2.16(a) - (c) show the measurement results under defocusing degree 1 (i.e., the projector is in focus), where Fig. 2.16(a) shows the reconstructed 3D surface. The smooth spherical surface indicates good accuracy. To further evaluate the accuracy, we took a cross section of the sphere and fitted it with an ideal circle. Figure 2.16(b) shows the overlay of the ideal circle and the measured data points, and the difference between these two curves is shown in Fig. 2.16(c). The error is quite small, with an rms error of 0.071 mm, or 71 µm. Figures 2.16(d) - (f) and 2.16(g) - (i), respectively, show the measurement results under defocusing degree 2 (i.e., the projector is slightly defocused) and defocusing degree 3 (i.e., the projector is greatly defocused). In both defocusing degrees, good measurement accuracy is also achieved, with rms errors of 77 µm and 73 µm, respectively. It is important to note that the whole volume spanned by the calibration board poses was around 150(H) × 250(W) × 200(D) mm³. These experimental results clearly illustrate that, for such a large calibration volume, the proposed method can consistently achieve fairly high accuracy from an in-focus condition to a greatly defocused condition.

2.3.3 Dynamic 3D Shape Measurement

Furthermore, we measured a dynamically changing human face under defocusing degree 2 to demonstrate that our system can perform high-speed 3D shape measurement. In this experiment, the projection and capturing speeds were both set at 500 Hz. Moreover, to reduce motion artifacts, we adopted the three-step phase-shifting algorithm for the smallest fringe period (T = 18 pixels) instead of the nine-step phase shifting used previously. Figure 2.17 and its associated video demonstrate the real-time measurement results. This experiment demonstrated that high-quality results can also be achieved even for real-time 3D shape measurement.

Figure 2.15. Illustration of three different defocusing degrees. (a) One captured fringe image under defocusing degree 1 (projector in focus); (b) one captured fringe image under defocusing degree 2 (projector slightly defocused); (c) one captured fringe image under defocusing degree 3 (projector greatly defocused); (d) - (f) corresponding cross sections of intensity of (a) - (c).

2.3.4 Measurements Using Optimal Angle in a Well-Designed System

We have demonstrated the success of our proposed calibration approach with both static and dynamic 3D measurements. However, to demonstrate the significance of the optimal fringe angle in system calibration, particularly in a well-designed system, we set up the test system (System 2) close to the scenario shown in Fig. 2.1, where the optimal fringe angle is close to α_opt = π/2.

Figure 2.16. Measurement results of a spherical surface under three different defocusing degrees; the rms errors estimated in (d), (h), and (l) are ±71 µm, ±77 µm, and ±73 µm, respectively. (a) One captured fringe image under defocusing degree 1 (projector in focus); (b) reconstructed 3D result under defocusing degree 1; (c) a cross section of the 3D result and the ideal circle under defocusing degree 1; (d) the error estimated based on (b); (e) - (h) corresponding figures of (a) - (d) under defocusing degree 2 (projector slightly defocused); (i) - (l) corresponding figures of (a) - (d) under defocusing degree 3 (projector greatly defocused).

Figure 2.17. Real-time 3D shape measurement result. (a) One captured fringe image; (b) - (d) three frames of the recorded video.

Therefore, for our proposed method, the pattern fringe angles used for calibration are α_opt - π/4 = π/4 and α_opt + π/4 = 3π/4. We then performed the calibration and 3D reconstruction using our proposed method and compared it with the previous method that uses horizontal and vertical patterns. Here, we used three different orientations of the calibration board to calibrate the system, and the volume used for calibration was 300(H) mm × 250(W) mm × 500(D) mm. For each calibration pose and each measured object, we first projected fringe patterns with fringe angles of α_opt - π/4 = π/4 and α_opt + π/4 = 3π/4, and then projected horizontal and vertical patterns. The camera capture was properly synchronized with the pattern projection.

To evaluate the calibration accuracy of our proposed method, we first measured a spherical object, as shown in Fig. 2.18(a). We captured fringe images using horizontal and vertical patterns (see Figs. 2.18(b) - (c)), as well as using patterns with fringe angles of α_opt - π/4 and α_opt + π/4 (i.e., π/4 and 3π/4) (see Figs. 2.18(d) - (e)). From the camera point of view, it is clear that the horizontal pattern has almost no distortion, which means that it is almost insensitive to any depth variation, while the patterns in the other directions are evidently distorted. To illustrate the influence that the choice of fringe angles has on the calibration accuracy, we reconstructed the 3D shape of the spherical object under both calibration methods, as shown in Figs. 2.19(a) and 2.19(f).

To quantify the calibration accuracies, we took two orthogonal cross sections of the reconstructed 3D shapes and fitted them with ideal circles, as shown in Figs. 2.19(b) - (c) and Figs. 2.19(g) - (h). Their corresponding error plots are shown in Figs. 2.19(d) - (e) and Figs. 2.19(i) - (j). From the reconstructed 3D shape obtained from horizontal and vertical patterns, we can see that one of the two cross sections deviates considerably from the ideal circle (see Fig. 2.19(d)), with a much larger root-mean-square (RMS) error, while the other direction is still reasonable (see Fig. 2.19(e)), with an RMS error of 86.0 µm. In contrast, when using our proposed method, both directions agree well with the ideal circles (see Figs. 2.19(i) - (j)), with RMS errors of 69.0 µm and 72.7 µm, which improves the accuracy over the horizontal and vertical method by 38% and 15%, respectively.

Figure 2.18. Example of captured fringe images of the spherical object. (a) Original picture of the spherical object; (b) captured image using the horizontal fringe pattern (i.e., α = 0); (c) captured image using the vertical pattern (i.e., α = π/2); (d) captured image with a pattern fringe angle of α = π/4; (e) captured image with a pattern fringe angle of α = 3π/4.

To visually demonstrate the advantage of our newly proposed calibration method, we also measured an object with complex surface geometry (Fig. 2.20(a)) under the same system setup. Figures 2.20(b) and 2.20(c) show the reconstructed 3D shapes under the generic calibration method (with horizontal and vertical patterns) and under our newly proposed method (with fringe angles of α_opt - π/4 and α_opt + π/4), respectively.

Figure 2.19. Comparison of measurement results of the spherical surface. (a) Reconstructed 3D result using horizontal and vertical patterns; (b) - (c) two orthogonal cross sections of the 3D result shown in (a) and the ideal circles; (d) - (e) the corresponding errors estimated based on (b) - (c), with an RMS error of 86.0 µm for (e); (f) - (j) corresponding results of (a) - (e) using patterns with fringe angles of α_opt - π/4 and α_opt + π/4; the RMS errors estimated in (i) and (j) are 69.0 µm and 72.7 µm, respectively.

To better visualize their differences, we magnified the same area (see the red bounding boxes in Figs. 2.20(a) and 2.20(c)) of both the original picture and the 3D results; the zoom-in views are shown in Figs. 2.20(d) - (f). From the zoom-in views, we can see that the result obtained from the generic method (Fig. 2.20(e)) shows less detailed structure in the vertical direction. In other words, the supposedly segmented small features (Fig. 2.20(d)) are vertically connected. However, the result obtained using our proposed method (see Fig. 2.20(f)) well preserves the detailed structures (i.e., the small features are well segmented). This experiment further proves that our proposed calibration approach can enhance the performance of the generic calibration approach.

2.4 Conclusion

This chapter presented two calibration innovations for a structured light system: 1) a calibration method for a system with an out-of-focus projector, and 2) an optimal-angle-based calibration method for a well-designed system.

Figure 2.20. Measurement results of an object with complex geometry. (a) The original picture of the object; (b) reconstructed 3D result using horizontal and vertical patterns; (c) reconstructed 3D result using patterns with fringe angles of α_opt - π/4 and α_opt + π/4; (d) - (f) corresponding zoom-in views of (a) - (c) within the areas shown in the red bounding boxes.

Our theoretical analysis provided the foundation that an out-of-focus projector can be calibrated accurately by creating a one-to-one mapping between the camera pixel and the projector pixel center in the phase domain.

For a calibration volume of 150(H) mm × 250(W) mm × 200(D) mm, our calibration approach shows consistent performance over different amounts of defocusing, and the accuracy can reach about 73 µm. We also found that, for a well-designed system, the use of the optimal fringe angle greatly improves the calibration performance compared to the generic approach using horizontal and vertical patterns. In particular, for a well-designed system, our optimal-angle-based calibration approach can improve the accuracy by up to 38% for a calibration volume of 300(H) mm × 250(W) mm × 500(D) mm.

3. FLEXIBLE CALIBRATION METHOD FOR MICROSCOPIC STRUCTURED LIGHT SYSTEM USING TELECENTRIC LENS

The previous chapter introduced our novel calibration methods for a structured light system with an out-of-focus projector, which successfully achieve high measurement accuracy for the superfast binary defocusing technology at the macro-scale level. This chapter further introduces our calibration development for accurate measurements at the medium-scale level, with a spatial span of several cm³. In particular, we developed a flexible calibration method for a telecentric lens assisted by a pinhole lens. The major content of this chapter was originally published in Optics Express [123] (also listed as journal article [8] in LIST OF PUBLICATIONS).

3.1 Introduction

With recent advances in precision manufacturing, there has been an increasing demand for efficient and accurate micro-level 3D metrology approaches. A structured light (SL) system with digital fringe projection technology is regarded as a potential solution for micro-scale 3D profilometry owing to its capability of high-speed, high-resolution measurement [124]. To migrate this 3D imaging technology to the micro-scale level, a variety of approaches have been pursued, either by modifying one channel of a stereo microscope with different projection technologies [80-84], or by using small field-of-view (FOV), non-telecentric lenses with long working distance (LWD) [85-89]. Apart from the technologies mentioned above, an alternative approach for microscopic 3D imaging is to use telecentric lenses because of their unique properties of orthographic projection, low distortion, and invariant magnification over a specific distance range [92].

However, the calibration of such an optical system is not straightforward, especially in the Z direction, since the telecentricity results in insensitivity to depth change along the optical axis. Zhu et al. [91] proposed to use a camera with a telecentric lens and a speckle projector to perform deformation measurement with digital image correlation (DIC); essentially, the Z direction in this system was calibrated using a translation stage and a simple polynomial fitting method. To improve the calibration flexibility and accuracy, Li and Tian [92] formulated the orthographic projection of a telecentric lens into an intrinsic and an extrinsic matrix, and successfully employed this model in an SL system with two telecentric lenses in their later research [90]. This technology showed the possibility of calibrating a telecentric SL system analogously to a regular pinhole SL system.

The aforementioned approaches for telecentric lens calibration have proven successful in achieving different accuracies. However, Zhu's approach [91] is based on a polynomial fitting method along the Z direction using a high-accuracy translation stage, and such a stage is expensive and usually difficult to set up for system calibration (e.g., the moving direction must be perfectly perpendicular to the Z axis). While Li's method [90, 92] increases the calibration flexibility and simplifies the calibration setup, it is difficult for such a method to achieve high accuracy for the extrinsic parameter calibration due to the strong constraint requirements (e.g., orthogonality of the rotation matrices). Moreover, the magnification ratio and the extrinsic parameters are calibrated separately, further complicating the calibration process and increasing the uncertainty of the modeling accuracy, since the magnification and the extrinsic parameters are naturally coupled and difficult to separate.

To address the aforementioned limitations of state-of-the-art system calibration, we propose to use an LWD pinhole lens to calibrate a telecentric lens. Namely, we developed a system that includes a camera with a telecentric lens and a projector with a small-FOV, LWD pinhole lens. Since the pinhole imaging model is well established and its calibration is well studied, we can use a calibrated pinhole projector to assist the calibration of a camera with a telecentric lens.

To the best of our knowledge, there has been no flexible and accurate approach to calibrate an SL system using a camera with a telecentric lens and a projector with a pinhole lens. In this research, we propose a novel framework to calibrate such a structured light system. The pinhole projector calibration follows the flexible, standard pinhole camera calibration procedures, enabled by making the projector capture images like a camera, a method developed by Zhang and Huang [125]. With the calibrated projector, the 3D coordinates of the feature points used for projector calibration are then estimated through iterative Levenberg-Marquardt optimization. The reconstructed 3D feature points are further used to calibrate the camera with the telecentric lens. Since the same set of points is used for both projector and camera calibration, the calibration process is quite fast; and because a standard flat board with circle patterns is used and posed flexibly for the whole system calibration, the proposed calibration approach for a structured light system using a telecentric camera lens is very flexible.

Section 3.2 introduces the principles of the telecentric and pinhole imaging systems. Section 3.3 illustrates the procedures of the proposed calibration framework. Section 3.4 demonstrates the experimental validation of the proposed calibration framework. Section 3.5 summarizes the contributions of this research.

3.2 Principle

The camera model with a telecentric lens is illustrated in Fig. 3.1. Basically, the telecentric lens simply performs a magnification in both the X and Y directions, while it is not sensitive to depth in the Z direction. By carrying out a ray transfer matrix analysis of such an optical system, the relationship between the camera coordinate system (o^c; x^c, y^c, z^c) and the image coordinate system (o_0^c; u^c, v^c) can be described as follows:

[u^c; v^c; 1] = [s^c_x, 0, 0; 0, s^c_y, 0; 0, 0, 1] [x^c; y^c; 1],  (3.1)

where s^c_x and s^c_y are, respectively, the magnification ratios in the X and Y directions.

Figure 3.1. Model of telecentric camera imaging.

The transformation from the world coordinate system (o^w; x^w, y^w, z^w) to the camera coordinate system can be formulated as

[x^c; y^c] = [r^c_11, r^c_12, r^c_13, t^c_1; r^c_21, r^c_22, r^c_23, t^c_2] [x^w; y^w; z^w; 1],  (3.2)

where r^c_ij and t^c_i denote, respectively, the rotation and translation parameters. Combining Eqs. (3.1) - (3.2), the projection from 3D object points to 2D camera image points can be formulated as

[u^c; v^c; 1] = [m^c_11, m^c_12, m^c_13, m^c_14; m^c_21, m^c_22, m^c_23, m^c_24; 0, 0, 0, 1] [x^w; y^w; z^w; 1].  (3.3)
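In other words, a telecentric camera maps world points through an affine 2 × 4 matrix with no perspective division. A minimal sketch of Eq. (3.3), together with a least-squares fit of the eight parameters m^c_ij from known 3D-2D correspondences (essentially Step 7 of the procedure in Section 3.3), is given below under assumed array layouts:

    import numpy as np

    def telecentric_project(Mc, Xw):
        """Project a 3D world point through the 2x4 telecentric matrix of Eq. (3.3)."""
        return Mc @ np.append(Xw, 1.0)   # returns (u_c, v_c); note: no division by depth

    def fit_telecentric(world_pts, image_pts):
        """Least-squares estimate of the eight parameters m_ij from 3D-2D correspondences.

        world_pts : N x 3 array of 3D target points
        image_pts : N x 2 array of the corresponding camera pixels
        """
        Xh = np.hstack([world_pts, np.ones((len(world_pts), 1))])  # N x 4 homogeneous points
        row_u, *_ = np.linalg.lstsq(Xh, image_pts[:, 0], rcond=None)
        row_v, *_ = np.linalg.lstsq(Xh, image_pts[:, 1], rcond=None)
        return np.vstack([row_u, row_v])  # 2 x 4 matrix Mc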

Figure 3.2. Model of pinhole projector imaging.

The projector follows the well-known pinhole imaging model illustrated in Fig. 3.2. The projection from 3D object points (o^w; x^w, y^w, z^w) to 2D projector sensor points (o_0^p; u^p, v^p) can be described as follows:

s^p [u^p; v^p; 1] = [α, γ, u_0; 0, β, v_0; 0, 0, 1] [R^p_{3×3}, t^p_{3×1}] [x^w; y^w; z^w; 1].  (3.4)

Here, s^p represents the scaling factor; α and β are, respectively, the effective focal lengths along the u and v directions. The effective focal length is defined as the distance from the pupil center to the imaging sensor plane. Since most software packages (e.g., the OpenCV camera calibration toolbox) give these two parameters in pixels, the actual effective focal lengths can be computed by multiplying by the pixel size in the u and v directions, respectively. (u_0, v_0) are the coordinates of the principal point, and R^p_{3×3} and t^p_{3×1} are the rotation and translation parameters.

3.3 Procedures

The calibration framework includes seven major steps:

Step 1: Image capture. Use a 9 × 9 circle board [see Fig. 3.3(a)] as the calibration target. Place the calibration target at different spatial orientations, and capture a set of images for each target pose, consisting of the projections of horizontal patterns, vertical patterns, and a pure white frame.

of images for each target pose, which is composed of the projections of horizontal patterns, vertical patterns, and a pure white frame.

Figure 3.3. Illustration of calibration process. (a) Calibration target; (b) camera image with circle centers; (c) captured image with horizontal pattern projection; (d) captured image with vertical pattern projection; (e) mapped circle center image for projector; (f) estimated 3D position of target points.

Step 2: Camera circle center determination. Pick the captured image with pure white (no pattern) projection, and extract the circle centers (u^c, v^c) as the feature points. An example is shown in Fig. 3.3(b).

Step 3: Absolute phase retrieval. To calibrate the projector, we need to generate a captured image for the projector, since the projector cannot capture images by itself. This is achieved by mapping a camera point to a projector point using absolute phase. To obtain the phase information, we use a least-square phase-

shifting algorithm with 9 steps (N = 9). The k-th projected fringe image can be expressed as follows:

\[ I_k(x, y) = I'(x, y) + I''(x, y) \cos(\phi + 2k\pi/N), \tag{3.5} \]

where I'(x, y) represents the average intensity, I''(x, y) the intensity modulation, and φ(x, y) the phase to be solved for,

\[ \phi(x, y) = \tan^{-1}\left[ \frac{\sum_{k=1}^{N} I_k \sin(2k\pi/N)}{\sum_{k=1}^{N} I_k \cos(2k\pi/N)} \right]. \tag{3.6} \]

This equation produces a wrapped phase map ranging within [−π, +π). Using a multi-frequency phase-shifting technique as described in [40], we can extract the absolute phase maps Φ_ha and Φ_va without 2π discontinuities respectively from the captured images with horizontal [see Fig. 3.3(c)] and vertical pattern projection [see Fig. 3.3(d)].

Step 4: Projector circle center determination. Using the absolute phases obtained from Step 3, the projector circle centers (u^p, v^p) can be uniquely determined from the camera circle centers obtained from Step 2:

\[ u^p = \Phi^c_{ha}(u^c, v^c) \times P_1/(2\pi), \tag{3.7} \]
\[ v^p = \Phi^c_{va}(u^c, v^c) \times P_2/(2\pi), \tag{3.8} \]

where P_1 and P_2 are respectively the fringe periods of the horizontal and vertical patterns used in Step 3 for absolute phase recovery, which are 18 and 36 pixels in our experiments. This step simply converts the absolute phase values into projector pixels.

Step 5: Projector intrinsic calibration. Using the projector circle centers extracted from the previous step, the projector intrinsic parameters (i.e., α, β, γ, u_0, v_0) can be estimated using the standard OpenCV camera calibration toolbox.

Step 6: Estimation of 3D target points. Align the world coordinate system (o^w; x^w, y^w, z^w) with the projector coordinate system (o^p; x^p, y^p, z^p); then the projector extrinsic matrix [R^p_{3×3}, t^p_{3×1}] becomes [I_{3×3}, 0_{3×1}], which is composed of a 3 × 3 identity matrix I_{3×3} and a 3 × 1 zero vector 0_{3×1}. To obtain the 3D world coordinates of target points, we

first define the target coordinate system (o^t; x^t, y^t, 0) by assuming its Z coordinate to be zero, and assign the upper-left circle center [point A in Fig. 3.3(a)] to be the origin. For each target pose, we estimate the transformation matrix [R^t, t^t] from the target coordinate system to the world coordinate system using the iterative Levenberg-Marquardt optimization method provided by the OpenCV toolbox. Essentially, this optimization approach iteratively minimizes the difference between the observed projections and the projected object points, which can be formulated as the following functional:

\[ \min_{R^t, t^t} \left\| \begin{bmatrix} u^p \\ v^p \\ 1 \end{bmatrix} - M^p [R^t, t^t] \begin{bmatrix} x^t \\ y^t \\ 0 \\ 1 \end{bmatrix} \right\|, \tag{3.9} \]

where ∥·∥ denotes the least-squares difference and M^p denotes the projection from the world coordinate system to the image coordinate system. After this step, the 3D coordinates of the target points (i.e., circle centers) can be obtained by applying this transformation to the points in the target coordinate system.

Step 7: Camera calibration. Once the 3D coordinates of the circle centers on each target pose are determined, the camera parameters m^c_{ij} can be solved in the least-squares sense using Eq. (3.3).

After calibrating both the camera and the projector, using Eq. (3.3) and Eq. (3.4), we can compute the 3D coordinates (x^w, y^w, z^w) of a real-world object based on the calibration.

3.4 Experiments

We have conducted several experiments to validate the accuracy of our calibration model. The test system includes a digital CCD camera (Imaging Source DMK 23U274) with a pixel resolution of , and a DLP projector (LightCrafter PRO4500) with a pixel resolution of . The telecentric lens used for the camera is an Opto-Engineering TC4MHR036-C with a magnification of . It has a

working distance of  mm and a field depth of 5 mm. The LWD lens used for the projector has a working distance of 700 mm and an FOV of 400 mm × 250 mm. Since both the camera and the projector lenses have a distortion ratio of less than 0.1%, we ignored the lens distortion of both for simplicity.

To validate the accuracy of our model, we examined the reprojection errors for both the camera and the projector, as shown in Fig. 3.4. The figure indicates that our model is sufficient to describe both the camera and the projector imaging, since the errors for both are mostly within ±5 µm. The root-mean-square (RMS) errors are respectively 1.8 µm and 1.2 µm. Since the camera calibration was based on a calibrated projector, one may notice that the projector calibration has slightly higher accuracy than the camera calibration. This could be a result of the coupling error from the mapping in addition to the optimization error, or because the camera has a smaller pixel size than the projector.

Figure 3.4. Reprojection error of the calibration approach. (a) Reprojection error for the camera (RMS: 1.8 µm); (b) reprojection error for the projector (RMS: 1.2 µm).

We first measured the lengths of the two diagonals AC and BD [see Fig. 3.3(a)] of the calibration target under 10 different orientations. The two diagonals are formed by the circle centers at the corners. The circle centers on this calibration target were

precisely manufactured with a distance d of  ±  mm between adjacent centers. Therefore, the actual lengths of AC and BD can be expressed as 8√2 d, or  mm. We reconstructed the 3D geometry for each target pose and then extracted the 3D coordinates of the four corner points (i.e., A, B, C, D). Finally, the Euclidean distances AC and BD were computed and compared with the actual value. The results are shown in Table 3.1, from which we can see that the measurement results are consistently accurate for different target poses. On average, the measurement error is around 9 µm, with the maximum being 16 µm. Considering the length of the measured diagonal, the percentage error is quite small (around 0.10%), which is even comparable to the manufacturing uncertainty. The major error sources could be the error introduced by circle center extraction or the bilinear interpolation of 3D coordinates.

Table 3.1. Measurement result of two diagonals on calibration board (in mm). Columns: Pose No., AC, Error, BD, Error; the last two rows give the actual value and the average.
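As a small worked example of the diagonal check just described, the following sketch (the function name is a hypothetical choice, and the actual values of d and the extracted centers are not reproduced here) computes the two diagonal lengths from the reconstructed corner centers and their deviation from the nominal value 8√2 d.

```python
import numpy as np

def diagonal_errors(A, B, C, D, d):
    """Diagonal lengths AC and BD from reconstructed 3D circle centers (length-3 arrays)
    and their deviations from the nominal length 8*sqrt(2)*d of a 9 x 9 circle board."""
    nominal = 8.0 * np.sqrt(2.0) * d
    ac = np.linalg.norm(np.asarray(A) - np.asarray(C))
    bd = np.linalg.norm(np.asarray(B) - np.asarray(D))
    return (ac, bd), (ac - nominal, bd - nominal)
```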

We then put this calibration target on a precision vertical translation stage (model: Newport M-MVN80, sensitivity: 50 nm) and translated it to different stage heights (spacing: 50 µm). We measured the 3D coordinates of the circle center point D [see Fig. 3.3(a)] at each stage position, denoted as D_i(x, y, z), where i denotes the i-th stage position. Then we computed the rigid translation t_i from D_1(x, y, z) to D_i(x, y, z). The magnitudes of t_i were then compared with the actual stage translations. The results are shown in Table 3.2. They indicate that our calibration is able to provide a quite accurate estimation of a rigid translation. On average, the error is around 1.7 µm. Considering the spacing of the stage translation (i.e., 50 µm), this error is quite small.

Table 3.2. Measurement result of a linearly translated calibration target point (in mm). Columns: Stage No., t_i, Actual, Error.

To further examine the measurement uncertainty, we measured the 3D geometry of a flat plane and compared it with an ideal plane obtained through least-squares fitting. Figure 3.5(a) shows the 2D color-coded error map, and Fig. 3.5(b) shows one of its cross sections. The root-mean-square (RMS) error for the measured plane is 4.5 µm, which is very small compared with the random noise level (approximately 20 µm). The major sources of error could be the roughness of the measured surface and/or the random noise of the camera. This result indicates that our calibration method can provide good accuracy for 3D geometry reconstruction.
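The flatness evaluation above amounts to a least-squares plane fit followed by an RMS residual computation; a minimal sketch is given below (the function name and the N x 3 point-array input are assumptions for illustration, not the code used in this work).

```python
import numpy as np

def plane_fit_rms(points):
    """Fit the ideal plane z = a*x + b*y + c to N x 3 measured points in the
    least-squares sense and return (a, b, c) and the RMS of the residuals."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    residuals = points[:, 2] - A @ coeffs
    return coeffs, np.sqrt(np.mean(residuals ** 2))
```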

Figure 3.5. Experimental result of measuring a flat plane. (a) 2D error map, with an RMS error of 4.5 µm; (b) a cross section of (a).

Figure 3.6. Experimental result of measuring complex surface geometry. (a) Picture of a ball grid array; (b) reconstructed 3D geometry; (c) a cross section of (b); (d) - (f) corresponding figures for a flat surface with octagon grooves.

To visually demonstrate the success of our calibration method, we measured two different types of objects with complex geometry. We first measured a ball grid array [Fig. 3.6(a)] and then measured a flat surface with octagon grooves [Fig. 3.6(d)]. The reconstructed 3D geometries and the corresponding cross sections are shown in Figs. 3.6(b) - (c) and Figs. 3.6(e) - (f). These results indicate that our calibration algorithm works well for different types of geometry (e.g., spheres, ramps, planes), which further confirms the success of our calibration framework. One may

notice that there is a small slope in the cross section shown in Fig. 3.6(c). This is because the bottom surface of the ball grid array sample is slightly tilted, deviating slightly from the Z plane.

3.5 Conclusion

In this research, we have presented a novel calibration method for a unique type of microscopic SL system, which comprises a camera with a telecentric lens and a regular pin-hole projector using an LWD lens with a small FOV. The proposed calibration approach is flexible since only a standard flat board with circle patterns is used and the calibration target is posed freely for the whole system calibration. The experimental results have demonstrated the success of our calibration framework by achieving very high measurement accuracy: approximately 10 µm within a calibration volume of 10(H) mm × 8(W) mm × 5(D) mm.

4. SINGLE-SHOT ABSOLUTE 3D SHAPE MEASUREMENT WITH FOURIER TRANSFORM PROFILOMETRY

Chapters 2-3 introduced our innovations in calibration techniques, which enable simultaneous superfast, high-accuracy measurements at different scales. Starting with this chapter, we introduce technologies that deal with problems associated with high-speed motion. This chapter introduces our development of a single-shot technique for absolute 3D recovery that takes advantage of the geometric constraints of a structured light system. This technology increases the sampling rate by reducing the number of fringes necessary for absolute 3D shape measurement, which can potentially benefit measurements with high-speed motion. The major content of this chapter was originally published in Applied Optics [126] (also listed as journal article [9] in LIST OF PUBLICATIONS).

4.1 Introduction

Three-dimensional (3D) shape measurement has become attractive to a variety of applications such as industrial quality control and biomedical imaging. Among existing 3D shape measurement technologies, phase-based approaches have the advantages of high resolution and noise resistance compared to intensity-based approaches. Within phase-based approaches, the well-known phase-shifting profilometry, which uses multiple phase-shifted patterns, is capable of measuring the 3D shape of an object with high quality [37], yet the requirement for projection of multiple patterns may introduce measurement errors when measuring dynamically deformable shapes [127]. Fourier transform profilometry (FTP) [128], which requires only a single fringe projection, has become a quite powerful tool for many applications [129, 130] such as vibration measurement of micro-mechanical devices [131] or instantaneous deforma-

tion analysis [132]. However, the retrieval of an absolute phase map within a single fringe image remains a nontrivial problem. For an accurate 3D shape measurement, the retrieval of an absolute phase map plays an important role in the 3D reconstruction process [133]. Most single-shot FTP frameworks use spatial phase unwrapping, in which the obtained phase maps are relative and do not work for spatially isolated objects. Conventional absolute phase recovering methods typically require the projection of additional images, such as an additional centerline image projection [113, 134], multi-wavelength fringe projection [ ], or phase coding methods [39, 138]. There are also research works that use embedded markers [139] or pattern codifications [140] in different color channels. All these aforementioned frameworks cannot be used to perform FTP fringe analysis within one single-shot 8-bit image. To address this issue, researchers have been attempting to embed marker points [133], marker strips [ ], or special markers [144] into sinusoidal fringe patterns, and the methods proposed in [141] and [142] incorporated the marker retrieval process with phase shifting, which is not applicable to the single-shot FTP method. Guo and Huang [133] embedded a cross-shaped marker into a single sinusoidal pattern and spatially unwrapped the phase from FTP by referring to the phase value of the marker point. Xiao et al. [144] embedded a special mark into the sinusoidal grating for a similar purpose. The spectra of the mark are perpendicular to those of the projected fringe and can be retrieved with a bandpass filter in the other direction. Recently, Budianto et al. [143] embedded several lines of strip markers within a single fringe pattern and improved the robustness of strip marker detection with dual-tree complex wavelet transformation. However, in general, for all marker-based approaches, a fundamental limitation is that they cannot recover the absolute phase for an isolated object if no encoded marker is on the object. Moreover, the phase quality could deteriorate in the areas covered by the markers.

In this research, we propose a computational framework that performs single-shot absolute phase recovery for the FTP method without any additional marker or color encoding. After phase wrapping from the single-shot FTP method, using

86 68 the geometric constraints of a digital fringe projection (DFP) system, we create an artificial absolute phase map Φ min and extract the final unwrapped absolute phase map through pixel-by-pixel reference to Φ min. Experiments demonstrate that our computational framework is capable of reconstructing absolute 3D geometry of both single objects or spatially isolated objects with the single-shot FTP method. Section 4.2 introduces the principles of FTP and our proposed framework of absolute phase retrieval. Section 4.3 presents some experimental validations to demonstrate the success of our proposed method. Section 4.4 discusses the merits and limitations of the proposed method, and finally Section 4.5 summarizes this research. 4.2 Principles This section introduces the relevant principles of this research. In particular, we will explain the theoretical background of FTP, the imaging model of a DFP system, and our proposed absolute phase retrieval framework using geometric constraints Fourier Transform Profilometry The basic principle of the FTP method can be illustrated as follows: theoretically, a typical fringe pattern can be described as I(x, y) = I (x, y) + I (x, y) cos[φ(x, y)], (4.1) where I (x, y) represents the average intensity, I (x, y) denotes the intensity modulation, and φ(x, y) is the phase to be found. Using Euler s formula, Eq. (4.1) can be re-written as I(x, y) = I (x, y) + I (x, y) 2 [ e jφ(x,y) + e jφ(x,y)]. (4.2) After applying a bandpass filter, which only preserves one of the frequency components, we will have the final fringe expressed as I f (x, y) = I (x, y) e jφ(x,y). (4.3) 2

In practice, we can use different kinds of windows, such as a smoothed circular window [see Fig. 4.1(a)] or a Hanning window [145] [see Fig. 4.1(b)], as bandpass filters. After bandpass filtering, we can calculate the phase by

\[ \phi(x, y) = \tan^{-1}\left\{ \frac{\mathrm{Im}[I_f(x, y)]}{\mathrm{Re}[I_f(x, y)]} \right\}. \tag{4.4} \]

Here Im[I_f(x, y)] and Re[I_f(x, y)], respectively, represent the imaginary and the real parts of the filtered fringe I_f(x, y). With this FTP approach, the wrapped phase with 2π discontinuities can be extracted from a single-shot fringe image.

Figure 4.1. Different band-pass filters used for FTP. (a) A smoothed circular window; (b) a Hanning window.

Apart from single-shot FTP, there is also a modified FTP method [146] that uses two fringe patterns to retrieve a phase map. One approach for performing double-shot phase retrieval is by using inverted patterns

\[ I_1 = I'(x, y) + I''(x, y) \cos[\phi(x, y)], \tag{4.5} \]
\[ I_2 = I'(x, y) - I''(x, y) \cos[\phi(x, y)]. \tag{4.6} \]

By subtracting the two images, we can obtain

\[ \bar{I} = (I_1 - I_2)/2 = I''(x, y) \cos[\phi(x, y)]. \tag{4.7} \]

In this way, the effect of the DC component can be significantly suppressed, which helps improve the phase quality, yet also sacrifices the measurement speed by introducing another fringe image.
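For concreteness, a minimal NumPy sketch of the single-shot FTP phase extraction of Eqs. (4.1)-(4.4) follows. The raised-cosine circular window and the manually supplied carrier-lobe location (center, radius) are illustrative assumptions; they stand in for whatever smoothed circular window is actually used in this work.

```python
import numpy as np

def ftp_wrapped_phase(fringe, center, radius):
    """Wrapped phase from a single fringe image: 2D FFT, keep one carrier lobe with a
    smoothed circular window, inverse FFT (Eq. 4.3), then the arctangent of Eq. (4.4).
    'center' is the (row, col) of the carrier peak in the shifted spectrum."""
    spectrum = np.fft.fftshift(np.fft.fft2(fringe.astype(float)))
    rows, cols = fringe.shape
    v, u = np.mgrid[0:rows, 0:cols]
    dist = np.hypot(v - center[0], u - center[1])
    window = 0.5 * (1.0 + np.cos(np.pi * np.clip(dist / radius, 0.0, 1.0)))
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * window))  # complex I_f(x, y)
    return np.arctan2(filtered.imag, filtered.real)               # wrapped phase in (-pi, pi]
```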

4.2.2 DFP System Model

In this research, we adopt the well-known pinhole model to formulate the imaging lenses of a DFP system. In this model, the projection from the 3D world coordinates (x^w, y^w, z^w) to the 2D imaging coordinates (u, v) can be formulated as the following equation:

\[ s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_u & \gamma & u_0 \\ 0 & f_v & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{bmatrix} \begin{bmatrix} x^w \\ y^w \\ z^w \\ 1 \end{bmatrix}. \tag{4.8} \]

Here, s is the scaling factor; f_u and f_v are, respectively, the effective focal lengths of the imaging device along the u and v directions; γ is the skew factor of the two axes; r_{ij} and t_i, respectively, represent the rotation and translation parameters; and (u_0, v_0) is the principal point. The model described in Eq. (4.8) can be further simplified by

\[ P = \begin{bmatrix} f_u & \gamma & u_0 \\ 0 & f_v & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{bmatrix} \tag{4.9} \]

\[ = \begin{bmatrix} p_{11} & p_{12} & p_{13} & p_{14} \\ p_{21} & p_{22} & p_{23} & p_{24} \\ p_{31} & p_{32} & p_{33} & p_{34} \end{bmatrix}, \tag{4.10} \]

where the projection matrix P can be estimated using well-developed camera calibration toolboxes. In reality, the projector shares the same imaging model as the camera, with the optics mutually inverted. If the camera and projector are calibrated under the same world coordinate system (x^w, y^w, z^w), we can obtain two sets of equations for the DFP system,

\[ s^c \begin{bmatrix} u^c & v^c & 1 \end{bmatrix}^t = P^c \begin{bmatrix} x^w & y^w & z^w & 1 \end{bmatrix}^t, \tag{4.11} \]
\[ s^p \begin{bmatrix} u^p & v^p & 1 \end{bmatrix}^t = P^p \begin{bmatrix} x^w & y^w & z^w & 1 \end{bmatrix}^t. \tag{4.12} \]

Here, superscript p indicates the projector; superscript c indicates the camera; and superscript t represents the matrix transpose. In this DFP model, Eqs. (4.11) and (4.12) provide six equations with seven unknowns (s^c, s^p, x^w, y^w, z^w, u^p, v^p). To reconstruct the 3D geometry (x^w, y^w, z^w), we need one more equation, which can be obtained from the linear relationship between the absolute phase Φ and a projector pixel line u^p:

\[ u^p = \Phi \times T/(2\pi). \tag{4.13} \]

However, as described in Section 4.2.1, the phase extracted from FTP is wrapped with 2π discontinuities, and the absolute phase retrieval becomes challenging if we do not refer to any additional fringes or embedded markers. In the next section, we will introduce our proposed framework that retrieves the absolute phase within one single fringe without using any embedded markers.

4.2.3 Absolute Phase Retrieval Using Geometric Constraints

In theory, if we know the z^w value from prior knowledge, we can then reduce one unknown from the seven unknowns for the six equations in Eqs. (4.11) and (4.12); reducing one unknown means that all (u^p, v^p) values can be uniquely determined given (u^c, v^c), and the absolute phase value can be computed by referring to Eq. (4.13). In other words, for a given z^w = z_min, one can create an artificial absolute phase map on the camera image. Furthermore, if z^w = z_min coincides with the closest depth plane of the measured volume, one can retrieve an absolute phase map Φ_min, which is defined here as a minimum phase map:

\[ \Phi_{min}(u^c, v^c) = f(z_{min}, T, P^c, P^p). \tag{4.14} \]

As one can see, Φ_min is a function of z_min, the fringe width T, and the projection matrices P^c and P^p. Since this minimum phase map Φ_min(u^c, v^c) is constructed in camera pixel coordinates, the absolute phase retrieval can be performed by pixel-to-pixel reference to Φ_min(u^c, v^c).
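A minimal sketch of how such a Φ_min map can be generated is given below; it evaluates, pixel by pixel, the closed-form relations that are derived next in Eqs. (4.15)-(4.19). The plain double loop, the 3 × 4 NumPy arrays Pc and Pp, and the function name are illustrative assumptions (a practical implementation would vectorize this), not the exact code used in this work.

```python
import numpy as np

def minimum_phase_map(Pc, Pp, z_min, T, width, height):
    """Artificial minimum phase map Phi_min(u^c, v^c) of Eq. (4.14) for a height x width
    camera, given 3 x 4 projection matrices Pc, Pp, the nearest depth z_min, and period T."""
    phi_min = np.zeros((height, width))
    for vc in range(height):
        for uc in range(width):
            # Solve Eqs. (4.11)-(4.12) for (x_w, y_w) at z_w = z_min [Eqs. (4.15)-(4.17)].
            A = np.array([[Pc[2, 0]*uc - Pc[0, 0], Pc[2, 1]*uc - Pc[0, 1]],
                          [Pc[2, 0]*vc - Pc[1, 0], Pc[2, 1]*vc - Pc[1, 1]]])
            b = np.array([Pc[0, 3] - Pc[2, 3]*uc - (Pc[2, 2]*uc - Pc[0, 2])*z_min,
                          Pc[1, 3] - Pc[2, 3]*vc - (Pc[2, 2]*vc - Pc[1, 2])*z_min])
            xw, yw = np.linalg.solve(A, b)
            # Project (x_w, y_w, z_min) into the projector [Eq. (4.18)] and
            # convert the projector column u^p to phase [Eq. (4.19)].
            s_up = Pp @ np.array([xw, yw, z_min, 1.0])
            up = s_up[0] / s_up[2]
            phi_min[vc, uc] = up * 2.0 * np.pi / T
    return phi_min
```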

The key to generating an accurate minimum phase map Φ_min lies in a good estimation of z_min. Figure 4.2 illustrates the schematic diagram of a DFP system. In this research, we match the camera lens coordinate system with the world coordinate system. From Fig. 4.2, one can see that the z_min plane has the minimum z^w value from the camera perspective. In practice, one can determine the z_min of interest by a variety of means, one of which is to use more fringe patterns and measure a stationary object (e.g., a plane). Once we calibrated the DFP system, we obtained all matrix parameters in P^c and P^p.

Figure 4.2. A schematic diagram of a DFP system and the z_min plane.

Given z_min, we can solve for the corresponding x^w and y^w for each camera pixel (u^c, v^c) by simultaneously solving Eqs. (4.11) and (4.12),

\[ \begin{bmatrix} x^w \\ y^w \end{bmatrix} = A^{-1} b, \tag{4.15} \]

where

\[ A = \begin{bmatrix} p^c_{31} u^c - p^c_{11} & p^c_{32} u^c - p^c_{12} \\ p^c_{31} v^c - p^c_{21} & p^c_{32} v^c - p^c_{22} \end{bmatrix}, \tag{4.16} \]

\[ b = \begin{bmatrix} p^c_{14} - p^c_{34} u^c - (p^c_{33} u^c - p^c_{13}) z_{min} \\ p^c_{24} - p^c_{34} v^c - (p^c_{33} v^c - p^c_{23}) z_{min} \end{bmatrix}. \tag{4.17} \]

Here p^c_{ij} denotes the matrix parameter of P^c in the i-th row and j-th column. Once (x^w, y^w) are determined, we can calculate the corresponding (u^p, v^p) for each camera pixel by solving Eq. (4.12) again as

\[ s^p \begin{bmatrix} u^p & v^p & 1 \end{bmatrix}^t = P^p \begin{bmatrix} x^w & y^w & z_{min} & 1 \end{bmatrix}^t. \tag{4.18} \]

Assuming the projected fringe patterns are along the v^p direction, the artificial phase Φ_min for (u^c, v^c) can be defined as

\[ \Phi_{min}(u^c, v^c) = u^p \times 2\pi/T, \tag{4.19} \]

where T is the fringe period in pixels used for 3D shape measurement, and the phase is defined as starting at 0 rad when u^p = 0.

Figure 4.3 illustrates the phase-unwrapping scheme using the minimum phase Φ_min, in which Fig. 4.3(a) shows the phase map extracted directly from the FTP method with 2π discontinuities. Figure 4.3(b) shows the continuous minimum phase map Φ_min on the projector space. Figure 4.3(c) shows the cross sections of the phase maps. Assume that the region inside of the dashed red bounding boxes is what the camera captures at z = z_min and that the captured region is shifted to the area inside of the solid blue bounding boxes when z > z_min. Under both circumstances, we unwrap the phase map by adding 2π to the wrapped phase map where the phase values are below the corresponding points on Φ_min. In a more general case where we have more fringe periods in the captured camera image, as shown in Figs. 4.4(a) and 4.4(b), we will add different integer K multiples of 2π to remove the discontinuities depending on the difference between the wrapped

Figure 4.3. Illustration of generating a continuous phase map assisted by the minimum phase map obtained from geometric constraints (Reprinted with permission from [147], Optical Society of America). (a) Phase maps on the camera space at different depth z (within the red dashed window is at z_min; within the solid blue window is at z > z_min); (b) corresponding phase maps Φ_min and Φ on the projector space; (c) cross sections of the original phase maps with 2π discontinuities, and continuous phase maps Φ_min and Φ.

Figure 4.4. Fringe order K determination for different pattern periods, with examples of having (a) three and (b) four pattern periods (Reprinted with permission from [147], Optical Society of America).

phase φ and the minimum phase Φ_min, and the fringe order K is determined by the following equations:

\[ 2\pi \times (K - 1) < \Phi_{min} - \phi < 2\pi \times K, \tag{4.20} \]

or

\[ K(x, y) = \mathrm{ceil}\left[ \frac{\Phi_{min} - \phi}{2\pi} \right]. \tag{4.21} \]

Here, ceil[ ] is the ceiling operator that returns the closest upper integer value. Through this unwrapping framework, we can obtain the absolute phase map Φ without 2π discontinuities and without any additional encodings or embedded markers. By using the linear phase constraint in Eq. (4.13) and the DFP system equations in Eqs. (4.11) and (4.12), we can realize absolute 3D shape measurement within a single-shot fringe image through the FTP method.

4.3 Experiment

To test the performance of our single-shot absolute 3D shape measurement framework with the FTP method, we developed a DFP system in which a CCD camera (The Imaging Source DMK 23U618) is used as the image capturing device with a 2/3-inch imaging lens (Computar M0814-MP2) with a focal length of 8 mm and an aperture of f/1.4. A digital light processing (DLP) projector (Dell M115HD) is used as the projection device, whose lens has a focal length of  mm with an aperture of f/2.0. Its projection distance ranges from 0.97 to 2.58 m. The resolutions of the camera and the projector are, respectively,  pixels and  pixels. We calibrated the system using the method discussed in Ref. [93] and chose the world coordinate system to be aligned with the camera lens coordinate system.

We first tested our proposed framework by measuring a sculpture, as shown in Fig. 4.5(a). Figure 4.5(b) shows the single-shot fringe image that we captured, and Fig. 4.5(c) shows the wrapped phase map that we obtained using the FTP method. Figure 4.5(d) demonstrates the minimum phase map Φ_min that we generated at the closest depth plane z_min = 960 mm of the measurement volume. Figures 4.5(e) and 4.5(f) show the unwrapped phase map and the reconstructed 3D geometry, respectively, from the calibrated DFP system, from which we can see that our proposed framework can successfully reconstruct the 3D geometry from a single-shot fringe

image using FTP. The bandpass filter that we used to suppress unwanted spectral components is a smoothed circular window, as shown in Fig. 4.1(a). Since we did not involve any additional color or marker encoding, it addresses the phase-unwrapping challenge of conventional single-shot FTP as stated in Section 4.1.

Figure 4.5. Illustration of the unwrapping procedure with a real object measurement. (a) The original picture of the measured object; (b) captured single-shot fringe image; (c) wrapped phase map obtained from single-shot FTP; (d) minimum phase map Φ_min; (e) unwrapped phase map; (f) reconstructed 3D geometry.

Figure 4.6. 3D measurement results of a sculpture. (a) Using the standard phase-shifting method plus simple binary coding; (b) using the single-shot FTP method with a smoothed circular band-pass filter; (c) using the modified FTP method with a smoothed circular band-pass filter; (d) - (e) corresponding results of (b) - (c) using a Hanning-window band-pass filter.

To verify that our proposed framework indeed produces absolute 3D geometry, we also measured the same object using a standard three-step phase-shifting approach with a binary-coded temporal phase-unwrapping method [38]. The result is shown in Fig. 4.6(a), which overall agrees well with the 3D result from single-shot FTP with our proposed framework, as shown in Fig. 4.6(b). Their only difference is that the FTP method has a higher noise level, a fundamental limitation of the FTP method itself that is caused by the remainder of the DC component (see Section 4.2.1) after the bandpass filter. To suppress the effect of the DC component, we also implemented our framework with the modified FTP method [146], as mentioned in Section 4.2.1. The corresponding 3D result is shown in Fig. 4.6(c), which demonstrates that our proposed framework also works well with the modified FTP method, and the noise level is significantly reduced. However, these benefits come at the cost of reducing the measurement speed. We then took a cross section around the center of the nose from the three different 3D results and plotted them all in Fig. 4.7. To better examine their differences, we took the result from the standard phase-shifting approach as the

reference and subtracted it from the results obtained from both the FTP and modified FTP methods. The corresponding difference curves for the FTP and modified FTP methods are, respectively, shown in Figs. 4.8(a) and 4.8(b), where we can see that the overall 3D geometries of both methods indeed match very well with the standard phase shifting with binary coding approach, and the mean difference is almost zero, as expected. Compared with the modified FTP method, the phase obtained from the single-shot fringe pattern has a larger difference because the phase error estimated from a single fringe pattern is larger than that estimated from the modified FTP method, which is anticipated. We also tried a Hanning window [145] as a different band-pass filter [see Fig. 4.1(b)] for both the single-shot and modified FTP methods, and the results are shown in Figs. 4.6(d) and 4.6(e), respectively. From these figures, we can see that different bandpass filters will not affect the overall reconstructed geometry of the measured object, but only produce different high-frequency noise levels.

Figure 4.7. Cross sections of the 3D results corresponding to Fig. 4.6(a), Fig. 4.6(b) and Fig. 4.6(c).

To demonstrate that our proposed framework, as opposed to a spatial phase-unwrapping framework, also works for spatially isolated objects, we then put a spherical object beside the sculpture and performed 3D shape measurements with all the same methods (i.e., phase-shifting, single-shot FTP, and modified FTP methods) used in the previous experiment. The measurement results are shown in Fig. 4.9, where

Figure 4.8. Difference in geometries. (a) Between FTP and phase-shifting (mean: 0.16 mm, RMS: 1.48 mm); (b) between modified FTP and phase-shifting (mean: 0.20 mm, RMS: 0.64 mm).

Fig. 4.9(a) shows the captured fringe image, and Figs. 4.9(b) - 4.9(f) show the reconstructed 3D geometries using the same methods used to produce the 3D results shown in Figs. 4.6(a) - 4.6(e). From the 3D results, we can see that our proposed framework produces similar measurement qualities as the previous experiment for isolated objects, i.e., the overall profiles obtained from both the single-shot and modified FTP methods agree well with the one obtained from the standard three-step phase-shifting method. This experiment confirms that our proposed framework has the capability of measuring the 3D shape of spatially isolated objects within one single-shot fringe image.

4.4 Discussion

This proposed absolute phase-recovery framework has the following advantages compared to marker-based absolute phase-recovery techniques:

Single-shot, absolute pixel-by-pixel 3D recovery. Using an artificial absolute phase map generated in camera image space, our proposed framework is capable of recovering absolute 3D geometries pixel-by-pixel within one single-shot, 8-bit grayscale fringe image, which is not possible, to the best of our knowledge, with

Figure 4.9. 3D measurement results of two objects. (a) Captured fringe image; (b) using the standard phase-shifting method plus simple binary coding; (c) using the single-shot FTP method with a smoothed circular band-pass filter; (d) using the modified FTP method with a smoothed circular band-pass filter; (e) - (f) corresponding results of (c) - (d) using a Hanning-window band-pass filter.

any existing technologies, making it valuable to absolute 3D shape measurement for extremely high-speed motion capture.

Simultaneous multiple-object measurement. Since phase unwrapping is pixel-by-pixel, as demonstrated in Fig. 4.9, the proposed method can be used to measure multiple objects at the exact same time, which is extremely difficult to do for conventional Fourier transform methods.

Robustness in fringe order determination. The determination of the fringe order K is crucial for absolute phase retrieval, yet the success of marker-based approaches relies on the detection of markers that encode the fringe order, which could be problematic when those markers are not clear on a captured fringe image. In contrast, we use an artificially generated absolute phase map Φ_min for temporal phase unwrapping. Since Φ_min is ideal and not dependent on the captured fringe quality (besides the inherent camera sensor noise), the robustness of fringe order determination is greatly improved.

3D reconstruction of complicated scenes. As one may notice, the major prerequisite for the success of our proposed approach lies in the assumption that all sampled points of the entire scene should not cause more than 2π phase difference from z_min, regardless of the complexity (e.g., number of objects, object isolation, and hidden surfaces) of the measured scenes or the existence of abrupt depth changes, as long as the corresponding phase jumps are less than 2π.

However, this proposed framework is not trouble free. The major limitations are:

Confined measurement depth range. As mentioned previously, the maximum measurement depth range that our proposed approach can handle is within 2π in the phase domain from one pixel to its neighboring pixels. Therefore, in cases where there are abrupt jumps that introduce more than 2π phase changes from one pixel to the next, our proposed framework could produce an incorrectly unwrapped phase. However, the limit of the depth change depends upon the angle between the projector and the camera; the smaller the angle between the projector and the camera, the larger the actual abrupt depth change the proposed method can handle. For instance, in our experiment, we used a fringe period of 18 pixels and an angle of 6° between the camera and projector optical axes; the effective depth range is approximately 25% of the lateral sensing range, which results in a fairly good measurement depth volume. In the meantime, we are still working

on extending the effective sensing depth range with a given hardware setup through a software approach.

Inherent FTP limitations. Our proposed framework does not contribute to improving the phase quality of FTP itself, and thus some common problems of the FTP approach remain in our proposed technique. For example, complex surface geometry variations and rich surface texture deteriorate the phase quality and thus reduce measurement accuracy, since the carrier phase cannot be accurately retrieved. However, those methods developed to improve the phase quality of FTP can also be adopted to improve the measurement quality of the proposed method. As demonstrated in our research, the modified FTP method using two fringe patterns can substantially improve measurement quality. One can also adopt the windowed Fourier transform (WFT) method [148, 149] to improve the robustness of FTP to noise and enhance phase quality.

4.5 Conclusion

In this research, we developed a computational framework that only requires one single grayscale fringe pattern for absolute 3D shape reconstruction. This framework performs phase unwrapping by pixel-to-pixel reference to the minimum phase map that is generated at the closest depth plane of the measured volume. Because our framework does not involve any additional encodings or embedded markers, it overcomes the current phase-unwrapping limitation of the single-shot FTP method. Our experiments have demonstrated the success of our proposed framework by measuring both a single object and spatially isolated objects.

5. MOTION INDUCED ERROR REDUCTION COMBINING FOURIER TRANSFORM PROFILOMETRY WITH PHASE-SHIFTING PROFILOMETRY

The previous chapter presented one of the software approaches we developed to deal with measurements involving high-speed motion. This chapter introduces another software framework we developed, which aims at reducing motion-induced errors and artifacts. Essentially, we developed a software framework that hybridizes Fourier transform profilometry with phase-shifting profilometry. Via this method, the merits of both techniques can be taken advantage of to alleviate motion-induced problems. The major content of this chapter was originally published in Optics Express [150] (also listed as journal article [10] in LIST OF PUBLICATIONS).

5.1 Introduction

The rapidly evolving three-dimensional (3D) shape measurement technologies have enjoyed a wide range of applications ranging from industrial inspection to biomedical science. The non-contact structured light technology has been increasingly appealing to researchers in many different fields due to its flexibility and accuracy [41]. Yet a crucial challenge for this field of research is to perform accurate 3D shape measurement of dynamically moving or deformable objects, which typically introduces measurement errors caused by object motion.

To alleviate the measurement errors induced by object motion, it is desirable to reduce the number of fringe images required to reconstruct 3D geometry. The approaches that minimize the number of projection patterns include single-shot Fourier transform profilometry (FTP) [128], the FTP approach of [151], the π-shift FTP approach [146], the phase-shifting approach of [152], and three-step phase-shifting profilometry (PSP) [36, 153]. The approach that requires the least number of projection

patterns is the standard FTP approach, which extracts phase information within a single-shot fringe image. This property of the FTP approach is extremely advantageous when rapid motion is present in the measured scene. Most single-shot FTP approaches adopt spatial phase unwrapping, which detects 2π discontinuities solely from the wrapped phase map itself and removes them by adding or subtracting integer k(x, y) multiples of 2π. This integer number k(x, y) is often called the fringe order. However, a fundamental limitation of spatial phase unwrapping is that the obtained unwrapped phase map is relative. This is simply because the phase value on a spatially unwrapped phase map is dependent on the phase value of the starting point within a connected component. As a result, it cannot handle scenes with spatially isolated objects. Although researchers have come up with approaches that embed markers into the projected single pattern [133, 143, 144], the absolute phase retrieval could be problematic if the embedded markers are not clear on an isolated object.

Temporal phase unwrapping, which obtains insights for the fringe order k(x, y) by acquiring additional information, has the advantage of robust absolute phase recovery, especially for static scenes. Some widely adopted techniques include multi-frequency (or -wavelength) phase-shifting techniques [40, , 154], binary [38] or Gray [39] stripe coding strategies, and phase coding strategies [138, 155, 156]. These approaches commonly require many additional fringes (e.g., typically more than three) to determine the fringe order for absolute phase retrieval, which is undesirable for dynamic scene measurements. To address this limitation of temporal phase unwrapping, Zuo et al. [157] proposed a four-pattern strategy to further reduce the total number of patterns; Wang et al. [127] combined spatial and temporal phase unwrapping within a phase-shifting plus Gray-coding framework. These approaches can function well in the majority of cases, especially when the object movement is not rapid. However, Zuo's approach [157] still requires the imaged scene to remain stationary within the four consecutively captured frames, and Wang's approach [127] requires the measured objects to remain stationary within the first three phase-shifted fringes of the entire fringe sequence,

which are not valid assumptions if the object motion is extremely rapid. Cong et al. [158] proposed a Fourier-assisted PSP approach which corrects the phase shift error caused by motion with the assistance of the FTP approach, yet in this particular research, marker points are used to detect the fringe order, which could encounter similar problems as all the marker-based approaches mentioned above. Recently, our research group [147, 159] proposed to obtain the absolute phase map with the assistance of geometric constraints. Hyun and Zhang [159] proposed an enhanced two-frequency method to reduce the noise and improve the robustness of conventional two-frequency phase unwrapping, yet it still uses six images as required by the conventional two-frequency method, where speed is still a major concern. An et al. [147] introduced an absolute phase recovery framework that solely uses geometric constraints to perform phase unwrapping. This method does not require capturing additional images to determine the fringe order, and was later combined with FTP to perform single-shot absolute phase recovery [126]. However, this single-shot approach cannot handle object depth variations that exceed 2π in the phase domain [147], meaning that the measurement depth range could be quite constrained since the FTP method typically requires using a high-frequency pattern.

Apart from reducing motion-induced errors by modifying the phase computational frameworks, researchers are also seeking alternative solutions by adding more hardware. The approaches that use more than one camera have been proven successful by several reported research works [ ]. The fundamental mechanism behind this type of approach lies in the fact that any motion-induced measurement error will simultaneously appear in different cameras, and thus the correspondence detection won't be affected. However, the cost of adding another camera could be expensive, especially when high measurement quality is required. Moreover, only the sampled area that is viewed by all imaging sensors (i.e., two cameras, one projector) can be reconstructed.

In this research, we propose a hybrid computational framework for motion-induced error reduction. Our proposed approach uses a total of 4 patterns to conduct absolute

3D shape measurement. First of all, we perform single-shot phase extraction using FTP with a high-frequency pattern projection. Then, we identify each isolated object and obtain a continuous relative phase map for each object through spatial phase unwrapping. To determine the rigid shift from the relative to the absolute phase map, we use low-frequency three-step phase-shifted patterns to find extra information: essentially, we use low-frequency three-step phase-shifted patterns plus geometric constraints to produce an absolute phase map, yet phase errors caused by motion between the three frames are inevitable. However, we can obtain insights by finding the most common integer fringe order shift k_s from this PSP-extracted phase map to the FTP continuous relative phase map. Our proposed method combines spatial and temporal phase unwrapping, in which we use spatial phase unwrapping to reduce motion-induced errors, and temporal phase unwrapping to obtain the absolute phase map. Our proposed method does not involve any additional hardware for motion-induced error reduction. Experiments have demonstrated the success of our proposed computational framework for measuring multiple spatially isolated objects with rapid motion.

Section 5.2 introduces the relevant theoretical background and the framework of our proposed research; Section 5.3 illustrates the experimental validations of our proposed research; Section 5.4 discusses the strengths and limitations of our proposed computational framework; and Section 5.5 summarizes our proposed research.

5.2 Principle

In this section, we will introduce the relevant theoretical foundations of this proposed framework, which include the principles of FTP, PSP, phase unwrapping with geometric constraints, the motion-induced error in the PSP method to be addressed in this research, as well as our proposed hybrid absolute phase computational framework.

5.2.1 Fourier Transform Profilometry (FTP)

The basic principles of the FTP approach can be expressed as follows. In theory, a typical fringe image can be represented as

\[ I(x, y) = I'(x, y) + I''(x, y) \cos[\phi(x, y)], \tag{5.1} \]

where I'(x, y) denotes the average intensity, I''(x, y) stands for the intensity modulation, and φ(x, y) is the phase information to be extracted. According to the well-known Euler's formula, Eq. (5.1) can be re-formulated as

\[ I(x, y) = I'(x, y) + \frac{I''(x, y)}{2} \left[ e^{j\phi(x, y)} + e^{-j\phi(x, y)} \right]. \tag{5.2} \]

A band-pass filter that preserves only one of the conjugate frequency components can be applied to produce the final image, which can be expressed as

\[ I_f(x, y) = \frac{I''(x, y)}{2} e^{j\phi(x, y)}. \tag{5.3} \]

After band-pass filtering, the phase can be extracted by

\[ \phi(x, y) = \tan^{-1}\left\{ \frac{\mathrm{Im}[I_f(x, y)]}{\mathrm{Re}[I_f(x, y)]} \right\}, \tag{5.4} \]

where Re[I_f(x, y)] and Im[I_f(x, y)] respectively represent the real and the imaginary parts of the final image I_f(x, y). Consequently, Eq. (5.4) produces a wrapped phase map with 2π discontinuities. To obtain a continuous phase map without 2π jumps, a spatial or temporal phase unwrapping approach can be applied. In general, the key for phase unwrapping is to determine the integer fringe order k(x, y) for each pixel, which removes the 2π discontinuities. The relationship between a wrapped phase map and an unwrapped phase map can be expressed as

\[ \Phi(x, y) = \phi(x, y) + 2\pi \times k(x, y). \tag{5.5} \]

5.2.2 Phase Shifting Profilometry (PSP)

The PSP method, different from the single-shot FTP method, uses a set of phase-shifted fringe images for phase computation. For a three-step phase-shifting approach that

requires the least number of phase-shifting steps, the fringe images used can be described as

\[ I_1(x, y) = I'(x, y) + I''(x, y) \cos[\phi(x, y) - 2\pi/3], \tag{5.6} \]
\[ I_2(x, y) = I'(x, y) + I''(x, y) \cos[\phi(x, y)], \tag{5.7} \]
\[ I_3(x, y) = I'(x, y) + I''(x, y) \cos[\phi(x, y) + 2\pi/3]. \tag{5.8} \]

With the three phase-shifted fringe images, the phase φ(x, y) can be extracted by simultaneously solving Eqs. (5.6) - (5.8):

\[ \phi(x, y) = \tan^{-1}\left[ \frac{\sqrt{3}\,(I_1 - I_3)}{2 I_2 - I_1 - I_3} \right]. \tag{5.9} \]

Again, the phase φ(x, y) obtained here has 2π discontinuities. Similarly, we can adopt a spatial or temporal phase unwrapping framework to obtain the unwrapped phase map.

5.2.3 Phase Unwrapping Using Geometric Constraint

As recently proposed by An et al. [147], one of the methods that removes 2π discontinuities on a wrapped phase map is by using geometric constraints. Figure 5.1 illustrates the fundamental principle of this type of method. Suppose the region that the camera captures on the CCD sensor is a flat plane located at z^w = z_min, which is the closest measurement depth plane of interest; the same region can be mapped to the projector DMD sensor, which creates a pixelated artificial absolute phase map Φ_min. This generated phase map Φ_min can be used to locate 2π discontinuities on a wrapped phase map. The detailed procedures of z_min determination and Φ_min generation can be found in [147].

Figure 5.2 shows the conceptual idea of phase unwrapping using the artificial absolute phase map Φ_min. Suppose when z^w = z_min, a camera captures the region shown inside the red dashed window on the projector [see Fig. 5.2(a)], in which the wrapped phase φ_1 has 2π discontinuities. The corresponding unwrapped phase Φ_min is shown in the

Figure 5.1. Illustration of the geometric mapping between the camera image region and the corresponding region on the projector sensor if a virtual plane is positioned at z_min (Reprinted with permission from [147], Optical Society of America).

red dashed box in Fig. 5.2(b). Figure 5.2(c) shows the cross sections of both phase maps, which depict that 2π should be added to the wrapped phase when the phase φ_1 is below Φ_min. The same idea also applies when the phase φ is captured at z > z_min, as illustrated in the solid blue window, where 2π is added to the wrapped phase φ if it is below Φ_min. Now we consider a more general case, as shown in Fig. 5.2(d), where the captured camera image contains more fringe periods; a different fringe order k(x, y) should be added to the wrapped phase φ depending on its difference with Φ_min. The fringe order k(x, y) can be determined as follows:

\[ 2\pi \times (k - 1) < \Phi_{min} - \phi < 2\pi \times k, \tag{5.10} \]

or equivalently:

\[ k = \mathrm{ceil}\left[ \frac{\Phi_{min} - \phi}{2\pi} \right], \tag{5.11} \]

where the ceil[ ] operator returns the closest upper integer value.
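Given such a Φ_min map, the pixel-wise unwrapping of Eqs. (5.10)-(5.11) reduces to a couple of array operations; here is a minimal sketch (the function name is an assumption for illustration, not code from this work).

```python
import numpy as np

def unwrap_with_min_phase(phi, phi_min):
    """Temporal, pixel-wise unwrapping against the minimum phase map:
    k = ceil[(Phi_min - phi) / (2*pi)], then Phi = phi + 2*pi*k (Eqs. 5.10-5.11)."""
    k = np.ceil((phi_min - phi) / (2.0 * np.pi))
    return phi + 2.0 * np.pi * k
```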

Figure 5.2. Concept of removing 2π discontinuities using the minimum phase map determined from geometric constraints (Reprinted with permission from [147], Optical Society of America). (a) Regions acquired by the camera at different depth z planes: red dashed windowed region where z = z_min and solid blue windowed region where z > z_min; (b) Φ_min and Φ defined on the projector; (c) cross sections of the wrapped phase maps, φ_1 and φ, and their correctly unwrapped phase maps Φ_min and Φ; (d) case for using fringe patterns with four periods.

5.2.4 Motion-Induced Error in PSP

PSP works really well under the assumption that the object is quasi-static during the process of capturing multiple phase-shifted fringe images. However, the object movement can cause measurement error if the fundamental assumption of a phase-shifting method is violated. If the sampling speed of the system is fast enough, this type of error is not obvious. However, when the sampling speed cannot keep up with the object movement, this type of error is pronounced and can be dominant.

We simulated this motion-introduced error by moving a unit sphere by 2% of the overall span (e.g., the diameter of the sphere) along the y direction for each additional fringe pattern capture (i.e., the sphere moves 10% for six fringe patterns). In this simulation, we adopted a two-frequency phase-shifting algorithm. Figures 5.3(a) - 5.3(f) show the six fringe patterns. Figure 5.3(g) shows the reconstructed 3D geometry. Apparently, the sphere surface is not smooth because of its movement. To better visualize the difference between the reconstructed 3D result and the ideal sphere, we took one cross section of the sphere and overlaid it with the ideal circle, as

shown in Fig. 5.3(h). Figure 5.3(i) shows the difference between these two. Clearly, there are periodic structural errors on the object surface, which are very similar to the nonlinearity error, or phase shift error. However, for this simulation, the sole source of error is the object movement, which is defined as motion-induced error.

Figure 5.3. Simulation of motion-induced measurement error. (a) - (c) high frequency phase-shifted patterns; (d) - (f) low frequency phase-shifted patterns; (g) reconstructed 3D shape; (h) a cross section of (g) and the ideal sphere; (i) difference between the reconstructed sphere and the ideal sphere.

5.2.5 Proposed Hybrid Absolute Phase Computational Framework

The major type of error that this research aims at addressing is the measurement error caused by object motion. As discussed in the previous subsection, the motion-introduced error of the PSP method is caused by the violation of the fundamental assumption of the phase-shifting method: the object must remain static during the capture of the required number of phase-shifted fringe patterns. It is well known that the FTP

method can extract phase information within one single-shot fringe image, which is extremely advantageous when measuring scenes with high-speed motion, yet the pixel-by-pixel absolute phase retrieval problem remains nontrivial for FTP approaches without capturing any additional fringe patterns. We recently proposed to use geometric constraints of the structured light system to perform pixel-wise absolute phase unwrapping [126], yet its depth range is confined to a small range [147]. To enhance the capability of our previously proposed method [126] by extending its depth range to a substantially larger range, we propose a hybrid computational framework that combines FTP with PSP to address this limitation.

We first perform single-shot FTP and spatial phase unwrapping to produce a continuous relative phase map Φ_r for each spatially isolated object. Suppose we have an additional set of low-frequency three-step phase-shifted patterns; the phase extracted from this set of three patterns can be unwrapped by the artificial phase map Φ_min to produce a rough absolute phase map Φ_e, yet measurement errors are present owing to the object motion within the three frames. However, under the assumption that the motion-induced errors are not predominant on the entire phase map, we can still take advantage of this phase map to find the rigid fringe order shift k_s from the relative phase map to the final absolute phase map. By using three additional phase-shifted fringe patterns with a lower frequency, the proposed method increases the depth range of our previous method [126]. For example, if the angle between the camera and projector optical axes is around θ = 13° and the overall projection range is 400 mm, the proposed method can handle approximately a 348 mm depth range for a noise-free system [147]. In contrast, the method proposed in [126] is confined to approximately 27 mm, which is approximately 13 times smaller than our proposed method.

Figure 5.4 illustrates the procedures of our proposed hybrid absolute phase retrieval framework, in which a set of four patterns is used to retrieve the absolute phase map. The first step is to use a single-shot high-frequency fringe pattern to perform FTP, in which image segmentation is used to separate each isolated object, and spa-

tial phase unwrapping [164] is used to unwrap the phase extracted from FTP for each object to create a continuous relative phase map Φ_r. To obtain the absolute phase map Φ_a, we need to determine the constant rigid shift k_s in fringe order between the absolute phase map Φ_a and the relative phase map Φ_r:

\[ k_s = \mathrm{round}\left\{ \left[ \Phi_a(u, v) - \Phi_r(u, v) \right] / (2\pi) \right\}, \tag{5.12} \]

where the round( ) operator selects the closest integer number. To detect this constant rigid fringe order shift k_s, we use another set of single-period three-step phase-shifted low-frequency patterns to obtain some insight. We extract a rough absolute phase map Φ_e with motion error through three-step phase shifting plus geometric constraints from this set of three patterns; then we can use the area without significant phase errors (e.g., big phase jumps) to detect the constant rigid shift k_s.

Figure 5.4. The pipeline of the proposed hybrid absolute phase computational framework. The first step is to generate a continuous relative phase map Φ_r using single-shot FTP and spatial phase unwrapping; the second step is to generate an absolute phase map with error Φ_e through PSP and geometric constraints; the final step is to retrieve the absolute phase map by finding the rigid fringe order shift k_s.

We compute the difference map k_e in fringe order between Φ_r and Φ_e as

\[ k_e(u, v) = \mathrm{round}\left\{ \left[ \Phi_e(u + \Delta u, v + \Delta v)\,\frac{T_l}{T} - \Phi_r(u, v) \right] / (2\pi) \right\}, \tag{5.13} \]

where Δu and Δv compensate for the object motion between adjacent fringe images, which can be roughly estimated by detecting the center pixel movement of the bounding boxes of each isolated object between different frames. Assuming that the motion-induced error is not predominant on the k_e map, we then determine the actual constant shift k_s by finding the most common integer number on the k_e map:

\[ k_s = \mathrm{mode}\left[ k_e(u, v) \right], \tag{5.14} \]

where the mode[ ] operator selects the most common value. Finally, the absolute phase map Φ_a can be extracted by

\[ \Phi_a(u, v) = \Phi_r(u, v) + 2\pi \times k_s. \tag{5.15} \]

Once the absolute phase map is obtained, the 3D reconstruction can be performed using the system model and calibration method described in [93].

5.3 Experiments

We set up a structured light system, shown in Fig. 5.5, to test the effectiveness of our computational framework. The system includes a digital-light-processing (DLP) projector (model: LightCrafter 4500) and a high-speed CMOS camera (model: Phantom V9.1). The projector resolution is  pixels. In all our experiments, we adopted the binary defocusing technique [42] to generate quasi-sinusoidal profiles by projecting 1-bit binary patterns with the projector defocused. The projector image refreshing rate was set at 1500 Hz. We set the camera resolution at  pixels with an image acquisition speed of 1500 Hz, which is synchronized with the pattern projection. The lens attached to the camera has a focal length of 24 mm with an aperture of f/1.8. The system is calibrated using the method described in [93].
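Before turning to the measurements, the following minimal sketch illustrates the fringe-order-shift step of Eqs. (5.12)-(5.15) from Section 5.2.5 for one segmented object. The boolean object mask, the use of np.roll for the rough motion offset (Δu, Δv), and the function name are illustrative assumptions; the bounding-box-based estimation of the offset itself is omitted.

```python
import numpy as np

def rigid_fringe_order_shift(phi_r, phi_e, mask, T_high, T_low, du=0, dv=0):
    """Estimate the constant fringe-order shift k_s for one isolated object and return
    its absolute phase map (Eq. 5.15). phi_r: FTP relative phase; phi_e: PSP phase with
    motion error; mask: boolean array of the object's valid pixels; (du, dv): motion offset."""
    # Sample Phi_e at (u + du, v + dv) to roughly compensate for object motion.
    phi_e_shifted = np.roll(np.roll(phi_e, -dv, axis=0), -du, axis=1)
    # Per-pixel fringe-order difference between the scaled PSP phase and the FTP phase (Eq. 5.13).
    k_e = np.round((phi_e_shifted * T_low / T_high - phi_r) / (2.0 * np.pi)).astype(int)
    # The most common integer over the object is the rigid shift k_s (Eq. 5.14).
    values, counts = np.unique(k_e[mask], return_counts=True)
    k_s = values[np.argmax(counts)]
    return phi_r + 2.0 * np.pi * k_s
```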

Figure 5.5. Photograph of our experimental system setup, including the camera, the projector, and the synchronization circuit.

The enhanced two-frequency PSP method essentially uses two-frequency phase-shifted patterns: the wrapped phase obtained from the low-frequency patterns is unwrapped using geometric constraints (see Section 5.2.3), and then this obtained phase map sequentially unwraps the high-frequency wrapped phase to obtain the final absolute phase map. We projected a sequence of six patterns: three phase-shifted high-frequency square binary patterns with a fringe period of T = 18 pixels (denoted as I_1^h, I_2^h and I_3^h), and three phase-shifted low-frequency binary dithered patterns with a fringe period of T_l = 228 pixels [165] (denoted as I_1^l, I_2^l and I_3^l). The enhanced two-frequency PSP method uses all six patterns for 3D reconstruction, while our proposed method only uses the last four patterns (i.e., I_3^h and I_1^l - I_3^l) for 3D reconstruction. We first measured two free-falling ping-pong balls using the enhanced two-frequency PSP method [159]. Figures 5.6(a) - 5.6(f) show a sequence of 6 continuously captured fringe images. For better visualization of the object movement during the capture of the six fringe patterns, we cropped the left ball in the six fringe images and then drew a reference line (red) and a circle around the contour of the ball, as shown in Figs. 5.6(g) - 5.6(l). The object clearly moves considerably even at such a high capture speed. Since the phase-shifting method requires the movement to

be small, it is difficult for the phase-shifting method to perform high-quality 3D shape measurement. Figure 5.7(a) shows the retrieved absolute phase map, from which one can visually observe some motion artifacts around the boundaries of the spheres. The reconstructed 3D geometries, shown in Fig. 5.7(b), clearly depict significant errors (e.g., large jumps, spikes) especially around the edges of the spheres. Besides spikes, one can also observe that the object motion produces apparent artifacts along the direction of phase shifting (e.g., some vertical stripes on the surface), which is very similar to the motion-induced errors introduced in Section 5.2.

Figure 5.6. A cycle of 6 continuously captured fringe images. (a) I_1^h; (b) I_2^h; (c) I_3^h; (d) I_1^l; (e) I_2^l; (f) I_3^l; (g) - (l) close-up views of the left ball in (a) - (f).

We then implemented our proposed computational framework using the last four fringe images of an entire sequence (i.e., I_3^h and I_1^l - I_3^l). The first step is to perform single-shot FTP to extract a wrapped phase map. Figure 5.8(b) shows the wrapped phase map obtained from a single-shot fringe image I_3^h [Fig. 5.8(a)] with high-frequency pattern (T = 18 pixels) projection. We then identified the two segmented balls and separately performed spatial phase unwrapping [164] for each ball. Figure 5.8(c) shows the unwrapped phase map Φ_r for the entire scene.
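The segmentation and per-object unwrapping step can be illustrated with a short sketch. The fragment below is a minimal illustration rather than the implementation used in this research: it assumes the wrapped FTP phase and a fringe modulation map are already available as NumPy arrays, segments the isolated balls by thresholding the modulation, and uses scikit-image's phase unwrapper as a stand-in for the spatial phase unwrapping algorithm of [164].

# Minimal sketch (not the exact pipeline of this dissertation): segment isolated
# objects from a modulation map and spatially unwrap the FTP phase per object.
import numpy as np
from scipy import ndimage
from skimage.restoration import unwrap_phase  # stand-in for the unwrapper of [164]

def unwrap_per_object(phi_wrapped, modulation, mod_thresh=0.1):
    """phi_wrapped: wrapped FTP phase (rad); modulation: fringe contrast map."""
    mask = modulation > mod_thresh               # keep only well-modulated pixels
    labels, num = ndimage.label(mask)            # isolate each ball (connected components)
    phi_r = np.full(phi_wrapped.shape, np.nan)
    for k in range(1, num + 1):
        obj = labels == k
        masked = np.ma.array(phi_wrapped, mask=~obj)
        phi_r[obj] = unwrap_phase(masked)[obj]   # continuous relative phase for this object
    return phi_r, labels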

Figure 5.7. A sample frame of the result using the enhanced two-frequency PSP method [159]. (a) Retrieved absolute phase map; (b) reconstructed 3D geometries.

Figure 5.8. Continuous relative phase map Φ_r extraction from single-shot FTP. (a) Captured fringe image I_3^h; (b) wrapped phase map obtained from (a) using FTP; (c) separately unwrapped phase map of each ball in (b) with spatial phase unwrapping.

The next step is to obtain an absolute phase map Φ_e with motion-induced error using the PSP method. Figure 5.9 shows an example of the Φ_e map extraction. Figure 5.9(a) shows one of the three-step phase-shifted fringe images (I_1^l) with low-frequency pattern (T_l = 228 pixels) projection. By applying three-step phase shifting (see Section 5.2.2), we obtain a wrapped phase map as shown in Fig. 5.9(b). As one can

see, the motion between the three phase-shifted fringes causes apparent errors on the extracted phase map, especially around the boundaries of the balls. By applying the phase unwrapping method based on geometric constraints (see Section 5.2.3), we can obtain an absolute phase map with motion-induced error as shown in Fig. 5.9(c). Although motion-induced measurement errors are present, this phase map can still be used to obtain insight into the rigid fringe order shift k_s.

Figure 5.9. Extraction of the absolute phase map with motion-induced error Φ_e from PSP. (a) One of the three phase-shifted fringe patterns; (b) wrapped phase map extracted from I_1^l - I_3^l with motion-induced error before applying geometric constraints; (c) unwrapped phase map Φ_e using geometric constraints; (d) difference fringe order map k_e obtained from the phase map shown in Fig. 5.8(c) and the phase map shown in (c) using Eq. (5.13).

The final step is to find the rigid fringe order shift k_s from the relative phase map Φ_r to the absolute phase map Φ_a. With the continuous relative phase map Φ_r [shown in Fig. 5.8(c)] and the absolute phase map with error Φ_e [shown in Fig. 5.9(c)], Eq. (5.13) yields a difference map in fringe order k_e, shown in Fig. 5.9(d). We then plot the histograms of the difference map k_e for each ball as shown in Fig. 5.10. We use Eq. (5.14) to find the bins of peak values on each histogram and pick the corresponding integer to be the actual rigid shift k_s for each ball. Then, we shift the relative phase Φ_r using Eq. (5.15) to obtain the final absolute phase map. Figure 5.11(a) shows the final absolute phase map obtained using our proposed method. Figure 5.11(b)

shows the 3D geometry reconstructed from the absolute phase map. Clearly, our proposed method works well for spatially isolated objects in the presence of rapid motion, and no significant motion-induced errors appear on the reconstructed 3D geometries. The associated video shows the comparison results for the entire captured sequence. The video clearly shows that the PSP method produces significant motion-induced errors, yet our proposed method consistently works well.

Figure 5.10. Histogram-based fringe order determination. (a) Histogram of Fig. 5.9(d) for the left ball; (b) histogram of Fig. 5.9(d) for the right ball.

Figure 5.11. Absolute phase Φ_a retrieval and 3D reconstruction. (a) Retrieved final absolute phase map Φ_a; (b) reconstructed 3D geometries.
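For clarity, the fringe-order-shift step described by Eqs. (5.13) - (5.15) can be summarized in a few lines of NumPy. The sketch below is a simplified illustration rather than the exact code used in this research: it assumes the relative phase Φ_r, the motion-corrupted Φ_e, the fringe periods T and T_l, and the estimated bounding-box displacement (Δu, Δv) are already available for one segmented object.

# Sketch of Eqs. (5.13)-(5.15): find the rigid fringe-order shift k_s for one object
# and lift its relative phase to an absolute phase. Assumed inputs, per object:
#   phi_r : continuous relative phase from single-shot FTP (2D array, NaN outside object)
#   phi_e : rough absolute phase from low-frequency PSP plus geometric constraints
#   T, T_l: high- and low-frequency fringe periods (e.g., 18 and 228 pixels)
#   du, dv: estimated object motion between frames (bounding-box center shift, pixels)
import numpy as np

def absolute_phase_for_object(phi_r, phi_e, T, T_l, du, dv):
    # Compensate for object motion between fringe images, i.e., sample phi_e at (u+du, v+dv)
    phi_e_shifted = np.roll(np.roll(phi_e, -dv, axis=0), -du, axis=1)
    valid = ~np.isnan(phi_r)
    # Fringe-order difference map, Eq. (5.13)
    k_e = np.round((phi_e_shifted[valid] * T_l / T - phi_r[valid]) / (2 * np.pi)).astype(int)
    # Most common fringe-order difference is the rigid shift k_s, Eq. (5.14)
    values, counts = np.unique(k_e, return_counts=True)
    k_s = values[np.argmax(counts)]
    # Absolute phase map, Eq. (5.15)
    phi_a = phi_r + 2 * np.pi * k_s
    return phi_a, k_s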

To further compare the performance of our proposed method against the conventional two-frequency PSP method, we pick one of the two spheres (i.e., the left sphere) and perform further analysis. Figure 5.12 shows the comparison of 3D results between the proposed method and the PSP-based method. Figures 5.12(a) and 5.12(e) show the reconstructed 3D geometries using these two methods, from which we can see that the ball is well recovered using our proposed method, yet the result obtained from the PSP-based method has significant errors (e.g., big jumps, spikes), especially on the top and bottom of the sphere, which is caused by vertical object motion. Also, the object motion produces apparent artifacts along the direction of phase shifting (e.g., vertical creases). Since the ping-pong ball has a well-defined geometry (i.e., a sphere 40 mm in diameter), we then performed sphere fitting on both reconstructed 3D geometries and obtained the residual errors shown in Figs. 5.12(b) and 5.12(f). The root-mean-square (RMS) errors for the proposed method and the PSP approach are 0.26 mm and 6.92 mm, respectively, which indicates that our proposed method can well reconstruct the 3D geometry of a rapidly moving ball, yet the PSP method fails to provide a reasonable result. To better illustrate the differences, we took a cross section of the sphere fitting and residual errors from both results. The corresponding plots are respectively shown in Figs. 5.12(c) - (d) and Figs. 5.12(g) - (h). We removed the big outliers of the PSP result in Fig. 5.12(b) on the cross section plots for better visualization. Note that the error structure of Fig. 5.12(d) is very similar to the motion-induced error from our simulation result, shown in Fig. 5.3(i). These results again demonstrate that the reconstructed geometry obtained from our proposed framework agrees well with the actual sphere, and the error is quite small. However, the result obtained from the PSP method deviates quite a bit from the actual sphere, and the residual error is quite large with big artifacts on the edges of the sphere. This experiment clearly shows the significance of our proposed computational framework in terms of motion-induced error reduction.
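The sphere-fitting evaluation can be reproduced with a standard linear least-squares fit. The snippet below is a generic sketch, not necessarily the exact procedure used for Fig. 5.12: it fits a sphere to the reconstructed points of one ball and reports the RMS of the radial residuals.

# Generic least-squares sphere fit used to evaluate a reconstructed ping-pong ball.
# Solves x^2 + y^2 + z^2 = 2ax + 2by + 2cz + d, where d = r^2 - a^2 - b^2 - c^2.
import numpy as np

def fit_sphere(points):
    """points: (N, 3) array of reconstructed X, Y, Z coordinates (mm)."""
    A = np.column_stack([2 * points, np.ones(len(points))])
    b = np.sum(points**2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, d = sol[:3], sol[3]
    radius = np.sqrt(d + np.dot(center, center))
    residuals = np.linalg.norm(points - center, axis=1) - radius
    rms = np.sqrt(np.mean(residuals**2))
    return center, radius, rms

# For an ideal 40 mm diameter ball, the fitted radius should be close to 20 mm and
# the RMS residual close to zero; large RMS values indicate motion-induced artifacts.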

Figure 5.12. Comparison between the proposed computational framework and the PSP-based approach. (a) 3D result from the PSP approach; (b) residual error of (a) after sphere fitting (RMS error: 6.92 mm); (c) a cross section of the sphere fitting; (d) a cross section of the residual error; (e) - (h) corresponding results from our proposed approach (RMS error: 0.26 mm).

To further evaluate the performance of our proposed computational framework, we drastically increased the number of ping-pong balls within the scene and measured the motion of all balls. Figure 5.13 and its associated video demonstrate the measurement results of many free-falling ping-pong balls, where Figs. 5.13(a) and 5.13(b) respectively show a sample frame of the texture and the corresponding 3D geometries. The measurement result demonstrates that our proposed computational framework performs well in scenes with a large number of rapidly moving, spatially isolated objects. This experiment further proves the success and robustness of our proposed computational framework. One may notice that some artifacts still appear on the reconstructed 3D geometries when the black characters on the balls show up in the captured scene. An example is shown in Fig. 5.14, which is the zoom-in view of the ball selected in the red bounding boxes of the pictures in Fig. 5.13. Some artifacts appear when the characters on the ball appear, as shown in the blue bounding boxes in Figs. 5.14(a) and 5.14(b).

Figure 5.13. 3D shape measurement of multiple free-falling ping-pong balls. (a) A sample frame image; (b) 3D reconstructed geometry.

This is caused by the inherent limitation of the FTP method: it does not function well when rich texture variation is present. To alleviate this problem, one can incorporate our proposed framework with more sophisticated windowed Fourier transform [148, 149] or wavelet transform profilometry [166, 167].

Figure 5.14. Illustration of artifacts induced by texture variation. (a) Zoom-in view of the ball inside of the red bounding box of Fig. 5.13(a); (b) zoom-in view of the 3D result inside of the red bounding box of Fig. 5.13(b).

5.4 Discussion

Our proposed computational framework has the following advantages compared to other absolute phase retrieval frameworks.

Resistance to measurement errors caused by rapid object motion. Since the final absolute phase map is generated by shifting the spatially unwrapped single-shot FTP phase map, it is resistant to phase errors caused by rapid object movements, and thus reduces measurement errors induced by motion.

Absolute 3D shape measurement of multiple rapidly moving objects within a large depth range. As shown in the experiments, our proposed framework is capable of recovering absolute 3D geometries for many spatially isolated objects with rapid motion, which is difficult for existing frameworks, especially when the object displacement between frames is quite significant. Compared with our previously proposed method [126], the depth sensing range of the proposed method is approximately 13 times larger.

However, our proposed framework also has some inherent limitations, and the performance could be affected under the following conditions:

Measurement of complex surface geometry or texture. Since the phase extracted from single-shot FTP ultimately determines the phase quality and thus the measurement quality, some inherent limitations of the standard FTP approach remain in our proposed method. Namely, under circumstances where there are rich local surface geometric or texture variations, the measurement quality is reduced because of the difficulty of accurately retrieving the carrier phase in FTP.

Existence of abrupt geometric discontinuities. As introduced in Section 5.2.5, spatial phase unwrapping is involved in the first step of absolute phase retrieval. Therefore, if there exist abrupt geometric discontinuities on a single object or

between overlapping objects, the performance of our proposed computational framework could be affected.

5.5 Summary

In this research, we proposed a computational framework that reduces motion-induced measurement errors by combining the FTP and PSP approaches. This framework uses a high-frequency pattern to perform FTP, which extracts phase information from a single-shot fringe image; then spatial phase unwrapping is applied to each isolated object to obtain a continuous relative phase map. Finally, by referring to the absolute phase map with errors obtained from a set of low-frequency phase-shifted patterns, we shift the relative phase map of each object to produce the final absolute phase map. Experiments have demonstrated the effectiveness of our computational framework for measuring multiple rapidly moving isolated objects without adding additional hardware. Compared with the previous method of its kind, the proposed method substantially increases the depth sensing range (i.e., by approximately 13 times).

6. NOVEL METHOD FOR MEASURING DENSE 3D STRAIN MAP OF ROBOTIC FLAPPING WINGS

Chapters 2 - 5 were all developed to advance 3D shape measurement by enabling simultaneous superfast, high-accuracy measurements with alleviated motion-induced problems. This chapter introduces our research innovations in 3D data analytics and application development. Specifically, we introduce a novel method for measuring the dense 3D strain map of robotic flapping wings. As a new technology introduced to the field of bio-inspired engineering, it can potentially benefit robotics designers by providing novel means of non-contact evaluation.

6.1 Introduction

Over the past several decades, the scientific study of insect flight has been greatly promoted by new experimental techniques ranging from measurements of the flow field to the aerodynamics of flight. Within insect flight studies, insect wing deformations and strains have been interesting topics to scientists owing to their variety between different species, between different flight types of the same insect, and even between different strokes of the same type of flight [168]. Besides, the deformations and strains of the wings could contain important information for lift force analysis [169], which could provide vital insights for flapping wing design. In the past, scientists made great attempts to study insect wing deformations by first identifying some general patterns of bending during the wing stroke cycles using still photographs [170]. In recent decades, scientists have started to use optical techniques to quantify the deformation of wings. Scientists first attempted to actively illuminate sparse laser stripes onto the flapping wings and quantify the wing deformation by analyzing the distorted stripes captured by high-speed

cameras [169, ]. However, the spatial resolution of such methods is limited by the sparsely illuminated comb-shaped laser stripes. Moreover, it is difficult to track any specific points on the wings with such methods [175]. To overcome the latter limitation, scientists started to use high-speed stereo videography [2] to provide a quantitative description of wing morphology. Within this technique, one of the widely adopted methods is to use fiducial markers as joints to facilitate the identification of similar points in different cameras, and the 3D information of those joints can be obtained by well-established stereo vision techniques. The geometry of the wings can then be reconstructed through a joint-based hierarchical subdivision surface method [2, 175]. Over the years, high-speed videography techniques have been widely adopted to study a variety of species including hummingbirds [176, 177], moths [178], dragonflies [175], butterflies [179], bats [180], etc. An important advantage of such methods is that some specific points (e.g., marker points) can be tracked in different frames to accurately quantify the motion and deformations of those points of interest. However, a major limitation of this method is that only those sparsely arranged marker points are precisely measured, which has left the full-field strain analysis of the wings not well documented so far. Also, the mechanical properties of the wings could be altered by those fiducial markers. Different from the high-speed stereo videography method, the digital fringe projection (DFP) technique can reconstruct 3D geometries of the entire scene with high resolution and accuracy [34]. Therefore, it is possible to realize full-field strain analysis if the dynamic deforming process of the flapping wings can be captured by the DFP technique. In this research, we investigate a special type of flapping wing made of an inextensible thin membrane. First, we developed a DFP system to measure the dynamic 3D geometries of the rapidly deforming wings. Specifically, we use a digital-light-processing (DLP) projector to project binary defocused patterns on the wings at 5,000 Hz. A precisely synchronized camera captures the fringe patterns distorted by the object surface. The captured distorted patterns are analyzed by a fringe analysis method for 3D topological reconstruction. Once the dynamic 3D geometries are

precisely measured, the strain for each point can be computed by examining the geometric deformations. In this research, we also developed a strain analysis framework based on geodesic computation and Kirchhoff-Love shell theory. We first developed a novel point tracking method based on surface geometry using a proposed enhanced Dijkstra's algorithm. The Green-Lagrange strain tensor of each tracked point is then determined by the curvature change from its strain-free condition. Experiments demonstrate the success of our proposed method. Our strain analysis framework is solely based on surface geometric information, and thus is advantageous for applications where the measured surface does not contain rich textures or special surface treatment is undesirable.

6.2 Results

Superfast 3D Imaging of a Flapping Wing Robotic Bird

We built a DFP system for superfast 3D imaging. The system is composed of a high-speed DLP projector (Model: Wintech PRO 6500) for fringe projection and a high-speed CMOS camera (Model: NAC MEMRECAM GX-8F) for image acquisition. The fringe projection speed was set at 5,000 Hz with an image resolution of 1,920 × 1,080 pixels. Precisely synchronized with the fringe projection, the camera also captures images at a rate of 5,000 Hz with an image resolution of pixels. A lens (Model: SIGMA 24 mm f/1.8 EX DG) with a focal length of 24 mm is attached to the camera, whose aperture ranges from f/1.8 to f/22. The robotic bird (Model: XTIM Bionic Bird Avitron V2.0) used in this research has a beat frequency of approximately 25 cycles per second, with both wings made of inextensible thin membranes. The total span of a single wing is about 150 mm (L) × 70 mm (W). We employed a modified Fourier transform profilometry (FTP) method [146] for 3D reconstruction. We captured two sets of 3D data in preparation for further strain analysis: (1) with two anchor points (black) on the corners and five marker points (white) inside

each robot wing [see Figs. 6.1(a) - 6.1(c)]; (2) with only two anchor points (black) on the corners of each robot wing [see Figs. 6.2(a) - 6.2(c)]. We use our first dataset to validate our proposed point tracking method by comparing it with conventional marker-based point tracking; then, we use our second dataset to perform the strain computation. Since our proposed method does not need those markers inside of the wings, we removed them in our second dataset to reduce the potential mechanics changes caused by the markers. The purpose of the anchor points is to assist our proposed geometry-based point tracking. Figures 6.1(d) - 6.1(f) [associated Video 1] and Figs. 6.2(d) - 6.2(f) [associated Video 2] respectively show the reconstructed 3D geometries of both datasets, from which we can see that our proposed 3D measurement algorithm consistently works well for the entire dynamic flapping flight process of the bird robot.

Figure 6.1. 3D measurement results of a flying bird robot with markers and anchor points for validation of our proposed point tracking. These markers are used to compare our point tracking scheme with marker-based point tracking. (a) - (c) Three sample frames of 2D images; (d) - (f) three sample frames of 3D geometries.

Validation of Point Tracking

Once the dynamic 3D data is obtained, the next task is to perform point tracking so that the strain can be computed by examining the surface deformation.

Figure 6.2. 3D measurement results of a flying bird robot with anchor points only for strain computation. The markers are removed to reduce potential mechanics changes. (a) - (c) Three sample frames of 2D images; (d) - (f) three sample frames of 3D geometries.

Here we propose a novel point tracking method based on geodesic computation. Given that we are investigating inextensible surfaces, the theoretical foundation of our proposed point tracking method is that the topological changes will not change the shortest distances between any two points on the surface. Therefore, for any point inside of the wings in one 3D frame, we locate its corresponding point in other 3D frames by computing its geodesic distances to the two anchor points. This conceptual idea is shown in Fig. 6.6 and the detailed principles are discussed in Section 6.4.

We used our first dataset, shown in Fig. 6.1, to compare our proposed point tracking with conventional marker-based tracking. We performed the comparison by examining the differences of the extracted trajectories in X, Y and Z from both methods. Table 6.1 shows both the mean and the root-mean-square (RMS) differences. The maximum mean difference is about 1.2 mm for X and Y, and 0.80 mm for Z; the maximum RMS difference is about 1.2 mm for X and Y, and 1.0 mm for Z. Considering the total span of a single wing [i.e., 150 mm (L) × 70 mm (W)], this difference is relatively small. For visualization, here we show two different comparison results of the left wing in Fig. 6.3 and Fig. 6.4, which correspond to the markers with the least (marker 4) and most (marker 5) differences. We overlaid the extracted X, Y and Z

trajectories from our proposed method (blue solid) with the ones directly extracted from the circle centers (red dashed), from which we can see that the overall extracted trajectories from the two methods are very similar. The results show that our proposed geometry-based method can achieve very similar point tracking compared to the conventional marker-based method, which validates the success of our proposed point tracking framework.

Table 6.1. Validation of our geometry-based point tracking by comparing with marker-based tracking; Diff is short for Difference.

Left wing
Marker #   Diff X (Mean)   Diff Y (Mean)   Diff Z (Mean)   Diff X (RMS)   Diff Y (RMS)   Diff Z (RMS)
1          -               0.42 mm         0.15 mm         0.39 mm        0.75 mm        0.37 mm
2          -               0.73 mm         0.02 mm         0.72 mm        0.78 mm        0.21 mm
3          -               0.40 mm         0.13 mm         0.80 mm        1.06 mm        0.48 mm
4          0.17 mm         0.18 mm         0.02 mm         0.42 mm        0.67 mm        0.29 mm
5          0.08 mm         1.17 mm         0.51 mm         0.53 mm        1.24 mm        1.05 mm

Right wing
Marker #   Diff X (Mean)   Diff Y (Mean)   Diff Z (Mean)   Diff X (RMS)   Diff Y (RMS)   Diff Z (RMS)
1          -               0.42 mm         0.20 mm         0.35 mm        0.19 mm        0.17 mm
2          -               0.45 mm         0.17 mm         0.77 mm        0.28 mm        0.26 mm
3          -               0.30 mm         0.35 mm         1.19 mm        0.37 mm        0.36 mm
4          -               0.72 mm         0.22 mm         0.84 mm        0.40 mm        0.20 mm
5          -               0.94 mm         0.76 mm         1.14 mm        0.48 mm        0.86 mm
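The entries of Table 6.1 are simple per-axis statistics of the difference between the two extracted trajectories. A minimal sketch is shown below, assuming the tracked and marker-based trajectories are stored as N × 3 arrays; whether the mean is taken over signed or absolute differences is an assumption here.

# Sketch of how the Table 6.1 statistics can be computed from two trajectories:
#   traj_geo  : (N, 3) XYZ trajectory from the geodesic-based tracking
#   traj_mark : (N, 3) XYZ trajectory extracted from the marker (circle) centers
import numpy as np

def trajectory_difference_stats(traj_geo, traj_mark):
    diff = traj_geo - traj_mark                   # per-frame difference in X, Y, Z
    mean_abs = np.mean(np.abs(diff), axis=0)      # mean absolute difference per axis (mm)
    rms = np.sqrt(np.mean(diff**2, axis=0))       # RMS difference per axis (mm)
    return mean_abs, rms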

Figure 6.3. Visualization of tracking for marker point 4 of the left wing. (a) - (c) Overlay of the directly extracted marker points (red dashed lines) with the tracked marker points (blue solid lines) using geodesic computation for the X, Y and Z coordinates; (d) - (f) the difference plots of (a) - (c) obtained by taking the difference of the curves; the mean differences for X, Y and Z are 0.17 mm, 0.18 mm and 0.02 mm, respectively; the RMS differences for X, Y and Z are 0.42 mm, 0.67 mm and 0.29 mm, respectively.

Visualization of Strain Map

Since we have validated that our proposed point tracking method works well, we can now perform the strain computation using our second dataset shown in Fig. 6.2. As aforementioned, to reduce the potential mechanics changes, we removed the markers inside of the wings in our second dataset, given that our point tracking method does not need those markers. Since the wings are inextensible, here we mainly consider the bending strain in the Green-Lagrange strain tensor. For each point on the wings that is tracked between different frames, the bending strain can be computed by examining the curvature changes. The theoretical background of the strain computation is discussed in Section 6.4. Figure 6.5 [associated Video 3] shows the results of our strain computation. It illustrates that our method can compute the strain over the entire wings. Here we show a sample frame of an up-stroke and a down-stroke, respectively.

Figure 6.4. Visualization of tracking for marker point 5 of the left wing. (a) - (c) Overlay of the directly extracted marker points (red dashed lines) with the tracked marker points (blue solid lines) using geodesic computation for the X, Y and Z coordinates; (d) - (f) the difference plots of (a) - (c) obtained by taking the difference of the curves; the mean differences for X, Y and Z are 0.08 mm, 1.17 mm and 0.51 mm, respectively; the RMS differences for X, Y and Z are 0.53 mm, 1.24 mm and 1.05 mm, respectively.

One can notice that the wings are most strained in areas where we see the most bending or curvature, which agrees well with the nature of bending strain. This result demonstrates the success of our proposed strain computational framework. The computed strain maps can easily be turned into stress maps if we know the modulus of the wing material in advance.

6.3 Discussion

Compared to existing technologies, our proposed research has the following advantages:

Measure both high-resolution 3D geometry and full-field wing strain map. Our measurement technology can measure 3D geometry with high spatial and temporal resolution and meanwhile compute the full-field strain of the wings. By providing this information, our technology could be an effective tool for the robotics field for the study of wing morphology and mechanics analysis.

Figure 6.5. Two sample frames of the strain measurement result (one up-stroke and one down-stroke), each showing the 2D image, the 3D geometry, and the left and right wing strain maps.

Require only two anchor points on the corners. Our point tracking scheme only requires identifying two anchor points on the corners. It requires neither placing markers inside of the wings nor special surface treatment on the wing surfaces, which reduces the potential changes of flight mechanics during measurements.

Despite the aforementioned merits, our strain measurements could encounter challenges when the wings contain membrane or tension strain. In our strain analysis, we performed point tracking based on the assumption that the wing is an inextensible surface. For isogeometric wings, our algorithm can be adapted if the ratio of surface expansion can be determined beforehand. However, it could be challenging to adapt our technology to measurements of non-isogeometric wings. Future work could develop more sophisticated algorithms for non-isogeometric analysis if some a priori knowledge of the dynamics or physical model of the wings can be obtained.

6.4 Methods

Superfast 3D Imaging

We used a modified FTP method [146] for 3D reconstruction. The basic principles can be explained as follows. In theory, two fringe patterns with a phase shift of π can be described as

I_1(x, y) = I'(x, y) + I''(x, y) cos[φ(x, y)], (6.1)
I_2(x, y) = I'(x, y) − I''(x, y) cos[φ(x, y)], (6.2)

where I'(x, y) stands for the average intensity or DC component, I''(x, y) represents the intensity modulation, and φ(x, y) is the phase information to be computed. After subtracting the two fringe images, we can get rid of the DC component and obtain

I = (I_1 − I_2)/2 = I''(x, y) cos[φ(x, y)]. (6.3)

Using Euler's formula, we can reformulate Eq. (6.3) as a summation of two harmonic conjugate components,

I = [I''(x, y)/2] [e^{jφ(x,y)} + e^{−jφ(x,y)}]. (6.4)

To preserve only one of the two harmonic conjugate components, we can apply a band-pass filter and obtain the final fringe image as

I_f(x, y) = [I''(x, y)/2] e^{jφ(x,y)}. (6.5)

In this research, we chose to use a Hanning window as the band-pass filter [146]. After applying the filter, we can extract the phase through an arctangent function,

φ(x, y) = tan^{−1} {Im[I_f(x, y)] / Re[I_f(x, y)]}. (6.6)

From Eq. (6.6), one can see that the phase φ is in the form of an arctangent function. As a result, the extracted phase φ is wrapped within a range from −π to π. Therefore, phase unwrapping is necessary to obtain an absolute phase map. In this research, we adopted a histogram-based method [150] for absolute phase retrieval.
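The modified FTP pipeline of Eqs. (6.1) - (6.6) can be sketched in a few lines of NumPy. The following is a simplified, hedged illustration rather than the exact implementation used here: it assumes two π-shifted fringe images with roughly vertical fringes are available, applies a Hanning band-pass around an assumed carrier frequency along one axis only, and returns the wrapped phase; the histogram-based absolute phase retrieval of [150] is not included.

# Minimal sketch of the modified FTP phase extraction, Eqs. (6.1)-(6.6).
# Assumptions: I1 and I2 are pi-shifted fringe images (2D float arrays) with
# roughly vertical fringes; 'carrier' is the approximate carrier frequency in
# cycles/pixel along x (about 1/T for a fringe period of T pixels).
import numpy as np

def ftp_wrapped_phase(I1, I2, carrier, half_width):
    Ic = (I1 - I2) / 2.0                      # Eq. (6.3): remove the DC component
    spectrum = np.fft.fft2(Ic)
    fx = np.fft.fftfreq(Ic.shape[1])          # frequency axis along x (cycles/pixel)
    # Hanning band-pass keeping only the +carrier lobe, Eqs. (6.4)-(6.5)
    window = np.zeros(Ic.shape[1])
    band = np.abs(fx - carrier) < half_width
    window[band] = np.hanning(band.sum())
    If = np.fft.ifft2(spectrum * window[np.newaxis, :])
    return np.angle(If)                       # Eq. (6.6): wrapped phase in (-pi, pi]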

From Eqs. (6.1) - (6.2), we can see that the modified FTP method requires the projection of more than one 8-bit sinusoidal pattern. However, the refresh rate of 8-bit patterns is typically limited to several hundred Hz even for modern DLP projectors (e.g., 247 Hz for the Wintech PRO 6500). Considering that our flapping wing robot (e.g., XTIM Bionic Bird Avitron V2.0) flaps at 25 cycles per second, this projection speed is not sufficient for high-quality 3D imaging. Alternatively, as introduced by Lei and Zhang [42], one can project 1-bit square binary patterns with projector defocusing to produce a quasi-sinusoidal profile. Such a method is called the binary defocusing method. The basic principle of the binary defocusing technique is that the projector defocusing effect, which is essentially similar to a Gaussian low-pass filter, can effectively suppress the high-order harmonics of a square wave in the Fourier frequency spectrum. Over the past decade, scientists have adopted different methods to further suppress the high-order harmonics by means of pulse width modulation [181], area modulation [182], dithering [165] and so forth. With the reduced data transfer load from 8-bit to 1-bit images, DLP projectors have enabled kHz 3D shape measurement speeds [43]. In this research, we used a set of area-modulated patterns [182] for phase extraction and a set of dithered patterns [165] for unwrapping. The fringe pitches for the area-modulated patterns and the dithered patterns are T = 24 and T = 380 pixels, respectively.

Geodesic-Based Point Tracking

Although we have obtained the 3D data for each frame with superfast 3D imaging, performing strain analysis for each 3D frame is nevertheless challenging since it requires point tracking on the wings so that the strains can be computed by examining the geometric deformations. In this section, we will introduce our proposed geometry-based point tracking method for inextensible membranes, assisted by the computation of geodesic distances.

For an inextensible surface, an important property is that the geodesic distances will be retained after surface deformation, which provides us with additional

constraints to perform point tracking. For any point p(t_0) on the initial surface configuration S(t_0), we need to identify its corresponding point p(t) on a deformed surface configuration S(t). Figure 6.6 illustrates a schematic diagram of our proposed tracking approach using geodesic computation. Suppose we have two anchor points c_1 and c_2; for any point p(t_0) on an initial undeformed surface S(t_0), we compute its geodesic distances d_1 and d_2 to the anchor points c_1 and c_2, respectively. Then, on the current deformed surface S(t), we extract the curves γ_1 and γ_2 with equal geodesic distances d_1 and d_2, respectively. Finally, we identify the point p(t) by computing the numerical solution of the intersecting point of γ_1 and γ_2. Next, we will introduce the detailed procedures of our proposed tracking approach.

Figure 6.6. Finding the correspondence between a point p(t_0) on the initial surface configuration S(t_0) and a point p(t) on the current deformed configuration S(t); p(t) is identified by finding the intersecting point of the curves γ_1(x, y, z) with equal geodesic distance d_1 and γ_2(x, y, z) with equal geodesic distance d_2.

The first step of our tracking approach is to compute the geodesic distances of any point p(t_0) to the anchor points c_1 and c_2. The geodesic distance is essentially the length of the shortest path between two points on the surface. Some well-known computational approaches include Dijkstra's algorithm [183], which is based on distance computation, and the fast marching algorithm [184, 185], which is based on gradient computation. In this research, we developed a computational approach that is based on Dijkstra's algorithm, but optimized for our case considering that the surface data could contain some noise.
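For reference, a minimal textbook sketch of the standard Dijkstra's algorithm on a weighted graph is given below; this is not the dissertation's implementation, and the optimization described in the following paragraphs replaces the plain edge weights with Bézier arc lengths computed over a 7 × 7 pixel neighborhood.

# Generic Dijkstra shortest-distance computation on a weighted graph (textbook
# version, shown for reference only).
import heapq

def dijkstra(graph, source):
    """graph: {node: [(neighbor, edge_length), ...]}; returns shortest distances."""
    dist = {node: float("inf") for node in graph}
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist[node]:              # node already settled ("out"); skip it
            continue
        for neighbor, length in graph[node]:
            nd = d + length
            if nd < dist[neighbor]:     # relax: update whenever a smaller distance is found
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

# Toy usage example (a made-up three-node graph, not the graph of Fig. 6.7):
# dijkstra({1: [(2, 7.0), (3, 9.0)], 2: [(1, 7.0), (3, 10.0)], 3: [(1, 9.0), (2, 10.0)]}, 1)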

For a given anchor point on the graph, Dijkstra's algorithm finds the shortest path from the anchor point to every other node on the graph. Figure 6.7 shows a simple example of the computational procedure of Dijkstra's algorithm. Suppose Node 1 on the graph is the initial anchor point; the distances from it are computed for each of its neighbors. The neighbor that has the smallest distance becomes the new anchor point, and the previous anchor point is marked as out and will not be visited again. This procedure continues, and the distance value of each node is updated whenever a smaller distance value is found. Once all nodes have been marked as out, the entire procedure is done.

Figure 6.7. An example of computing the shortest distance using Dijkstra's algorithm. The numbers between two different nodes denote the length of the path connecting the two nodes. (a) - (f) illustrate the computational procedure. Node 1 is the initial anchor point. Each unvisited vertex with the lowest distance becomes the new anchor point, and the old anchor points will not be visited again. Each visited node will have an updated distance value if it is smaller than the previously marked distance value.

Dijkstra's algorithm provides a good approximation of the geodesic computation if the data is ideal and noise-free. However, since our reconstructed 3D data could be polluted by camera noise, here we propose an optimization of the conventional

Dijkstra's algorithm with cubic Bézier curve fitting. Figure 6.8 illustrates the optimization scheme of our proposed geodesic computational method. For each currently visited node P_0, instead of only searching its 4-connectivity or 8-connectivity neighbors, we pick its 7 × 7 neighborhood and search all possible marching directions within this 7 × 7 window, as illustrated on the left diagram. For each searching path (e.g., the path denoted by the purple arrow), we pick two more points P_1 and P_2 in addition to the start point P_0 and end point P_3. After picking the four points P_0 - P_3, we then fit a cubic Bézier curve that can be formulated as follows:

B(t) = (1 − t)^3 P_0 + 3(1 − t)^2 t P_1 + 3(1 − t) t^2 P_2 + t^3 P_3, 0 ≤ t ≤ 1. (6.7)

Then, the surface distance d(P_0, P_3) between the nodes P_0 and P_3 is estimated as the arc length of the fitted Bézier curve,

d(P_0, P_3) = ∫_0^1 ||B'(t)|| dt, (6.8)

where B'(t) is the first-order derivative of the Bézier curve B(t).

Figure 6.8. Optimization of Dijkstra's algorithm in accordance with our measured 3D data. Each grid point on the left figure denotes one 3D point corresponding to a camera pixel. For each currently visited point P_0, we pick its 7 × 7 neighborhood and search all possible marching directions as illustrated. For each searching path, we pick two more points in addition to the start and end points, and the distance is computed as the arc length of the interpolated cubic Bézier curve.

With this optimized scheme for geodesic computation, we can generate maps of geodesic distances D_1(t, u, v) and D_2(t, u, v) with respect to the anchor points c_1 and c_2.
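A short numerical sketch of the Bézier-based edge length of Eqs. (6.7) - (6.8) is given below, under the assumption that the four 3D points along one candidate marching direction have already been sampled; the integral of Eq. (6.8) is approximated by summing short segment lengths.

# Sketch of Eqs. (6.7)-(6.8): estimate the surface distance between P0 and P3 as the
# arc length of a cubic Bezier curve defined by four 3D points along one marching path.
import numpy as np

def bezier_arc_length(P0, P1, P2, P3, samples=100):
    """P0..P3: 3D points (length-3 arrays) sampled along one marching direction."""
    t = np.linspace(0.0, 1.0, samples)[:, None]
    # Cubic Bezier curve, Eq. (6.7)
    B = ((1 - t)**3 * P0 + 3 * (1 - t)**2 * t * P1
         + 3 * (1 - t) * t**2 * P2 + t**3 * P3)
    # Arc length, Eq. (6.8), approximated by the sum of segment lengths
    return float(np.sum(np.linalg.norm(np.diff(B, axis=0), axis=1)))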

Then, we can extract the spatial curves γ_1(x, y, z) and γ_2(x, y, z) with equal geodesic distances d_1 and d_2, respectively. The corresponding point p(t) can be located by identifying the intersecting point of the spatial curves γ_1(x, y, z) and γ_2(x, y, z),

p(t) = {(x, y, z) ∈ γ_1(x, y, z) ∩ γ_2(x, y, z)}. (6.9)

Strain Computation

As shown in Fig. 6.9, once we can determine the point-to-point correspondence between a current frame S(t) and the initial frame S(t_0), we can then calculate the Green-Lagrange strain tensor at that specific point. According to Kirchhoff-Love shell theory [186], the coefficients E_αβ of the Green-Lagrange strain tensor can be modeled as [187, 188]:

Figure 6.9. Notations in differential geometry. (G_α, G_β) and (g_α, g_β) are the base vectors of the tangent planes of the initial configuration S(t_0) and the deformed surface S(t); G_3 and g_3 are the corresponding normal vectors; r(t_0) and r(t) are the position vectors; θ_1 and θ_2 denote the surface parametrization, which coincides with the world coordinates x and y in our research.

E_αβ = ε_αβ + θ_3 κ_αβ, α, β = 1, 2, (6.10)

where α, β = 1, 2 denote the indices of the strain tensor; ε_αβ denotes the membrane strain due to surface extension or compression; κ_αβ represents the curvature changes

due to bending, and θ_3 is the coordinate in the thickness direction (−0.5h ≤ θ_3 ≤ 0.5h, where h represents the thickness). Since the wings of our bird robot are made of a thin layer of inelastic plastic membrane with a uniform thickness of h, according to Borg [189, 190], the elastic membrane strain ε_αβ reduces to 0 and the model can be simplified as

E_αβ = (h/2) κ_αβ, α, β = 1, 2. (6.11)

The curvature change κ_αβ is defined by the change in the curvature tensor coefficients,

κ_αβ = b_αβ − B_αβ, α, β = 1, 2, (6.12)

where b_αβ and B_αβ are respectively the curvature tensor coefficients of the point on the current and initial surface configurations. In fact, b_αβ and B_αβ are defined by the second fundamental forms of the surfaces. To compute the second fundamental forms, suppose we have already found the corresponding points p(t_0) and p(t) respectively on the initial undeformed and current deformed surfaces; we select a neighborhood of pixels around both p(t_0) and p(t) and fit each into a quadratic surface r(θ_1, θ_2) as

x = θ_1, (6.13)
y = θ_2, (6.14)
z = Aθ_1^2 + Bθ_2^2 + Cθ_1θ_2 + Dθ_1 + Eθ_2 + F. (6.15)

We let θ_1 and θ_2 coincide with the world coordinates x and y to ensure that our surfaces use the same parameterization. Then, we find the tangent-plane base vectors (G_1, G_2) and (g_1, g_2) as

(G_1, G_2) = [∂r(t_0)/∂θ_1, ∂r(t_0)/∂θ_2], (6.16)
(g_1, g_2) = [∂r(t)/∂θ_1, ∂r(t)/∂θ_2]. (6.17)
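A compact sketch of the local quadratic fit of Eqs. (6.13) - (6.15) is shown below. It is a generic least-squares version, assuming the neighborhood points are given in world coordinates with θ_1 = x and θ_2 = y measured relative to the tracked point.

# Sketch of Eqs. (6.13)-(6.15): least-squares fit of a local quadratic patch
# z = A*x^2 + B*y^2 + C*x*y + D*x + E*y + F to a pixel neighborhood around a
# tracked point, with theta_1 = x and theta_2 = y.
import numpy as np

def fit_quadratic_patch(x, y, z):
    """x, y, z: 1D arrays of neighborhood coordinates (world units)."""
    M = np.column_stack([x**2, y**2, x * y, x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(M, z, rcond=None)
    return coeffs  # (A, B, C, D, E, F)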

Then, the second-order partial derivatives can be computed as

G_αβ = ∂²r(t_0) / (∂θ_α ∂θ_β), g_αβ = ∂²r(t) / (∂θ_α ∂θ_β), α, β = 1, 2. (6.19)

Finally, the curvature tensor coefficients b_αβ and B_αβ can be computed from their corresponding second fundamental forms,

B_αβ = G_αβ · G_3, b_αβ = g_αβ · g_3, α, β = 1, 2, (6.20)

where G_3 and g_3 are respectively the normal vectors given by

G_3 = (G_1 × G_2) / ||G_1 × G_2||, g_3 = (g_1 × g_2) / ||g_1 × g_2||. (6.21)

Once we have computed the curvature tensor coefficients b_αβ and B_αβ, we can compute the strain tensor coefficients E_αβ by referring to Eqs. (6.11) - (6.12). The strain maps shown in Fig. 6.5 were generated by extracting the dominant eigenvalue of the computed strain tensor.
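Combining the quadratic fit above with Eqs. (6.16) - (6.21) and (6.11) - (6.12), the bending strain at one tracked point can be sketched as below. This is a simplified illustration under the stated parameterization, evaluated at the tracked point θ_1 = θ_2 = 0, and taking the dominant eigenvalue to be the largest-magnitude eigenvalue is an assumption.

# Sketch of Eqs. (6.16)-(6.21) and (6.11)-(6.12): second fundamental form of the fitted
# patch z = A x^2 + B y^2 + C x y + D x + E y + F at the tracked point (theta_1 = theta_2 = 0),
# and the resulting Green-Lagrange bending strain.
import numpy as np

def second_fundamental_form(coeffs):
    A, B, C, D, E, F = coeffs
    r1 = np.array([1.0, 0.0, D])          # tangent base vectors, Eqs. (6.16)-(6.17)
    r2 = np.array([0.0, 1.0, E])
    n = np.cross(r1, r2)
    n /= np.linalg.norm(n)                # unit normal vector, Eq. (6.21)
    r11 = np.array([0.0, 0.0, 2 * A])     # second-order partial derivatives, Eq. (6.19)
    r22 = np.array([0.0, 0.0, 2 * B])
    r12 = np.array([0.0, 0.0, C])
    # Curvature tensor coefficients from the second fundamental form, Eq. (6.20)
    return np.array([[r11 @ n, r12 @ n],
                     [r12 @ n, r22 @ n]])

def bending_strain(coeffs_initial, coeffs_current, h):
    """Dominant bending strain at a tracked point; h is the membrane thickness."""
    kappa = second_fundamental_form(coeffs_current) - second_fundamental_form(coeffs_initial)
    E_tensor = 0.5 * h * kappa            # Eqs. (6.11)-(6.12)
    eigvals = np.linalg.eigvalsh(E_tensor)
    return eigvals[np.argmax(np.abs(eigvals))]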

7. SUMMARY AND FUTURE PROSPECTS

7.1 Summary of Contributions

This dissertation research has made the following contributions:

Developed novel calibration frameworks to calibrate a structured light system with an out-of-focus projector. A structured light system with binary defocusing technology has the capability of achieving superfast (e.g., kHz) measurement speed, yet the calibration difficulty associated with a defocused projector makes it difficult to develop a high-accuracy 3D shape measurement system. We have theoretically proved and experimentally verified that an out-of-focus projector can be accurately calibrated by establishing the one-to-one mapping between the projector and the camera in the phase domain. Our developed calibration framework performs consistently well under different defocusing degrees, and has achieved an accuracy of 73 µm within a calibration volume of 150(H) mm × 250(W) mm × 200(D) mm. Besides, in a well-designed system, the depth sensitivity of one pattern direction (horizontal or vertical) is maximized, while the sensitivity in the other direction is close to zero. Under such a system, we discovered that the pattern directions used in calibration matter in terms of measurement accuracy. Therefore, we developed a calibration framework based on the optimal fringe angle, which can increase the measurement accuracy by up to 38% compared to the former method. These innovations broke the ground for 3D shape measurement by realizing superfast speed and high accuracy at the same time. We published this research in the journal Applied Optics and the details were introduced in Chapter 2.

Developed a flexible calibration approach to calibrate a microscopic structured

light system using a telecentric lens. A microscopic structured light system has great potential for reaching µm-level measurement accuracy with a medium-scale measurement range (e.g., a spatial span of several cm^3), which is difficult to achieve with other 3D shape measurement technologies. Within such a technology, the usage of telecentric lenses allows the 3D shape measurements to have a relatively large DOF (e.g., several mm). However, a telecentric lens, owing to its nature of orthographic projection, is insensitive to depth variations along its optical or Z axis, which makes it difficult to calibrate with flexibility, simplicity and high accuracy. We innovated a flexible calibration method for the telecentric lens by utilizing the X, Y and Z coordinates obtained from a pinhole lens. Our calibration framework has achieved approximately 10 µm accuracy within a volume of 10(H) mm × 8(W) mm × 5(D) mm. We published this research in the journal Optics Express and the details were introduced in Chapter 3.

Developed a single-shot absolute 3D recovery method. For measurements of scenes with high-speed motion, Fourier transform profilometry is one of the widely adopted methods owing to its single-shot nature. However, such a single-shot technology typically lacks enough cues to recover absolute 3D geometries purely from a captured 8-bit grayscale image without embedded markers. We developed a computational framework that addresses this limitation of the single-shot technology by taking advantage of the geometric constraints of a structured light system. Our technology has realized point-by-point absolute 3D recovery of multiple spatially isolated objects purely from a single-shot 8-bit grayscale image. We published this research in the journal Applied Optics and the details were introduced in Chapter 4.

Developed a motion-induced error reduction framework. For dynamic 3D shape measurements, reducing the errors caused by object motions is one of the major challenges for achieving high measurement accuracy. To address such concerns, we developed a hybrid error reduction framework by combining phase-shifting profilometry with Fourier transform profilometry. Our technology

is capable of measuring many spatially isolated free-falling objects with reduced motion-induced errors or artifacts compared to the conventional two-frequency method. We published this research in the journal Optics Express and the details were introduced in Chapter 5.

Developed a dense 3D strain measurement framework for robotic flapping wings. The aforementioned technological developments have all contributed to achieving simultaneous superfast and high-accuracy 3D shape measurements. With this developed platform, we explored interdisciplinary research on dense 3D strain measurement for robotic flapping wings, which could potentially provide knowledge for a better design of bio-inspired flapping wing robots. We used our developed superfast 3D shape measurement technologies to precisely measure the dynamic deformations of the wings with high spatial and temporal resolutions, and innovated a geometry-based 3D strain analysis framework based on geodesic computation and Kirchhoff-Love shell theory. Our technology requires neither a large number of fiducial markers nor special surface treatment. The details were introduced in Chapter 6.

7.2 Future Prospects

In this dissertation research, we have developed technologies towards achieving superfast, high-accuracy 3D shape measurements, and successfully explored novel methods for flapping wing mechanics analysis with the developed platform technologies. However, there are many other applications to be explored in future research. Here we would like to pick several examples to demonstrate the potential of real-time (e.g., 30 Hz) to superfast (e.g., kHz) 3D shape measurements.

Flapping Wing Robotics Design

Since we have developed technologies that can measure both the 3D geometries and the dense 3D strain maps, such technologies can be used to provide information to optimize the geometry-based designs of flapping wing robots. One can first employ our developed technology to capture the flapping flight of different species (e.g., hummingbirds, hawkmoths, etc.). With the acquired 3D geometry and strain maps, one can create a library of different designs to approximate the biological counterparts, such as the ones shown in Fig. 7.1.

Figure 7.1. An illustration of geometry-based modeling and design (wing space and body space).

As aforementioned, the ultimate goal of bio-inspired design is to mimic a real biological scene. Our technology can also be used to iteratively optimize the design until it produces a robot that is close enough to the real flying bird or insect. Figure 7.2 illustrates the idea of a design pipeline with closed-loop control. Once the designer has finished the initial design, one can perform physics-based simulations of the designed robot and overlay them with the captured real biological scene using augmented visualization techniques. The initial design process will iterate until the simulated data is close enough to the actual captured data. Once the initial design process is finished, the robot will be manufactured and tested in actual flights. One can then capture the actual flights of the robot using our platform technology. The obtained data can then be superimposed with the data captured from the biological scene to examine the difference, which will serve as the feedback input for the next iteration of the entire design process. Via this method, we can eventually approach the goal of approximating a biological scene using iterative design optimization.

Figure 7.2. The closed-loop control pipeline of geometry-based design.

Engine Inspection

Our developed technologies could also be beneficial to manufacturing inspection. Here we take vehicle engine inspection as an example to demonstrate the future promise. In vehicle engines, the crankshaft is one of the most crucial elements. To reduce unwanted vibrations when the engine is in operation, the actual balancing axis of the crankshaft needs to be close enough to its neutral axis. Current inspection of crankshafts is typically performed in a specialized balancing machine. However, a major limitation of current balancing inspection is the requirement of recalibration whenever a new product type comes in. Given that the recalibration could be quite a time-consuming process (typically taking several hours), this can be very costly for the manufacturer. Our superfast 3D shape measurement technology could be a potential solution to address this limitation. If one can put our superfast 3D shape measurement system inside the inspection machine, the dynamic rotary process of the crankshaft can be recorded with high accuracy. With the obtained dynamic 3D data, one can first analyze the actual balancing axis of the crankshaft through geometry analysis, and then make a material cutting plan based on the analysis result. Different from current balancing inspection methods, our technology is applicable to measurements of different object shapes. Therefore, the future balancing inspection process could be flexibly applied to inspect different types of crankshafts.

Flexible Assembly Operations

Robotic assembly systems are widely used for industrial automation. Nowadays, customers are becoming more and more demanding and require customized product designs more often than in the past. Therefore, there is an increasing need to design assembly systems with great flexibility that are adaptable to product changes. However, most assembly robots are programmed to perform fixed operations, and this limited flexibility may not meet the needs in many situations. Given that our 3D measurement sensor can accurately measure the 3D geometry under a given world coordinate system (an example is shown in Fig. 7.3), if an assembly robot is armed with our 3D measurement sensor, the robot can in effect see the part being operated on, which makes it possible to flexibly handle different types of parts assisted by the feedback from the 3D sensor.

Figure 7.3. An example of measuring a complex mechanical part (original picture from [1]). (a) A photograph of the part; (b) one of the captured fringe patterns; (c) reconstructed 3D geometry.

Figure 7.4 illustrates the conceptual idea of a closed-loop control of flexible assembly operations. Our 3D sensor can provide input (e.g., part geometry, current XYZ locations) to assist the assembly robot in picking and placing the part at its designated position. Then, the 3D sensor can also provide instant feedback so that the robot can iteratively make adjustments until the part is aligned well enough to proceed.


More information

3D data merging using Holoimage

3D data merging using Holoimage Iowa State University From the SelectedWorks of Song Zhang September, 27 3D data merging using Holoimage Song Zhang, Harvard University Shing-Tung Yau, Harvard University Available at: https://works.bepress.com/song_zhang/34/

More information

Laser sensors. Transmitter. Receiver. Basilio Bona ROBOTICA 03CFIOR

Laser sensors. Transmitter. Receiver. Basilio Bona ROBOTICA 03CFIOR Mobile & Service Robotics Sensors for Robotics 3 Laser sensors Rays are transmitted and received coaxially The target is illuminated by collimated rays The receiver measures the time of flight (back and

More information

Study on Gear Chamfering Method based on Vision Measurement

Study on Gear Chamfering Method based on Vision Measurement International Conference on Informatization in Education, Management and Business (IEMB 2015) Study on Gear Chamfering Method based on Vision Measurement Jun Sun College of Civil Engineering and Architecture,

More information

Sensing Deforming and Moving Objects with Commercial Off the Shelf Hardware

Sensing Deforming and Moving Objects with Commercial Off the Shelf Hardware Sensing Deforming and Moving Objects with Commercial Off the Shelf Hardware This work supported by: Philip Fong Florian Buron Stanford University Motivational Applications Human tissue modeling for surgical

More information

3D Modeling of Objects Using Laser Scanning

3D Modeling of Objects Using Laser Scanning 1 3D Modeling of Objects Using Laser Scanning D. Jaya Deepu, LPU University, Punjab, India Email: Jaideepudadi@gmail.com Abstract: In the last few decades, constructing accurate three-dimensional models

More information

3D Scanning. Qixing Huang Feb. 9 th Slide Credit: Yasutaka Furukawa

3D Scanning. Qixing Huang Feb. 9 th Slide Credit: Yasutaka Furukawa 3D Scanning Qixing Huang Feb. 9 th 2017 Slide Credit: Yasutaka Furukawa Geometry Reconstruction Pipeline This Lecture Depth Sensing ICP for Pair-wise Alignment Next Lecture Global Alignment Pairwise Multiple

More information

3D Computer Vision. Depth Cameras. Prof. Didier Stricker. Oliver Wasenmüller

3D Computer Vision. Depth Cameras. Prof. Didier Stricker. Oliver Wasenmüller 3D Computer Vision Depth Cameras Prof. Didier Stricker Oliver Wasenmüller Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de

More information

Depth Sensors Kinect V2 A. Fornaser

Depth Sensors Kinect V2 A. Fornaser Depth Sensors Kinect V2 A. Fornaser alberto.fornaser@unitn.it Vision Depth data It is not a 3D data, It is a map of distances Not a 3D, not a 2D it is a 2.5D or Perspective 3D Complete 3D - Tomography

More information

Computer Vision. 3D acquisition

Computer Vision. 3D acquisition è Computer 3D acquisition Acknowledgement Courtesy of Prof. Luc Van Gool 3D acquisition taxonomy s image cannot currently be displayed. 3D acquisition methods Thi passive active uni-directional multi-directional

More information

Stereo. 11/02/2012 CS129, Brown James Hays. Slides by Kristen Grauman

Stereo. 11/02/2012 CS129, Brown James Hays. Slides by Kristen Grauman Stereo 11/02/2012 CS129, Brown James Hays Slides by Kristen Grauman Multiple views Multi-view geometry, matching, invariant features, stereo vision Lowe Hartley and Zisserman Why multiple views? Structure

More information

Natural method for three-dimensional range data compression

Natural method for three-dimensional range data compression Natural method for three-dimensional range data compression Pan Ou,2 and Song Zhang, * Department of Mechanical Engineering, Iowa State University, Ames, Iowa 5, USA 2 School of Instrumentation Science

More information

Improved phase-unwrapping method using geometric constraints

Improved phase-unwrapping method using geometric constraints Improved phase-unwrapping method using geometric constraints Guangliang Du 1, Min Wang 1, Canlin Zhou 1*,Shuchun Si 1, Hui Li 1, Zhenkun Lei 2,Yanjie Li 3 1 School of Physics, Shandong University, Jinan

More information

Phase error correction based on Inverse Function Shift Estimation in Phase Shifting Profilometry using a digital video projector

Phase error correction based on Inverse Function Shift Estimation in Phase Shifting Profilometry using a digital video projector University of Wollongong Research Online Faculty of Informatics - Papers (Archive) Faculty of Engineering and Information Sciences 2010 Phase error correction based on Inverse Function Shift Estimation

More information

Adaptive Zoom Distance Measuring System of Camera Based on the Ranging of Binocular Vision

Adaptive Zoom Distance Measuring System of Camera Based on the Ranging of Binocular Vision Adaptive Zoom Distance Measuring System of Camera Based on the Ranging of Binocular Vision Zhiyan Zhang 1, Wei Qian 1, Lei Pan 1 & Yanjun Li 1 1 University of Shanghai for Science and Technology, China

More information

Structured light , , Computational Photography Fall 2017, Lecture 27

Structured light , , Computational Photography Fall 2017, Lecture 27 Structured light http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2017, Lecture 27 Course announcements Homework 5 has been graded. - Mean: 129. - Median:

More information

High-speed three-dimensional shape measurement system using a modified two-plus-one phase-shifting algorithm

High-speed three-dimensional shape measurement system using a modified two-plus-one phase-shifting algorithm 46 11, 113603 November 2007 High-speed three-dimensional shape measurement system using a modified two-plus-one phase-shifting algorithm Song Zhang, MEMBER SPIE Shing-Tung Yau Harvard University Department

More information

Superfast high-resolution absolute 3D recovery of a stabilized flapping flight process

Superfast high-resolution absolute 3D recovery of a stabilized flapping flight process Vol. 25, No. 22 30 Oct 2017 OPTICS EXPRESS 27270 Superfast high-resolution absolute 3D recovery of a stabilized flapping flight process B EIWEN L I 1,* AND S ONG Z HANG 2 1 Department 2 School of Mechanical

More information

Sensor Modalities. Sensor modality: Different modalities:

Sensor Modalities. Sensor modality: Different modalities: Sensor Modalities Sensor modality: Sensors which measure same form of energy and process it in similar ways Modality refers to the raw input used by the sensors Different modalities: Sound Pressure Temperature

More information

HANDBOOK OF THE MOIRE FRINGE TECHNIQUE

HANDBOOK OF THE MOIRE FRINGE TECHNIQUE k HANDBOOK OF THE MOIRE FRINGE TECHNIQUE K. PATORSKI Institute for Design of Precise and Optical Instruments Warsaw University of Technology Warsaw, Poland with a contribution by M. KUJAWINSKA Institute

More information

L2 Data Acquisition. Mechanical measurement (CMM) Structured light Range images Shape from shading Other methods

L2 Data Acquisition. Mechanical measurement (CMM) Structured light Range images Shape from shading Other methods L2 Data Acquisition Mechanical measurement (CMM) Structured light Range images Shape from shading Other methods 1 Coordinate Measurement Machine Touch based Slow Sparse Data Complex planning Accurate 2

More information

Multiwavelength depth encoding method for 3D range geometry compression

Multiwavelength depth encoding method for 3D range geometry compression 684 Vol. 54, No. 36 / December 2 25 / Applied Optics Research Article Multiwavelength depth encoding method for 3D range geometry compression TYLER BELL AND SONG ZHANG* School of Mechanical Engineering,

More information

Metrology and Sensing

Metrology and Sensing Metrology and Sensing Lecture 4: Fringe projection 2017-11-09 Herbert Gross Winter term 2017 www.iap.uni-jena.de 2 Preliminary Schedule No Date Subject Detailed Content 1 19.10. Introduction Introduction,

More information

Shift estimation method based fringe pattern profilometry and performance comparison

Shift estimation method based fringe pattern profilometry and performance comparison University of Wollongong Research Online Faculty of Informatics - Papers (Archive) Faculty of Engineering and Information Sciences 2005 Shift estimation method based fringe pattern profilometry and performance

More information

Lecture 19: Depth Cameras. Visual Computing Systems CMU , Fall 2013

Lecture 19: Depth Cameras. Visual Computing Systems CMU , Fall 2013 Lecture 19: Depth Cameras Visual Computing Systems Continuing theme: computational photography Cameras capture light, then extensive processing produces the desired image Today: - Capturing scene depth

More information

Phase error compensation for three-dimensional shape measurement with projector defocusing

Phase error compensation for three-dimensional shape measurement with projector defocusing Mechanical Engineering Publications Mechanical Engineering 6-10-2011 Phase error compensation for three-dimensional shape measurement with projector defocusing Ying Xu Iowa State University Laura D. Ekstrand

More information

Outline. ETN-FPI Training School on Plenoptic Sensing

Outline. ETN-FPI Training School on Plenoptic Sensing Outline Introduction Part I: Basics of Mathematical Optimization Linear Least Squares Nonlinear Optimization Part II: Basics of Computer Vision Camera Model Multi-Camera Model Multi-Camera Calibration

More information

Metrology and Sensing

Metrology and Sensing Metrology and Sensing Lecture 4: Fringe projection 2018-11-09 Herbert Gross Winter term 2018 www.iap.uni-jena.de 2 Schedule Optical Metrology and Sensing 2018 No Date Subject Detailed Content 1 16.10.

More information

Measurements using three-dimensional product imaging

Measurements using three-dimensional product imaging ARCHIVES of FOUNDRY ENGINEERING Published quarterly as the organ of the Foundry Commission of the Polish Academy of Sciences ISSN (1897-3310) Volume 10 Special Issue 3/2010 41 46 7/3 Measurements using

More information

Three-dimensional data merging using holoimage

Three-dimensional data merging using holoimage Iowa State University From the SelectedWorks of Song Zhang March 21, 2008 Three-dimensional data merging using holoimage Song Zhang, Harvard University Shing-Tung Yau, Harvard University Available at:

More information

High dynamic range scanning technique

High dynamic range scanning technique 48 3, 033604 March 2009 High dynamic range scanning technique Song Zhang, MEMBER SPIE Iowa State University Department of Mechanical Engineering Virtual Reality Applications Center Human Computer Interaction

More information

Novel Approaches in Structured Light Illumination

Novel Approaches in Structured Light Illumination University of Kentucky UKnowledge University of Kentucky Doctoral Dissertations Graduate School 2010 Novel Approaches in Structured Light Illumination Yongchang Wang University of Kentucky, ychwang6@gmail.com

More information

Shape and deformation measurements by high-resolution fringe projection methods February 2018

Shape and deformation measurements by high-resolution fringe projection methods February 2018 Shape and deformation measurements by high-resolution fringe projection methods February 2018 Outline Motivation System setup Principles of operation Calibration Applications Conclusions & Future work

More information

1 Laboratory #4: Division-of-Wavefront Interference

1 Laboratory #4: Division-of-Wavefront Interference 1051-455-0073, Physical Optics 1 Laboratory #4: Division-of-Wavefront Interference 1.1 Theory Recent labs on optical imaging systems have used the concept of light as a ray in goemetrical optics to model

More information

DEVELOPMENT OF REAL TIME 3-D MEASUREMENT SYSTEM USING INTENSITY RATIO METHOD

DEVELOPMENT OF REAL TIME 3-D MEASUREMENT SYSTEM USING INTENSITY RATIO METHOD DEVELOPMENT OF REAL TIME 3-D MEASUREMENT SYSTEM USING INTENSITY RATIO METHOD Takeo MIYASAKA and Kazuo ARAKI Graduate School of Computer and Cognitive Sciences, Chukyo University, Japan miyasaka@grad.sccs.chukto-u.ac.jp,

More information

Canny Edge Based Self-localization of a RoboCup Middle-sized League Robot

Canny Edge Based Self-localization of a RoboCup Middle-sized League Robot Canny Edge Based Self-localization of a RoboCup Middle-sized League Robot Yoichi Nakaguro Sirindhorn International Institute of Technology, Thammasat University P.O. Box 22, Thammasat-Rangsit Post Office,

More information

Optical Active 3D Scanning. Gianpaolo Palma

Optical Active 3D Scanning. Gianpaolo Palma Optical Active 3D Scanning Gianpaolo Palma 3D Scanning Taxonomy SHAPE ACQUISTION CONTACT NO-CONTACT NO DESTRUCTIVE DESTRUCTIVE X-RAY MAGNETIC OPTICAL ACOUSTIC CMM ROBOTIC GANTRY SLICING ACTIVE PASSIVE

More information

Measurement of 3D Foot Shape Deformation in Motion

Measurement of 3D Foot Shape Deformation in Motion Measurement of 3D Foot Shape Deformation in Motion Makoto Kimura Masaaki Mochimaru Takeo Kanade Digital Human Research Center National Institute of Advanced Industrial Science and Technology, Japan The

More information

Advanced Vision Guided Robotics. David Bruce Engineering Manager FANUC America Corporation

Advanced Vision Guided Robotics. David Bruce Engineering Manager FANUC America Corporation Advanced Vision Guided Robotics David Bruce Engineering Manager FANUC America Corporation Traditional Vision vs. Vision based Robot Guidance Traditional Machine Vision Determine if a product passes or

More information

Accurate projector calibration method by using an optical coaxial camera

Accurate projector calibration method by using an optical coaxial camera Accurate projector calibration method by using an optical coaxial camera Shujun Huang, 1 Lili Xie, 1 Zhangying Wang, 1 Zonghua Zhang, 1,3, * Feng Gao, 2 and Xiangqian Jiang 2 1 School of Mechanical Engineering,

More information

Minimizing Noise and Bias in 3D DIC. Correlated Solutions, Inc.

Minimizing Noise and Bias in 3D DIC. Correlated Solutions, Inc. Minimizing Noise and Bias in 3D DIC Correlated Solutions, Inc. Overview Overview of Noise and Bias Digital Image Correlation Background/Tracking Function Minimizing Noise Focus Contrast/Lighting Glare

More information

Integration of 3D Stereo Vision Measurements in Industrial Robot Applications

Integration of 3D Stereo Vision Measurements in Industrial Robot Applications Integration of 3D Stereo Vision Measurements in Industrial Robot Applications Frank Cheng and Xiaoting Chen Central Michigan University cheng1fs@cmich.edu Paper 34, ENG 102 Abstract Three dimensional (3D)

More information

Multiple View Geometry

Multiple View Geometry Multiple View Geometry Martin Quinn with a lot of slides stolen from Steve Seitz and Jianbo Shi 15-463: Computational Photography Alexei Efros, CMU, Fall 2007 Our Goal The Plenoptic Function P(θ,φ,λ,t,V

More information

Dynamic three-dimensional sensing for specular surface with monoscopic fringe reflectometry

Dynamic three-dimensional sensing for specular surface with monoscopic fringe reflectometry Dynamic three-dimensional sensing for specular surface with monoscopic fringe reflectometry Lei Huang,* Chi Seng Ng, and Anand Krishna Asundi School of Mechanical and Aerospace Engineering, Nanyang Technological

More information

Active Stereo Vision. COMP 4900D Winter 2012 Gerhard Roth

Active Stereo Vision. COMP 4900D Winter 2012 Gerhard Roth Active Stereo Vision COMP 4900D Winter 2012 Gerhard Roth Why active sensors? Project our own texture using light (usually laser) This simplifies correspondence problem (much easier) Pluses Can handle different

More information

Transparent Object Shape Measurement Based on Deflectometry

Transparent Object Shape Measurement Based on Deflectometry Proceedings Transparent Object Shape Measurement Based on Deflectometry Zhichao Hao and Yuankun Liu * Opto-Electronics Department, Sichuan University, Chengdu 610065, China; 2016222055148@stu.scu.edu.cn

More information

Multi-projector-type immersive light field display

Multi-projector-type immersive light field display Multi-projector-type immersive light field display Qing Zhong ( é) 1, Beishi Chen (í ì) 1, Haifeng Li (Ó ô) 1, Xu Liu ( Ê) 1, Jun Xia ( ) 2, Baoping Wang ( ) 2, and Haisong Xu (Å Ø) 1 1 State Key Laboratory

More information

Image Formation. Antonino Furnari. Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania

Image Formation. Antonino Furnari. Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania Image Formation Antonino Furnari Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania furnari@dmi.unict.it 18/03/2014 Outline Introduction; Geometric Primitives

More information

White-light interference microscopy: minimization of spurious diffraction effects by geometric phase-shifting

White-light interference microscopy: minimization of spurious diffraction effects by geometric phase-shifting White-light interference microscopy: minimization of spurious diffraction effects by geometric phase-shifting Maitreyee Roy 1, *, Joanna Schmit 2 and Parameswaran Hariharan 1 1 School of Physics, University

More information

(Refer Slide Time 00:17) Welcome to the course on Digital Image Processing. (Refer Slide Time 00:22)

(Refer Slide Time 00:17) Welcome to the course on Digital Image Processing. (Refer Slide Time 00:22) Digital Image Processing Prof. P. K. Biswas Department of Electronics and Electrical Communications Engineering Indian Institute of Technology, Kharagpur Module Number 01 Lecture Number 02 Application

More information

3D Modelling with Structured Light Gamma Calibration

3D Modelling with Structured Light Gamma Calibration 3D Modelling with Structured Light Gamma Calibration Eser SERT 1, Ibrahim Taner OKUMUS 1, Deniz TASKIN 2 1 Computer Engineering Department, Engineering and Architecture Faculty, Kahramanmaras Sutcu Imam

More information

Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision

Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision What Happened Last Time? Human 3D perception (3D cinema) Computational stereo Intuitive explanation of what is meant by disparity Stereo matching

More information

specular diffuse reflection.

specular diffuse reflection. Lesson 8 Light and Optics The Nature of Light Properties of Light: Reflection Refraction Interference Diffraction Polarization Dispersion and Prisms Total Internal Reflection Huygens s Principle The Nature

More information

Natural Viewing 3D Display

Natural Viewing 3D Display We will introduce a new category of Collaboration Projects, which will highlight DoCoMo s joint research activities with universities and other companies. DoCoMo carries out R&D to build up mobile communication,

More information

AUTOMATED 4 AXIS ADAYfIVE SCANNING WITH THE DIGIBOTICS LASER DIGITIZER

AUTOMATED 4 AXIS ADAYfIVE SCANNING WITH THE DIGIBOTICS LASER DIGITIZER AUTOMATED 4 AXIS ADAYfIVE SCANNING WITH THE DIGIBOTICS LASER DIGITIZER INTRODUCTION The DIGIBOT 3D Laser Digitizer is a high performance 3D input device which combines laser ranging technology, personal

More information

Integrating 3D Vision Measurements into Industrial Robot Applications

Integrating 3D Vision Measurements into Industrial Robot Applications Integrating 3D Vision Measurements into Industrial Robot Applications by Frank S. Cheng cheng1fs@cmich.edu Engineering and echnology Central Michigan University Xiaoting Chen Graduate Student Engineering

More information

Complex Sensors: Cameras, Visual Sensing. The Robotics Primer (Ch. 9) ECE 497: Introduction to Mobile Robotics -Visual Sensors

Complex Sensors: Cameras, Visual Sensing. The Robotics Primer (Ch. 9) ECE 497: Introduction to Mobile Robotics -Visual Sensors Complex Sensors: Cameras, Visual Sensing The Robotics Primer (Ch. 9) Bring your laptop and robot everyday DO NOT unplug the network cables from the desktop computers or the walls Tuesday s Quiz is on Visual

More information

Supplementary materials of Multispectral imaging using a single bucket detector

Supplementary materials of Multispectral imaging using a single bucket detector Supplementary materials of Multispectral imaging using a single bucket detector Liheng Bian 1, Jinli Suo 1,, Guohai Situ 2, Ziwei Li 1, Jingtao Fan 1, Feng Chen 1 and Qionghai Dai 1 1 Department of Automation,

More information

Projector Calibration for Pattern Projection Systems

Projector Calibration for Pattern Projection Systems Projector Calibration for Pattern Projection Systems I. Din *1, H. Anwar 2, I. Syed 1, H. Zafar 3, L. Hasan 3 1 Department of Electronics Engineering, Incheon National University, Incheon, South Korea.

More information

1 (5 max) 2 (10 max) 3 (20 max) 4 (30 max) 5 (10 max) 6 (15 extra max) total (75 max + 15 extra)

1 (5 max) 2 (10 max) 3 (20 max) 4 (30 max) 5 (10 max) 6 (15 extra max) total (75 max + 15 extra) Mierm Exam CS223b Stanford CS223b Computer Vision, Winter 2004 Feb. 18, 2004 Full Name: Email: This exam has 7 pages. Make sure your exam is not missing any sheets, and write your name on every page. The

More information

EXAM SOLUTIONS. Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006,

EXAM SOLUTIONS. Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006, School of Computer Science and Communication, KTH Danica Kragic EXAM SOLUTIONS Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006, 14.00 19.00 Grade table 0-25 U 26-35 3 36-45

More information

Model-based segmentation and recognition from range data

Model-based segmentation and recognition from range data Model-based segmentation and recognition from range data Jan Boehm Institute for Photogrammetry Universität Stuttgart Germany Keywords: range image, segmentation, object recognition, CAD ABSTRACT This

More information

ECE-161C Cameras. Nuno Vasconcelos ECE Department, UCSD

ECE-161C Cameras. Nuno Vasconcelos ECE Department, UCSD ECE-161C Cameras Nuno Vasconcelos ECE Department, UCSD Image formation all image understanding starts with understanding of image formation: projection of a scene from 3D world into image on 2D plane 2

More information

ksa MOS Ultra-Scan Performance Test Data

ksa MOS Ultra-Scan Performance Test Data ksa MOS Ultra-Scan Performance Test Data Introduction: ksa MOS Ultra Scan 200mm Patterned Silicon Wafers The ksa MOS Ultra Scan is a flexible, highresolution scanning curvature and tilt-measurement system.

More information

High-speed 3D shape measurement using Fourier transform and stereo vision

High-speed 3D shape measurement using Fourier transform and stereo vision Lu et al. Journal of the European Optical Society-Rapid Publications (2018) 14:22 https://doi.org/10.1186/s41476-018-0090-z Journal of the European Optical Society-Rapid Publications RESEARCH High-speed

More information

What is Frequency Domain Analysis?

What is Frequency Domain Analysis? R&D Technical Bulletin P. de Groot 9/3/93 What is Frequency Domain Analysis? Abstract: The Zygo NewView is a scanning white-light interferometer that uses frequency domain analysis (FDA) to generate quantitative

More information

Stereo and Epipolar geometry

Stereo and Epipolar geometry Previously Image Primitives (feature points, lines, contours) Today: Stereo and Epipolar geometry How to match primitives between two (multiple) views) Goals: 3D reconstruction, recognition Jana Kosecka

More information

Deep Learning-driven Depth from Defocus via Active Multispectral Quasi-random Projections with Complex Subpatterns

Deep Learning-driven Depth from Defocus via Active Multispectral Quasi-random Projections with Complex Subpatterns Deep Learning-driven Depth from Defocus via Active Multispectral Quasi-random Projections with Complex Subpatterns Avery Ma avery.ma@uwaterloo.ca Alexander Wong a28wong@uwaterloo.ca David A Clausi dclausi@uwaterloo.ca

More information

Miniaturized Camera Systems for Microfactories

Miniaturized Camera Systems for Microfactories Miniaturized Camera Systems for Microfactories Timo Prusi, Petri Rokka, and Reijo Tuokko Tampere University of Technology, Department of Production Engineering, Korkeakoulunkatu 6, 33720 Tampere, Finland

More information

ENGN D Photography / Spring 2018 / SYLLABUS

ENGN D Photography / Spring 2018 / SYLLABUS ENGN 2502 3D Photography / Spring 2018 / SYLLABUS Description of the proposed course Over the last decade digital photography has entered the mainstream with inexpensive, miniaturized cameras routinely

More information

Stereo Vision. MAN-522 Computer Vision

Stereo Vision. MAN-522 Computer Vision Stereo Vision MAN-522 Computer Vision What is the goal of stereo vision? The recovery of the 3D structure of a scene using two or more images of the 3D scene, each acquired from a different viewpoint in

More information

Advanced Stamping Manufacturing Engineering, Auburn Hills, MI

Advanced Stamping Manufacturing Engineering, Auburn Hills, MI RECENT DEVELOPMENT FOR SURFACE DISTORTION MEASUREMENT L.X. Yang 1, C.Q. Du 2 and F. L. Cheng 2 1 Dep. of Mechanical Engineering, Oakland University, Rochester, MI 2 DaimlerChrysler Corporation, Advanced

More information

High-speed, high-accuracy 3D shape measurement based on binary color fringe defocused projection

High-speed, high-accuracy 3D shape measurement based on binary color fringe defocused projection J. Eur. Opt. Soc.-Rapid 1, 1538 (215) www.jeos.org High-speed, high-accuracy 3D shape measurement based on binary color fringe defocused projection B. Li Key Laboratory of Nondestructive Testing (Ministry

More information

UNIT-2 IMAGE REPRESENTATION IMAGE REPRESENTATION IMAGE SENSORS IMAGE SENSORS- FLEX CIRCUIT ASSEMBLY

UNIT-2 IMAGE REPRESENTATION IMAGE REPRESENTATION IMAGE SENSORS IMAGE SENSORS- FLEX CIRCUIT ASSEMBLY 18-08-2016 UNIT-2 In the following slides we will consider what is involved in capturing a digital image of a real-world scene Image sensing and representation Image Acquisition Sampling and quantisation

More information

Project 3 code & artifact due Tuesday Final project proposals due noon Wed (by ) Readings Szeliski, Chapter 10 (through 10.5)

Project 3 code & artifact due Tuesday Final project proposals due noon Wed (by  ) Readings Szeliski, Chapter 10 (through 10.5) Announcements Project 3 code & artifact due Tuesday Final project proposals due noon Wed (by email) One-page writeup (from project web page), specifying:» Your team members» Project goals. Be specific.

More information

Edge and local feature detection - 2. Importance of edge detection in computer vision

Edge and local feature detection - 2. Importance of edge detection in computer vision Edge and local feature detection Gradient based edge detection Edge detection by function fitting Second derivative edge detectors Edge linking and the construction of the chain graph Edge and local feature

More information

Digital Volume Correlation for Materials Characterization

Digital Volume Correlation for Materials Characterization 19 th World Conference on Non-Destructive Testing 2016 Digital Volume Correlation for Materials Characterization Enrico QUINTANA, Phillip REU, Edward JIMENEZ, Kyle THOMPSON, Sharlotte KRAMER Sandia National

More information