Design, Development, and Calibration of a 3D Sensory System for Surface Profiling of 3D Micro-Scale Parts


Design, Development, and Calibration of a 3D Sensory System for Surface Profiling of 3D Micro-Scale Parts

by

Wei-Hao Chang

A thesis submitted in conformity with the requirements for the degree of Master of Applied Science

Mechanical and Industrial Engineering
University of Toronto

Copyright by Wei-Hao Chang 2014

Design, Development, and Calibration of a 3D Sensory System for Surface Profiling of 3D Micro-Scale Parts

Wei-Hao Chang
Master of Applied Science
Department of Mechanical and Industrial Engineering
University of Toronto
2014

Abstract

This thesis first presents the development of a structured-light (SL) sensory system and the extensive experiments conducted to verify its use for measuring macro-scale 3D parts. The thesis then presents the development of an SL sensory system for micro-scale applications. A novel calibration method for micro-scale applications is presented to determine the overall SL system calibration parameters. The method includes a novel calibration model that explicitly considers the microscope lens parameters of the hardware components (camera and projector) and addresses the narrow depth-of-field (DOF) behaviour of these lenses; the latter is achieved by incorporating an image focus fusion technique. The proposed approach is implemented and tested on a designed SL system. The measurement ability of the SL sensory system was verified by measuring micro-scale features. The measurement results demonstrate great potential for using this sensory system to obtain 3D information of complex micro-scale parts.

Acknowledgements

I would like to express my gratitude to my supervisor, Professor Goldie Nejat, for her kind yet constructive guidance and support of my research work. I gratefully thank Professor Beno Benhabib for kindly providing me with access to his laboratory to use the linear stage utilized for calibration and measurement error estimation of my 3D sensory system. I would like to thank Professor J. K. Mills for his continuous support of this project. I would like to thank my M.A.Sc. thesis committee for their time and input. I would like to thank my dearest colleague and friend Veronica for her constant guidance and support of my research work. Furthermore, I would also like to thank my lab mates, Derek Mccoll, Geoffery Louie, Fu Shao, Paul Bovbel, and Sean Feng from the Autonomous Systems and Biomechatronics Lab for their motivation and support during the period of my research. I would also like to thank Simon Han for his help with the system calibration and his assistance on the development of the phase error correction algorithm, and Roye Liu for his help and assistance on the measurement error correction algorithm. Moreover, I would also like to thank my friends, Masih Mahmoodi, Hay Azulay, Adam Le, Jeremy Chen, and Mengzhe Zhu from the CANRIMT program for their insight and inspiration. I would like to acknowledge NSERC for providing the funding to make the CANRIMT project and my research possible. Finally, I would like to thank my parents for their kind support and encouragement.

Table of Contents

Acknowledgements ... iii
Table of Contents ... iv
List of Tables ... vii
List of Figures ... viii
List of Appendices ... xiv

Chapter 1 Introduction
    Motivation
    Literature Review
        Non-contact optical 3D measurement systems
    Common difficulties of the structured light sensory system
    Problem Definition and Thesis Objective
    Proposed methodology and tasks
        Literature Review
        Design and development of the 3D sensory system
        Experiments for Macro-scale Applications
        Structured light System for Micro-Scale Applications
        Calibration experiment for Structured Light System for Micro-Scale Applications
        Conclusion and future work ... 9

Chapter 2 Calibration of Structured Light Sensory Systems
    Empirical calibration method
    Analytical calibration method
        Camera calibration techniques
        Projector calibration techniques
    Challenges of developing a SL system
    Chapter summary

Chapter 3 Structured light sensor system for Macro-scale application
    System setup
        Hardware development
        Software
        Triangulation configuration and operation
    System calibration
        Intensity calibration
        Random noise and thermal noise
        Analytical calibration method
    Chapter Summary

Chapter 4 Experiments for Macro-scale application
    Measurement Error
        Depth (Z-axis) Experiment
        Object Surface Measurement
        Error Identification
        System Stability
        Object Surface Effects on Measurements
    Error compensation methods
        Phase Error Compensation
        Optimise Illumination Angle Compensation
        Alternative Pattern Implementation and Testing
    Hardware Improvement Investigation
        High Resolution Camera Component
    Overall (X, Y, Z) measurement error compensation
    Chapter summary

Chapter 5 Development of a Structured Light Sensory System for Micro-scale Applications
    Overview of SL sensory system for micro-scale application
    Selection of Hardware Components and Optical Designs
        Projector Components
        Camera and Frame Grabber Components
        Camera-Projector Synchronization
    Error Identification and Compensation
        Projector Component
        Camera Component
        System Vibration
    SL Sensory System Calibration
        Background
        Proposed Analytical Calibration Parametric Model
        Proposed Analytical Calibration Technique
        Proposed Measurement Technique
    Chapter Summary

Chapter 6 Experiments for Micro-scale application
    System Setup
    Calibration Result
        Camera Parameters
        Projector Parameters
    Measurement
        Flat Plane Experiment
        Non-Uniform Object Surface Measurement
    Error Identification
        Camera Error
        Projector Error
    Chapter summary

Chapter 7 Conclusion
    Summary of Contribution
        Literature review
        Design, development, and calibration of a 3D Sensory System for Macro-scale Applications
        Design, development, and calibration of a 3D Sensory System for Micro-scale Applications
    Discussion of Future Work
    Final Concluding Statement

Reference
Appendix A - Digital image processing: Focus Fusion algorithms

List of Tables

Table 1: Input configuration parameters
Table 2: Output Parameters of the Optimal Configuration
Table 3: Pixels per checker ratio of our SL system calibration
Table 4: RMS error and standard deviation of the SL sensory system
Table 5: Measurement results of a certified metric step block
Table 6: Measurement results of a certified metric step block
Table 7: 3D measurement results using the different fringes
Table 8: Camera specifications
Table 9: Comparison of the RMS errors
Table 10: Transformation matrix
Table 11: Compensated results
Table 12: Specification of the microscope lens setup for the projector
Table 13: Camera specification
Table 14: Adimec Quartz Q-4A180 camera specification [98]
Table 15: Specification of the microscope lens setup for the camera
Table 16: Camera-projector synchronization parameters
Table 17: Sorbothane damping sheet
Table 18: Specification of fixed frequency grid distortion target
Table 19: Projector and camera parameters
Table 20: Camera intrinsic and extrinsic parameters
Table 21: Projector intrinsic and extrinsic parameters

List of Figures

Figure 1: Measurement Methods
Figure 2: Structured light technique
Figure 3: Experimental setup for common calibration method of using a reference plane [5]
Figure 4: Empirical calibration system schematic
Figure 5: Micro-phase shifting fringe projection system [51]
Figure 6: Fiber Image Techniques in Digital Stereomicroscopy [46]
Figure 7: Micro-phase shifting fringe projection system [33]
Figure 8: Microscopic Fringe Projection System [32]
Figure 9: Analytical calibration system schematic
Figure 10: Experimental set up for using modified DLT technique [64]
Figure 11: 2D dot array calibration object [64]
Figure 12: Experimental set up for using existing multi-step and non-linear optimization method [65]
Figure 13: Micro-fabricated square array calibration object [65]
Figure 14: Experimental set up using a highly accurate micromanipulator [66]
Figure 15: Micromanipulator calibration object [66]
Figure 16: System overview with the 3D set up and the measured part
Figure 17: 3D Sensory System Hardware
Figure 18: Projected sinusoidal phase-shifted patterns
Figure 19: 3D model of the SL system
Figure 20: 3D model of SL system with FOV and DOF
Figure 21: DMD modulation in the DLP light commander projector [87]
Figure 22: Camera-projector intensity response curves by varying projector illumination
Figure 23: Camera-projector intensity response curves by varying camera aperture
Figure 24: Camera-projector intensity response curve
Figure 25: Linearized intensity response
Figure 26: Linear intensity response error
Figure 27: The intensity response error at the minimum intensity
Figure 28: The intensity response error at the maximum intensity
Figure 29: SL system calibration setup
Figure 30: Camera calibration error
Figure 31: Projector calibration error
Figure 32: Calibration image
Figure 33: 3D point cloud of the flat plane
Figure 34: RMS errors of the SL sensory system in Z direction within 6 mm range
Figure 35: Metric step block
Figure 36: 3D surface reconstruction of metric step block measured with the optimal configuration
Figure 37: 3D point clouds of complex objects: (a) LEGO piece, (b) propeller, and (c) gear
Figure 38: Error deviation of the flat plane over time
Figure 39: 3D point clouds of the step block: (a) Metal surface, (b) Painted matted white surface
Figure 40: 3D point clouds of the step block: (a) Complex gear (ABS), (b) Barbie (PVC with dyed plastic strands)
Figure 41: Camera-projector intensity response curve and phase error [3]
Figure 42: Camera-projector intensity response curve and phase error
Figure 43: a) 3D reconstructed planar surface before phase correction, b) 3D reconstructed planar surface after phase correction, c) Center row cross-section of the uncompensated and compensated 3D reconstructed planar surfaces
Figure 44: 3D point cloud of the square block: (a) Vertical pattern, (b) Optimal fringe angle of 81 degrees, and (c) Pessimal fringe angle of -8 degrees
Figure 45: Cross-section profile of the square block measurements
Figure 46: Integer absolute phase value obtained using 3 & 5 fringes patterns
Figure 47: a) Absolute phase value with decimals obtained using 3 fringes & 5 fringes, b) Absolute phase value with decimals obtained using 7 fringes & 11 fringes
Figure 48: RMS errors comparison of the SL sensory system in Z direction within 6 mm range
Figure 49: (a) 2D circle plane object, (b) corresponding 3D world coordinates
Figure 50: Distribution of the reference points
Figure 51: Measurement error vector at z = 0 mm
Figure 52: Neighbor points for error compensation
Figure 53: 3D Sensory System Framework for Micro-Scale Application
Figure 54: DLP 0.55 XGA Series 450 DMD
Figure 55: Projected pattern demagnification
Figure 56: Adimec Quartz Q-4A180 [98]
Figure 57: 1" CMV4000 CMOS sensor [99]
Figure 58: Spectral response curve of Adimec Quartz Q-4A180 camera [98]
Figure 59: Airy disk 2D [38]
Figure 60: Airy disk 3D [38]
Figure 61: Camera-microscope lens setup
Figure 62: Silicon Software microEnable IV AD4-CL [101]
Figure 63: Silicon Software microEnable IV AD4-CL
Figure 64: Tilting effect of the light commander [87]
Figure 65: Non-telecentric architecture [87]
Figure 66: Telecentric architecture [87]
Figure 67: PC-E Micro NIKKOR 45mm f/2.8D
Figure 68: Schematic of the reverse-lens set up [102]
Figure 69: a) Microscope lens setup configuration 1, b) projected pattern
Figure 70: a) Microscope lens setup configuration 2, b) projected pattern
Figure 71: Scheimpflug effect schematics
Figure 72: Microscope lens setup with Scheimpflug effect corrected
Figure 73: Scheimpflug effect corrected schematics
Figure 74: Schematic of the reverse lens component set up
Figure 75: a) Image before flat field correction, b) Image after flat field correction
Figure 76: a) Horizontal vibration, b) Vertical vibration
Figure 77: Vibration experiment setup
Figure 78: Environment vibration
Figure 79: Vibration of the projector from fan
Figure 80: Vibration of the projector from fan 1 and fan
Figure 81: Foam structure for fan
Figure 82: Vibration of the projector compensate fan
Figure 83: a) Horizontal vibration of the damped system, b) Vertical vibration of the damped system
Figure 84: Microscope lens optics
Figure 85: Proposed Analytical Calibration Parametric Model
Figure 86: Radial Distortion [59]
Figure 87: Fixed frequency grid distortion target from Edmund Inc.
Figure 88: Camera calibration
Figure 89: Digital image processing technique
Figure 90: Projector Calibration
Figure 91: Mapping from camera to projector
Figure 92: a) Measurement volume without focus fusion, b) measurement volume with focus fusion
Figure 93: 3D Sensory System Hardware
Figure 94: Calibration Setup
Figure 95: Camera's view of reference points at Z=
Figure 96: Projected patterns
Figure 97: Projector's view of the reference points
Figure 98: 3D point cloud of the flat plane
Figure 99: Measurement error in z-axis within 0.2 mm range
Figure 100: Measurement error in x-axis within 0.2 mm range
Figure 101: Measurement error in y-axis within 0.2 mm range
Figure 102: Canadian dime with micro-scale features
Figure 103: a) Top, b) middle, and c) bottom measurement segments of the micro-scale features
Figure 104: a) Point cloud of the micro-scale feature 2, b) Surface of the micro-scale feature
Figure 105: a) Top, b) middle, and c) bottom measurement segments of the micro-scale features
Figure 106: a) Point cloud of the micro-scale feature 0, b) Surface of the micro-scale feature
Figure 107: RMS error at different focus level
Figure 108: Profile of the all-in-focus and partial focus pattern when focused at (a) left section of projection (x= pixels), (b) middle section of projection (x= pixels), (c) right section (x= pixels) of the projection

List of Appendices

Appendix A - Digital image processing: Focus Fusion algorithms

Chapter 1 Introduction

1.1 Motivation

Three-dimensional (3D) sensing refers to using measurement techniques to analyze a real-world object or environment, collecting data on its shape, and possibly its appearance, in order to construct a digital 3D geometry of that object or environment. Research development of the different sensory techniques has led to the production of a wide variety of commercially available sensors for different applications [1]. Recent research efforts have focused on the development of structured-light (SL) sensing techniques for the accurate 3D measurement of micro-scale parts in manufacturing applications. The focus has been on developing fast and precise manufacturing processes with the aid of 3D sensory systems. The collected 3D data from these sensory systems can be used for a wide variety of applications such as rapid prototyping [1], shape analysis [2][3], reverse engineering [4], quality control [5], and object detection and manipulation [6].

The different 3D sensing applications in manufacturing can be divided into macroscopic-scale and microscopic-scale applications based on the size of the measured part. The term macroscopic-scale (macro-scale) measurement commonly refers to measured parts that are in the range of meters to millimeters and visible with the naked eye [7]. Microscopic-scale (micro-scale) applications, on the other hand, commonly refer to measured parts that are sub-millimetre in size and require microscope lenses for the measurement process [8].

Contact-based coordinate measurement machines (CMMs) are often employed to perform macro-scale sensing applications such as part inspection, reverse engineering, and quality control [9]. Compared to conventional 3D sensory systems for macro-scale applications, sensory systems for micro-scale applications offer higher resolution, precision, and measurement accuracy [10]. In terms of 3D sensory systems for micro-scale applications, scanning probe microscopes (SPMs) have been commonly employed to assist MEMS manufacturing industries with tasks such as accurate sensing for micro-manipulation [11] and micro-assembly [12]. Among the SPM

techniques, atomic force microscopy (AFM) and scanning force microscopy (SFM) have been the most commonly used techniques because of their micrometre-to-nanometre accuracy [11], [13]. However, due to the nature of contact-based sensors, the measurement process is often time consuming and can potentially damage the part's surface [14]. Furthermore, contact-based sensors require precise movements and accurate feedback from the contact probe, and the measurement resolution depends on the mechanical design of the sensor [14]. Therefore, research efforts have focused on the development of accurate non-contact optical measurement devices to perform accurate 3D measurement in micro-scale manufacturing applications [15].

In the past decade, interest in microstructure manufacturing has increased due to the advancement of micro-manufacturing technology to produce more complex micromechanical structures and efficient electromechanical components with unrivalled performance [16]. The evolving manufacturing processes have dramatically pushed the boundaries of machined part size and accuracy [9]. However, the currently existing measurement methods are inefficient at assisting micro-scale manufacturing because of their long measurement times and the contact nature of the sensors [10]. Hence, a precise and efficient 3D optical sensor system is needed to achieve non-contact, high-speed, high-accuracy measurement. The objective of this thesis is to develop a novel non-contact 3D sensory system for surface profiling of 3D micro-scale parts with 0.1 µm measurement accuracy.

1.2 Literature Review

The pertinent literature on current 3D sensory techniques used in manufacturing, namely i) the focus measure technique, ii) the interferometry technique, and iii) the triangulation technique, is reviewed below.

Non-contact optical 3D measurement systems

The currently existing non-contact optical 3D sensory systems designed for micro-scale measurements can, based on their working principles, be divided into the following categories: focus measurement, interferometry, and triangulation [15], Figure 1.

Figure 1: Measurement Methods

Focus Measurement

Confocal microscopes are one of the popular methods for obtaining 3D information in micro-scale applications [17]. A confocal microscope consists of two conjugate planes, one where the part is placed and the other for the detector (CCD) sensor. A point light source illuminates the part, and the reflected light passes through a pinhole onto the sensor, which captures peak intensities when that section of the part is in focus. The pinhole eliminates out-of-focus light from specimen regions thicker than the focal plane and increases the image contrast. When combined with a vertical scanning process, the maximum intensity values in the captured images can be determined; by applying a focus measure technique and a depth-from-focus calculation to these intensity values, the part's depth information can then be recovered [18]. Despite the high resolution, the measurement process is time consuming. Furthermore, the measurement accuracy is highly dependent on the robustness of the focus measure algorithm, which is highly sensitive to image noise, and the spatial resolution is non-uniform throughout the measurement volume [17][19].
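To make the depth-from-focus principle concrete, the following is a minimal sketch of recovering a depth map from a vertically scanned focal stack. The variance-of-Laplacian focus measure, the window size, and the array shapes are illustrative choices only, not the algorithm of any particular confocal instrument.

```python
import numpy as np
import cv2

def depth_from_focus(stack, z_positions, ksize=9):
    """Estimate a per-pixel depth map from a vertical focal stack.

    stack       : list of grayscale images (same size), one per stage height
    z_positions : stage height (e.g. in micrometres) at which each image was taken
    ksize       : window size of the local focus measure
    """
    measures = []
    for img in stack:
        # Focus measure: local variance of the Laplacian response (sharper = larger)
        lap = cv2.Laplacian(img.astype(np.float32), cv2.CV_32F)
        mean = cv2.blur(lap, (ksize, ksize))
        sq_mean = cv2.blur(lap * lap, (ksize, ksize))
        measures.append(sq_mean - mean * mean)          # local variance map
    measures = np.stack(measures, axis=0)               # (N, H, W)

    # Each pixel is assigned the stage height of the slice where it is sharpest
    best = np.argmax(measures, axis=0)                  # (H, W) slice indices
    return np.asarray(z_positions)[best]                # (H, W) depth map
```

Any local sharpness measure could replace the Laplacian variance; the key idea is simply that each pixel takes the height of the focal slice in which it appears sharpest.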

Interferometry

Interferometry microscopes are among the most common measurement methods used in industrial applications to obtain high-precision examination of surface topography [20]. An interferometry microscope obtains height information by measuring the phase difference between a reference light and a measurement light [6]. The reference light is reflected from a plane mirror, while the measurement light is reflected from the measured part. The height information is converted from the phase difference using a mathematical triangulation relationship [20]. A phase unwrapping algorithm is used to resolve the ambiguities that arise when the phase value of the light is at its maximum or minimum [20]. Popular interferometry measurement techniques include Phase Shifting Interferometry (PSI) [6] and Vertical Scanning Interferometry (VSI) [21]. Despite the nanometre-level vertical measurement accuracy of interferometry microscopes, precise mechanical stages are required to scan the part across the planes to obtain the intensity information for each point in order to recreate the part surface. The whole measuring process is time consuming, and the scanning steps depend on the step sizes of the vertical and lateral stages. Furthermore, the small depth of field of the hardware lenses limits the technique to reconstructing relatively flat parts such as MEMS, with a scanning range of 9 µm [21].

Triangulation Techniques

Laser scanners use a laser source to emit a dot or a line onto the part of interest, which is observed by a detector. The detector is either a camera or a linear array of photodiodes that is sensitive to intensity changes. The relative position of the laser and the detector is used in triangulation to obtain the 3D profile of the part [22]. Compared to the conventional time-of-flight laser technique, this approach is superior in precision and speed, and more positional data can be collected regardless of the effects of ambient light [23]. In the past two decades, there has been enormous progress in the hardware development of non-contact sensory components [1]; as a result, a large number of companies have developed laser scanners for a wide variety of applications [22]. Dot and line laser scanners show great potential in many areas, including the construction industry, manufacturing processes, and reverse engineering. In the literature, 3D laser scanners have been developed with additional mechanical designs (i.e., piezoelectric rotating mirrors, linear stages) for faster scanning [24]. However, one of the main drawbacks of 3D laser scanners is the inefficient mechanical positioning system in the design, which causes the scanning process to be time consuming [25]. Another limitation of laser scanners is specular reflection: when a concentrated laser is used to measure manufacturing materials such as polished metal or glass, the reflected laser beam behaves in an unpredictable manner, causing incorrect 3D measurements [26]. Finally, when a high-power laser source is concentrated onto a micro-scale part, the laser power raises safety issues during operation.
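As a simple illustration of the point-laser triangulation geometry just described, the sketch below intersects the emitted laser ray with the camera's line of sight to recover depth. The baseline and angle values are hypothetical, and lens, pixel, and calibration details are ignored.

```python
import math

def laser_point_depth(baseline_mm, laser_angle_deg, camera_angle_deg):
    """Depth of the illuminated point, measured perpendicular to the baseline.

    baseline_mm      : distance between the camera and the laser source
    laser_angle_deg  : angle of the emitted laser ray, measured from the baseline
    camera_angle_deg : angle at which the camera sees the laser spot, from the baseline
    """
    tan_a = math.tan(math.radians(laser_angle_deg))
    tan_b = math.tan(math.radians(camera_angle_deg))
    # Intersection of the laser ray and the camera's viewing ray
    return baseline_mm * tan_a * tan_b / (tan_a + tan_b)

# Example: 100 mm baseline, laser at 60 degrees, spot observed at 45 degrees
print(laser_point_depth(100.0, 60.0, 45.0))   # ~63.4 mm
```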

Stereo vision sensors use two or more cameras to capture images of the part from two or more points of view and find the correspondences between the different images in order to reconstruct the 3D measurement through triangulation [27]. The triangulation uses the separation distance between the two viewpoints and their rotation and translation with respect to their optical axes to obtain the 3D coordinates of a point on the measured part. It is most feasible when the surfaces are textured or when easily distinguishable features are present [27]. Despite its simplicity and fast measurement speed, it faces correspondence problems when measuring complex shapes with only a few matching points between the images [27],[28].

Structured light sensors utilize a projector, one or more cameras, and the necessary lenses to capture the deformations of a known pattern projected onto a part of interest in order to obtain its 3D surface profile [1]. They provide fast measurement and eliminate the complexity of utilizing mechanical stages during measurement. Furthermore, by utilizing designed patterns in the projection, they are capable of overcoming the correspondence error by assigning unique codewords onto the measured part [29], [30], [31]. Owing to recent research developments in pattern design, SL sensory systems are capable of measuring a 3D volume in all dimensions; therefore, they are most applicable to complex part measurement in micro-scale applications [3], [32], [33].

Figure 2: Structured light technique (showing the baseline, camera, light source, and measured part)
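One widely used family of coded patterns, and the one adopted later in this thesis (cf. Figure 18), is sinusoidal phase-shifted fringes. The sketch below, with arbitrary resolution, fringe pitch, and step count, generates N phase-shifted patterns and recovers the wrapped phase from the captured images; phase unwrapping and triangulation are omitted.

```python
import numpy as np

def make_fringe_patterns(width=1024, height=768, pitch=32, steps=4):
    """Generate N sinusoidal fringe patterns, each shifted by 2*pi/N."""
    x = np.arange(width)
    patterns = []
    for n in range(steps):
        phase = 2 * np.pi * x / pitch + 2 * np.pi * n / steps
        row = 0.5 + 0.5 * np.cos(phase)                 # intensities in [0, 1]
        patterns.append(np.tile(row, (height, 1)))      # constant along each column
    return patterns

def wrapped_phase(images):
    """Recover the wrapped phase from N captured phase-shifted images."""
    n = len(images)
    shifts = 2 * np.pi * np.arange(n) / n
    num = sum(img * np.sin(s) for img, s in zip(images, shifts))
    den = sum(img * np.cos(s) for img, s in zip(images, shifts))
    return -np.arctan2(num, den)                        # wrapped to (-pi, pi]
```

The unique codeword assigned to each projector column is precisely this phase value (after unwrapping), which is what allows the camera-projector correspondence to be established without relying on surface texture.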

The high speed and active nature of the SL sensing technique make this type of sensing method suitable for manufacturing environments. In comparison to confocal microscopes, SL sensors are able to provide accurate coordinates in all dimensions, and furthermore, the active nature of the sensor makes it less sensitive to ambient light. When compared with interferometers and laser scanners, SL sensors are capable of obtaining 3D coordinate measurements of an area based on their field-of-view, rather than a single point or an array of points along a laser line. This eliminates the inefficient mechanical components and increases the measuring speed. Using the projected patterns, the process of obtaining the corresponding pixels in the sensor is simplified. Therefore, SL sensors can overcome the common limitation of stereo vision sensors when measuring parts with indistinguishable features, allowing them to measure parts with non-uniform surfaces or complex profiles.

Recently, SL techniques have been investigated for 3D surface profiling of parts in manufacturing inspection applications due to their fast measurement speed and sub-millimetre accuracy [34]. For such manufacturing applications, the sizes of the measured parts are often in the macro-scale range. Structured light 3D sensory systems for micro-scale measurement applications have not yet been explored. Optical 3D measurement for micro-scale applications normally involves additional steps beyond those of a sensory system designed for macro-scale applications. The camera, projector, and suitable microscope lenses need to be designed/selected according to the micro-scale part size. Parameters such as illumination time, illumination power, exposure time, and frame rate need to be adjusted according to the measurement specifications. Image processing techniques are required to filter any potential noise. Furthermore, a combination of calibration methods needs to be used to obtain the 3D information of the micro-scale part. To date, no SL system calibration method has been developed for measuring micro-scale parts that considers both the microscope lens parameters and the narrow depth-of-field (DOF) behaviour of these structured light systems [32].

1.3 Common difficulties of the structured light sensory system

In SL sensory systems for macro-scale 3D measuring applications, limitations due to the optical nature of the lenses (i.e., lens distortion [35]), light behaviour (i.e., ambient light effects [36], phase errors [36]), and component (camera and projector) noise (i.e., non-linear intensity response [37]) are common difficulties. At the micro-scale, the sensor not only inherits these common SL difficulties, but additional common microscope phenomena (i.e., diffraction [38], aberration [38], and narrow depth-of-field [39]) also occur in the microscope system. Furthermore, the currently existing calibration techniques used for SL systems for micro-scale applications utilize a simplified linear depth function to determine the depth profile of the part without considering the non-linear effects (i.e., distortions due to the camera and projector lenses, camera sensor noise, and the non-linear intensity values from the projector) at the micro scale. Hence, current calibration techniques are insufficient [32]. To date, no SL system calibration method has been developed to accurately measure micro-scale parts [32].

1.4 Problem Definition and Thesis Objective

In this thesis, a novel 3D sensor for high-accuracy sensing based on the SL technique is designed. For micro-scale measurements with SL systems, the calibration techniques proposed in the literature rely on linear calibration parameters and consider only the measurement errors in height. Hence, these techniques are limited to calibrating SL systems that measure relatively flat parts such as MEMS parts. Through the investigation of conventional structured light sensory systems, it was found that the existing techniques suffer greatly in calibration accuracy because of the optical phenomena of the microscope lenses [38]. The focus of this thesis is first to develop and calibrate a 3D SL sensory system for the macro-scale in order to understand the behaviour of the SL sensor. The final objective of the thesis is to design and propose a novel 3D sensory system prototype for micro-scale applications.

1.5 Proposed methodology and tasks

The objective of this thesis is to develop a novel non-contact 3D sensory system for surface profiling of 3D micro-scale parts with 0.1 µm measurement accuracy. The design, development, and calibration of the 3D sensory system are presented in the following chapters:

Literature Review

In Chapter 2, a review of the different calibration methods for SL systems for micro-scale measurement is presented as the motivation to develop an accurate calibration for the 3D measurement system for micro-scale applications. An overview of the empirical calibration methods and their applications in SL systems for micro-scale measurement, including their abilities and limitations, is provided. The chapter then focuses on the analytical calibration methods for SL systems for micro-scale measurements. An overview of different analytical calibration techniques for obtaining accurate intrinsic and extrinsic parameters is presented. Finally, a discussion of the limitations in the calibration procedure of SL systems for micro-scale applications is presented.

Design and development of the 3D sensory system

Chapter 3 presents the design and system setup of the proof-of-concept SL system for macro-scale applications. This chapter further discusses the hardware and software components of the proof-of-concept SL system. The analytical calibration method was implemented, and the camera model considering the influence of lens distortions was studied.

Experiments for Macro-scale Applications

In Chapter 4, extensive experiments are presented to evaluate the performance of the developed proof-of-concept SL system for macro-scale applications. This chapter further discusses the identification of the error sources of the proof-of-concept SL system and proposes compensation techniques for improving the measurement accuracy.

Structured Light System for Micro-Scale Applications

In Chapter 5, the overview of the design of the proposed SL sensory system for micro-scale applications is presented. This chapter further discusses the optical design of the system and

presents the system calibrations at the component and system levels to address the limitations of the SL sensory system for micro-scale applications. A new calibration method is proposed to address the limitations of the existing calibration methods for SL sensory systems for micro-scale applications. The proposed method includes a novel calibration model which explicitly considers the microscope lens parameters of the hardware components (camera and projector) as well as addresses the limitation of the narrow DOF behaviour of these lenses. The latter is achieved by incorporating an image focus fusion technique.

Calibration experiment for Structured Light System for Micro-Scale Applications

Chapter 6 demonstrates the implementation process of the proposed calibration method on the proof-of-concept SL system for micro-scale applications and finally presents the measurement results.

Conclusion and future work

Finally, Chapter 7 presents the concluding remarks, highlighting the contributions of the thesis and outlining future recommendations for SL systems for both macro-scale and micro-scale applications.

Chapter 2 Calibration of Structured Light Sensory Systems

Although often underestimated, precise calibration of a structured light system is the main prerequisite for successful and accurate 3D reconstruction of parts; it is therefore important to determine the proper calibration technique that can satisfy the required measurement accuracy for the scale of the measurement [40]. In the macro-scale domain, calibration methods have been well established to precisely measure the 3D geometry of parts; however, research in micro-scale part measurement has mainly focused on finding part height information using simple empirical calibration methods [32], [33]. In general, calibration techniques for SL sensory systems can be categorized into empirical calibration methods and analytical calibration methods [41]. The following subsections discuss the details of both of these SL calibration methods.

2.1 Empirical calibration method

Empirical calibration is an approach where a mathematical function is obtained based on the relationship between the captured 2D images and the 3D world coordinates to describe the triangulation relationship of the SL system, without the need to model the hardware components and lens phenomena [33][30]. Calibration is conducted using empirical data to calculate the profile-to-reference differences in the phase-shift mappings of the surface deformation, relying on the obtained phase map together with either a translation stage or a calibration object [33][32][42][5]. To collect the empirical data of the system, the most common approach is to place a reference plane at a reference position and move the plane to different depth locations within the sensor's working range, sampling the calibration volume at regular intervals, Figure 3.

Figure 3: Experimental setup for common calibration method of using a reference plane [5]

From the collected empirical data, for every pixel position there are several projected intensity values corresponding to each plane position [43][44]. By relating the captured phase values to the relative depths of the planes for the same pixel position, a coefficient known as the phase-to-depth conversion constant, K, can be computed from the empirical data. K is further incorporated into a mathematical function that describes the triangulation relationship of the hardware configuration in order to determine the depth profile of the overall part. Different calibration approaches have been proposed to determine K. These approaches generally use linear interpolation [43][45][46], least squares [47], or higher-order polynomial fitting [32][48] to determine K from the collected empirical data. A simplified case of the empirical method is demonstrated below to build the mathematical function for obtaining the depth profile of a part [49]. The schematic diagram of the phase-to-depth relationship is illustrated in Figure 4.

Figure 4: Empirical calibration system schematic

Points P and C are the centres of the optical axes of the DLP projector and the CMOS camera, respectively. The optical axes of the projector and the camera coincide at point O. After the system has been set up, a flat reference plane is measured and its phase map is used as the reference for subsequent measurements. The depth of an object surface is measured relative to this plane. From the point of view of the DMD, the point A on the object surface has the same phase value as point $P_2$ on the reference plane, while on the CMOS, point A on the object surface and point $C_2$ on the reference plane are imaged on the same pixel. By subtracting the reference phase map from the object phase map, we can obtain the phase difference at that specific pixel. P and C are the nodal points of the DMD and CMOS, respectively, with distance W between them and distance L to the reference plane. Using the Side Splitting Theorem, triangles $PCA$ and $P_2C_2A$ are similar triangles; thus the depth $\overline{AA_2}$ of point A on the object surface relative to the reference plane can be related to the distance between points $C_2$ and $P_2$ through the following triangulation relationship:

$$\frac{\overline{AA_2}}{L} = \frac{\overline{C_2P_2}}{W + \overline{C_2P_2}} \qquad [49] \qquad (1)$$

For a general SL system where $W \gg \overline{C_2P_2}$ during measurement, the z coordinates can be determined through the following triangulation relationship:

$$z(x, y) = \overline{AA_2} \approx \frac{L}{W}\,\overline{C_2P_2} = \frac{Lp}{2\pi W}\,\phi_{C_2P_2} = K\,\phi_{C_2P_2} \qquad [49] \qquad (2)$$

where p is the fringe pitch on the reference plane, L is the distance between the sensor and the part, K is the height conversion constant, and $\phi_{C_2P_2}$ is the phase difference between the measured surface and the reference plane at pixel (x, y). The height conversion constant, K, is derived using a known 3D calibration object, or a 2D object with a linear stage. K is a constant directly relating the projected phase value to the z coordinates. From equation (2), a mathematical function describing the proportional relationship between the phase map and the surface depth can be derived. Furthermore, to obtain the x and y coordinates of the part, one can assume the x and y coordinate values are linearly proportional to the real coordinates of the object. The size of the object is determined using simple conversion constants $K_X$, $K_Y$ that relate the measurement area to the pixel size in the x and y directions of the sensor. With the conversion constants (K, $K_X$, $K_Y$), the 3D surface profile of a part can be measured and rebuilt through this mathematical function.
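A minimal sketch of this empirical procedure is given below: a per-pixel linear fit of depth against phase difference from reference-plane scans (a slight generalization of the single constant K in equation (2), adding an offset term), followed by conversion of a measured phase map into (x, y, z) coordinates. The array shapes and the lateral constants k_x, k_y are assumed inputs, not values from this thesis.

```python
import numpy as np

def fit_phase_to_depth(phase_maps, plane_depths):
    """Per-pixel linear fit z = K*phi + b from reference-plane scans.

    phase_maps   : (N, H, W) phase-difference maps of a flat reference plane
                   captured at N known stage depths
    plane_depths : (N,) known depths of the reference plane
    """
    phi = np.asarray(phase_maps, dtype=float)          # (N, H, W)
    z = np.asarray(plane_depths, dtype=float)
    n = len(z)

    s_phi = phi.sum(axis=0)                            # per-pixel sums
    s_pp = (phi ** 2).sum(axis=0)
    s_pz = (phi * z[:, None, None]).sum(axis=0)
    s_z = z.sum()

    # Ordinary least squares over the N reference planes, per pixel
    K = (n * s_pz - s_phi * s_z) / (n * s_pp - s_phi ** 2)
    b = (s_z - K * s_phi) / n
    return K, b                                        # each (H, W)

def phase_to_xyz(delta_phi, K, b, k_x, k_y):
    """Convert a measured phase-difference map into x, y, z coordinates."""
    h, w = delta_phi.shape
    z = K * delta_phi + b                              # depth, cf. equation (2)
    x = k_x * np.tile(np.arange(w), (h, 1))            # lateral scaling constants
    y = k_y * np.tile(np.arange(h)[:, None], (1, w))
    return x, y, z
```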

To calibrate an SL system for micro-scale applications, microscope characteristics need to be considered. The SL system for micro-scale applications is equipped with corresponding microscope lenses to demagnify the projected patterns and to magnify the captured images. However, with the microscope lenses, the narrow depth-of-field optical behaviour of the lenses limits the measurement range over which the system can capture in-focus images. Features obtained outside the focal range of the system will be blurred, which introduces image noise and measurement errors [3]. Therefore, the narrow depth-of-field behaviour of the lenses constrains the calibration by limiting the range over which the calibration volume can be sampled, and further reduces the number of data points the system can use for calibration. To be able to perform calibration on such narrow depth-of-field systems, empirical calibration models have been commonly applied. Empirical calibration models are generally not limited by system configuration and alignment [50], and the calibration models can be easily derived from simple triangulation. Furthermore, empirical calibration can provide an estimate of the phase-to-depth conversion with only a small sample of empirical data [46][51], without any modeling of the lenses [33][30]. Therefore, empirical calibration methods have been commonly applied for micro-domain applications. From the literature, all current calibration techniques developed for SL sensory systems for micro-scale measurement utilize empirical calibrations, where a mathematical function is defined to relate the projected intensity to a depth value and further determine the depth profile of the part.

In [51], a microscopic 3D shape measurement system, Figure 5, based on digital fringe projection was developed for non-contact real-time inspection of electronic components. A Digital Micro-mirror Device (DMD) along with its illumination optics, and a CCD camera, were integrated into a stereomicroscope. The projected fringe patterns were deformed by the part surface and recorded by the CCD camera. A calibration standard with a known step height was used to determine the phase-to-height conversion constant and to verify the accuracy of this model. A mathematical function using a linear relationship between the part height and the phase of the grating pitch was proposed to obtain the relative height variation on an object surface. The proposed system was shown to achieve a measuring resolution of 2-3 µm. However, the mathematical function is oversimplified and requires additional error compensation techniques. The narrow depth-of-field of the microscope lens limits the system to measuring only relatively flat objects. Furthermore, the hardware restrictions of the stereomicroscope limit any future development to improve system accuracy.

Figure 5: Micro-phase shifting fringe projection system [51]

In [46], a stereomicroscopy system, Figure 6, utilizing a custom image fiber bundle, a commercial DLP projector, and a CCD camera was proposed for micro-3D inspection.

The custom image fiber bundle in the proposed system provides flexibility in hardware configuration. The designed sinusoidal wave patterns were projected into the stereomicroscope through the fiber bundle and onto a micro-component surface. The deformed fringe patterns from the micro-component surface were then captured by the CCD chip embedded in the image fiber bundle. A simple calibration procedure relating the calibration gauge height to the pattern phases was proposed, and four-step phase shifting arithmetic was used to reconstruct the 3D contour of the micro-components. The proposed system was shown to achieve a measuring resolution of 10 µm. However, the system was limited to measuring only relatively flat objects.

Figure 6: Fiber Image Techniques in Digital Stereomicroscopy [46]

In [33], a micro-phase shifting fringe projection system, Figure 7, was developed for obtaining the 3D surface profile and deformation measurement of micro-components. The proposed system uses a custom micro phase shifting fringe projector generating fringes onto a micro-component surface, which are captured with a CCD camera. A simple procedure relating the part depth to the phase of the grating pitch was proposed, which enables the calibration of the optical set-up for subsequent quantitative measurement of micro-components of unknown shapes. The proposed system was shown to achieve a µm-level measuring resolution. However, using the simple procedure relating the part depth to the phase, for objects with complex variations the rapid changes in the surface profile will lead to unresolved depth information and large measurement errors.

Figure 7: Micro-phase shifting fringe projection system [33]

Furthermore, in Ref. [32], a novel microscopic SL system, Figure 8, was developed using a commercial projector, a CCD camera, and the corresponding microscope lenses to measure the 3D profile of micro-components. A five-step phase shifting method and a novel phase unwrapping method were used to obtain the phase values for calibration. The system was calibrated by moving a calibration board and calculating the height of each pixel using a phase-to-height polynomial fitting function. The relationship between the height and phase was obtained through system calibration so that the object profile could be measured. The proposed system was able to achieve a µm-level measuring resolution; however, the mathematical function was unable to accurately model the behaviour of the system, and hence the noise level in the measurement results was high.

Figure 8: Microscopic Fringe Projection System [32]

In general, the empirical calibrations place no constraints on system configuration and alignment, and the implementation of the model and algorithms is relatively easy. The calibration is tolerant to hard-to-model optical aberrations [41]. Therefore, empirical calibration methods have been widely applied to SL systems. Despite these advantages, limitations exist in the use of empirical calibrations that restrict their applicability to high-accuracy 3D systems. Firstly, empirical calibration methods are only capable of obtaining the relative height variation on a part surface as opposed to absolute coordinates. Secondly, the mathematical model assumes the phase-to-depth constant, K, to be linear throughout the measurement volume, hence ignoring the non-linear behaviour of the components (e.g., projector, camera). These non-linear effects include the distortions due to the camera and projector lenses, camera sensor noise, and the non-linear intensity values from the projector. When applying the linear mathematical function to measure parts with large complex shape variations, the aforementioned non-linear effects will introduce measurement errors in the depth profiles of the parts [33]. For objects with small depth, all assumptions can be sufficiently satisfied; hence, current calibration techniques are only appropriate for relatively flat, simple parts [32]. Thirdly, the empirical calibration method lacks modularization of the system parameters [50]. All parameters are coupled and implicitly expressed in a single mathematical function. As a result, each time a system parameter is changed, due to a change of optical components, the entire calibration procedure has to be re-performed, which is time-consuming. Lastly, the non-linear effects vary depending on the lenses of the components; without modeling the lens characteristics, the empirical calibration method will not be applicable to different hardware components [50]. In order to develop an SL system that can accurately measure micro-scale 3D parts with large complex variations in their geometrical shapes, an analytical measurement method using a model-based calibration approach is needed.

2.2 Analytical calibration method

An analytical calibration method requires modelling of each hardware component (camera and projector) to determine the 3D information of a part through the triangulation relationship of the hardware components [52]. Analytical calibration is the process of modelling and solving the components' intrinsic parameters (focal length, principal point, and lens distortion) and extrinsic parameters (relative orientation and position of the camera and projector) [53].

Accurate model parameters (intrinsic and extrinsic) need to be carefully defined for the parametric models of the camera and projector systems; furthermore, these model parameters need to be accurately estimated in order to perform accurate three-dimensional measurements of a part through triangulation [52]. The mathematical model of the analytical methods used for describing SL systems consists of two parts, a camera model and a projector model [52]. The camera model describes the geometric relationship between the 3D shapes of objects and their 2D images on the camera sensor. The projector model describes the geometric relationship between a 2D projection pattern from the sensor and the resulting light intensity distribution in 3D space. In both models, the characteristics (intrinsic parameters) of the hardware components are determined from the relationship between the reference points in 3D space and the sensor's 2D images, considering the non-linear behaviour of rays caused by lens distortion [54]. Hence, if the reference points in 3D space and the corresponding 2D sensor image coordinates are both known, the intrinsic parameters can be solved. Furthermore, a technique called pixel-to-pixel correspondence of the hardware components is generally applied to relate the two components. The pixel-to-pixel correspondence technique relates the projected pixel from the projector to the corresponding pixel location on the camera sensor by assigning a code word to each projected pixel through the projected patterns [55]. With known intrinsic parameters and the pixel-to-pixel correspondence of the hardware components, the relative pose of the hardware components with respect to each other (extrinsic parameters) can then be determined.

The schematic of the analytical model for an SL system is shown in Figure 9. The three coordinate systems are defined as follows: the camera coordinate system, the projector coordinate system, and the world coordinate system. The origin of the camera coordinate system is fixed at the optical centre of the camera lens. The $Z_c$ axis is the optical axis of the camera lens; the $X_c$ axis is parallel to the $u_c$ axis of the image sensor, and the $Y_c$ axis is parallel to the $v_c$ axis of the image sensor. The origin of the projector coordinate system is fixed at the optical centre of the projector lens. The $Z_p$ axis is the optical axis of the lens; the $X_p$ axis is parallel to the $u_p$ axis of the image sensor, and the $Y_p$ axis is parallel to the $v_p$ axis of the image sensor.

Figure 9: Analytical calibration system schematic

With the known intrinsic and extrinsic parameters, the 3D world coordinates of a part can then be determined through the triangulation relationship between the components [56]. Many works have been proposed to model the different camera lens distortion behaviours to improve the accuracy of the intrinsic parameters [35]. Different novel methods have also been proposed to model the projector's intrinsic and extrinsic parameters [57][55][52]. Furthermore, calibration techniques have also been investigated to solve the intrinsic and extrinsic parameters of the camera and projector models using iterative methods to improve parameter accuracy [58]. In order to obtain the most accurate intrinsic and extrinsic parameters, the camera and projector parameters and lens behaviour must be carefully considered and solved.
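Once the intrinsic and extrinsic parameters of both components are known and a camera-projector correspondence has been decoded, a 3D point can be recovered by intersecting the two rays. The following is a minimal linear (DLT-style) triangulation sketch under ideal, distortion-free projection matrices; the helper names are illustrative, and in a real system the projector often contributes only the fringe-direction coordinate while distortion must first be removed from the pixel coordinates.

```python
import numpy as np

def projection_matrix(K, R, t):
    """3x4 projection matrix P = K [R | t] for a camera or projector."""
    return K @ np.hstack([R, t.reshape(3, 1)])

def triangulate(P_cam, P_proj, uv_cam, uv_proj):
    """Linear (DLT-style) triangulation of one 3D point.

    P_cam, P_proj : 3x4 projection matrices of the calibrated camera and projector
    uv_cam        : (u, v) pixel of the point in the camera image
    uv_proj       : (u, v) location the decoded pattern assigns in the projector plane
    """
    rows = []
    for P, (u, v) in ((P_cam, uv_cam), (P_proj, uv_proj)):
        rows.append(u * P[2] - P[0])       # u * (p3 . X) = p1 . X
        rows.append(v * P[2] - P[1])       # v * (p3 . X) = p2 . X
    A = np.asarray(rows)
    _, _, vt = np.linalg.svd(A)            # least-squares null vector of A
    X = vt[-1]
    return X[:3] / X[3]                    # homogeneous -> Euclidean world point
```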

Camera calibration techniques

In SL systems, camera calibration is essential for building a precise camera model to relate 3D world coordinates to 2D images, which is then used in triangulation for 3D measuring. The most commonly used camera model is the pinhole camera model [59]. The pinhole model describes the formation of images as a perspective projection from 3D space to the 2D image plane. To better describe the camera, common advanced models also model the influence of lens distortions on image formation. A complete camera model defines parameters that best describe the optical and geometrical features of the camera's lens and sensor. These parameters defined by the camera model cannot be measured directly and require calibration techniques to estimate them. Different methods from the photogrammetry community can be used to solve for the parameters: linear transformation [15][46], non-linear optimization, and multi-step methods [54].

Linear transformation: In this method, the intrinsic parameters are related to each other and presented as intermediate parameters. A linear least-squares method with a closed-form solution is used to obtain the intermediate parameter matrix by solving linear equations with known 3D world reference coordinates and 2D image coordinates [56]. The intrinsic parameters are then determined once the intermediate parameters are solved [35]. This type of technique is fast because there is no iterative optimization. However, when only the linear transformation technique is employed, non-linear effects such as lens distortions cannot be solved for. Therefore, this technique is weak in the presence of non-linear camera behaviour [59].

Non-linear optimization: A model is considered non-linear when any lens imperfection or distortion is considered in the camera model [59]. In this method, the parameters of the model are searched for using an iterative algorithm whose objective is to minimize residual errors in a defined equation (i.e., minimize the error between the modelled image points and the actual image points) [59]. Many types of non-linear lens distortion can be incorporated in this technique. Furthermore, accurate estimation of the model parameters can be achieved if the imaging model is precise and global convergence is reached in the optimization iterations [60]. However, since the algorithm is iterative, the optimization procedure requires accurate initial inputs to guarantee convergence [60]. In addition, the optimization can be unstable if the iteration procedure is poorly designed; harmful interactions between non-linear and linear parameters can lead to divergence or to a false solution [60].

Multi-step: In this type of method, direct solutions for some of the intrinsic parameters are computed by using the relationships between the parameters. In the second step, all the remaining parameters are evaluated by non-linear optimization (e.g., [35][54][61]). The main advantage of this type of method is that most of the model parameters can initially be derived from a closed-form solution, and the number of parameters to be estimated through iterations is relatively small. With respect to non-linear optimization, the multi-step method greatly reduces the number of iterations; hence, the calibration process is more efficient [59]. Furthermore, the iterations are nearly guaranteed to converge due to the initial parameters obtained from the closed-form solutions [54]. This method combines the advantages of the two previous methods described above.
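As an illustration of such a multi-step approach, the sketch below uses OpenCV's planar-target calibration, which performs a closed-form initialization followed by non-linear reprojection-error refinement, in the spirit of [54] and [56]. The checkerboard geometry, square size, and file names are placeholders; this is not the calibration procedure developed in this thesis.

```python
import numpy as np
import cv2

def calibrate_from_checkerboard(image_paths, board_size=(9, 6), square_mm=2.0):
    """Estimate intrinsics, distortion, and per-view poses from planar views."""
    # 3D reference points of the planar target (Z = 0 plane), in millimetres
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square_mm

    obj_points, img_points, size = [], [], None
    for path in image_paths:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, board_size)
        if not found:
            continue
        # Refine corner locations to sub-pixel accuracy
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)
        size = gray.shape[::-1]

    # Closed-form initialization followed by non-linear minimization of reprojection error
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, size, None, None)
    return rms, K, dist, rvecs, tvecs

# Usage (illustrative file names):
# rms, K, dist, rvecs, tvecs = calibrate_from_checkerboard(["calib_01.png", "calib_02.png"])
```

The returned values are the RMS reprojection error, the intrinsic matrix, the distortion coefficients, and the per-view extrinsic parameters (rotation and translation vectors).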

Various popular calibration approaches that use a single one or a combination of the above methods have been investigated. In [62], a direct linear transformation (DLT) technique uses the linear transformation method to solve the camera intrinsic and extrinsic parameters by directly relating the 3D world coordinates to the 2D images. A 3D calibration object with known reference points is used to provide the 3D world coordinates. In order to improve the accuracy of this linear method, as many reference points as possible within the calibration volume should be included [62]. However, any lens distortion can degrade the accuracy of the parameters computed from the DLT technique [59]. In [54], a calibration technique was proposed using a combination of the multi-step and non-linear optimization methods. A well-aligned 2D calibration object was placed on a translation stage and moved along the stage. A camera is used to capture images of the 2D calibration object at different positions. By analyzing the captured images, the camera's external position and orientation relative to the object reference coordinate system, as well as the lens radial distortion, are obtained [54]. A closed-form solution is used to estimate the initial intrinsic parameters, then the non-linear optimization method is used to compute the rest of the parameters based on the best fit between the observed image points and the modelled image points [54]. In [61], a multi-step camera calibration technique was proposed that is an extension of the aforementioned two-step method. Similar to [62], a closed-form solution is used to estimate the initial intrinsic parameters, and then all parameters are optimized in a non-linear iterative method [61]. Additionally, a third step is implemented to correct the distorted image coordinates with an empirical inverse model that accurately compensates for lens distortion [61]. Compared to [62], the method proposed in [61] incorporates an additional radial and decentering distortion model to describe the lens distortion [61]. Recently, in [56], a flexible camera calibration technique which consists of a closed-form solution followed by a non-linear minimization based on the maximum likelihood criterion was presented. The flexible camera calibration technique uses the

relationship of homography between the model plane and its image to solve for the camera parameters. Furthermore, the radial and tangential lens distortions are modeled [56]. This method only requires the camera to observe a planar pattern from a few (at least two) different orientations in 3D space to relate 3D world coordinates to 2D images [56]. The motion of the plane does not need to be known; hence, compared with classical techniques which use expensive equipment such as two or three orthogonal planes, this technique has considerable flexibility [56].

In terms of camera calibration at the different measurement scales, there are similarities between the analytical calibration methods in the macro and micro domains. In both domains, the sensory systems follow the fundamental optical principles of light behaviour [38], and the modelling of the sensory systems in both domains deals with the mathematical approximation of the physical and optical behaviour of the camera and projector by using a set of parameters [59]. Hence, micro-scale camera calibration has much in common with traditional camera calibration in the macro domain. Therefore, the aforementioned camera calibration techniques can be extended to micro-domain sensory system calibration. However, in the micro domain, optical microscope lenses have unique characteristics with respect to macro camera lenses [63]. General optical microscope lenses consist of a tube-lens system, an objective lens, and an imaging sensor. To build a precise camera model, the microscope parameters (i.e., exact focal length, object plane to front focal plane distance, the optical tube length, and the lens magnification) must be considered within the camera calibration model. Recent advancements in the field of MEMS have resulted in camera calibration techniques for micro-domain applications. Parametric camera models for microscopes have been developed and implemented for micro-scale calibration for 2D measurements by extending or modifying the existing camera calibration techniques (i.e., the DLT method [64], Tsai's multi-step calibration method [65], and Zhang's calibration method [66]). In [64], the characteristics of the microscope-camera were investigated, and a fast and simple calibration technique was developed. An optical microscope camera model was built based on the pinhole camera model with additional microscope parameters (microscope focal length, microscope magnification). The calibration technique was based on the DLT technique considering the relationship between

37 the digital image coordinate system, Figure 10, and the calibration sample coordinate system on a 2D dot array calibration object, Figure 11. The calibration takes the digital image coordinates and the calibration sample coordinates of the reference points to calculate the microscope model parameters based on the least-squares method. With the proposed method, it was shown that the 10X camera-microscope system was able to achieve a 2D measurement with average error of 0.68 um. However this calibration does not consider the lens distortion in the calibration model, and hence is oversimplified. Figure 10. Experimental set up for using modified DLT technique [64] Figure 11. 2D dot array calibration object [64] Similarly, in [65], the existing multi-step and non-linear optimization method camera calibration technique [54] was extended to include the unique parameters of an optical microscope. Calibration was performed in two steps. The first step determines all extrinsic parameters through a closed-form solution; the second step calculates all intrinsic parameters by a nonlinear optimization procedure. By using a micro-fabricated square array calibration object, Figure 13 and the parametric camera calibration model, it was shown that the 10X camera-microscope system, Figure 12 is able to achieve a 2D measurement accuracy of 0.67 um. However, the method simplifies the entire calibration model by assuming small angle approximations to the pitch and yaw angles of the rotation matrix. When these angles become larger, the error due to the linear approximation becomes more significant. 23

38 Figure 12: Experimental set up for using existing multi-step and non-linear optimization method [65] Figure 13: Micro-fabricated square array calibration object [65] Furthermore, in [66], a camera-microscope calibration method using a highly accurate micromanipulator, shown in Figure 14, to provide 3D reference coordinates was proposed. The proposed parametric camera calibration model was based on a modified form of camera calibration technique in [56]. It provides a larger calibration volume by solving for the intrinsic parameters using the concept of the homography transformation. With this calibration method, the system was shown to achieve a 3D measurement accuracy of 0.23µm. However, the calibration range is limited by the narrow depth-of-field of the microscope lens, hence, the calibration only uses a single image; hence, it reduces the number of intrinsic parameters that can be solved. Figure 14: Experimental set up using a highly accurate micromanipulator [66] Figure 15: Micromanipulator calibration object [66] From the literature, it is clear that the more recent models for the camera microscope calibration mentioned above are based the adaptive forms of the popular calibration techniques in the macro-domain and modify according to the characteristics of camera-based microscope system 24

39 for 2D measurements at the micro-scale [66][65][64]. However, the existing models and techniques for camera-microscope calibration have limitations. In microscope-camera systems, the optical microscope has a narrow depth-of-field, a characteristic that is quite different from normal camera lenses in the macro domain. The narrow depth-of-field of the microscope objectives poses a constraint on 3D SL system calibration for micro-scale applications [32][33][46][51]. The small focus distance implied by the narrow depth-of-field means that reference planes located at depths outside of the focus distance cannot be used during the calibration. This constraint limits the ability of the calibration technique to accurately obtain the intrinsic and extrinsic parameters by relating 3D world coordinates to 2D images. To overcome this limitation, the above-mentioned parametric camera models strictly consider only a single plane placed parallel to the image sensor, thereby sacrificing measuring range [66]. Hence, the narrow depth-of-field limits the measuring application to flat parts only [66]. In this dissertation, we aim to develop a camera model for micro-scale applications that considers the model parameters (intrinsic and extrinsic) of the microscope lenses. To accurately calibrate the camera for micro-scale applications, a calibration technique capable of dealing with the narrow depth-of-field behaviour of the microscope lenses while obtaining the intrinsic and extrinsic parameters of the camera is necessary.

Projector calibration techniques

In SL systems, projector calibration is another essential component in building a precise projector model that relates 3D world coordinates to 2D projector images, which is then used in triangulation for 3D measuring. The analytical calibration of the projector can be categorized into two groups of methods. In the first method, the world coordinates of the projection points are obtained with the calibrated camera and are further used to calibrate the projector [3][67][68]. In this method, target points are generated by projecting specifically designed patterns onto planes of different heights and capturing them with a calibrated camera [3]. With the known intrinsic and extrinsic parameters of the camera and the projected target points, the projector's intrinsic and extrinsic parameters can then be computed [3]. An additional iterative approach can be implemented to optimize the accuracy of the projector's intrinsic and extrinsic parameters by comparing the projected points with the modelled projected points [69]. This method proves to

40 be simple and convenient; however, the calibration accuracy of the projector relies on that of the camera [70]. The camera calibration error therefore unavoidably affects the reliability of the projector reference data and degrades the accuracy of the projector calibration. The best achievable projector calibration accuracy is generally an order of magnitude lower than that of the camera calibration [70]. In the second method, the camera and projector are calibrated separately. Code words are assigned to the projector pixels to establish the pixel-to-pixel correspondence between the projector sensor and the camera sensor. With the pixel-to-pixel correspondence, the image coordinates of the 3D calibration points can be mapped from the camera sensor to the projector sensor; hence, the projector is treated as a reverse camera and can be calibrated in the same manner as the camera by using the calibration points in the projector's view [52][71][72][73][70]. The second method has been widely adopted in SL system modelling since it depends less on the camera calibration and can achieve higher calibration accuracy for both components (camera and projector). Furthermore, in the second method, different codification techniques have been proposed to improve the correspondence accuracy between the camera and projector and thereby increase the calibration accuracy [29]. The use of high-resolution gray code [30], noise-robust phase shifting [36], and gray code combined with phase shifting [1][70][69] has proved to provide robust, high-resolution correspondence matching [74]. In terms of projector calibration in the micro-domain, to date, no research has been published. In this dissertation, we aim to develop a projector model for micro-scale applications that considers the model parameters (intrinsic and extrinsic) of the microscope lenses. To accurately calibrate the projector for micro-scale applications, a calibration method independent of the camera calibration is necessary. Furthermore, a robust codification method is required to establish the pixel-to-pixel correspondence between the camera and projector, so that the projector can be calibrated as a reverse camera. Finally, a calibration technique capable of dealing with the narrow depth-of-field behaviour of the microscope lenses while obtaining the intrinsic and extrinsic parameters of the projector is necessary.
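As a concrete illustration of the reverse-camera idea described above, the sketch below maps checkerboard corners detected in the camera image into projector pixel coordinates using the absolute phase maps of two orthogonal fringe sets. It is a minimal Python/NumPy sketch written for this section; the function and variable names (including the fringe pitch parameter) are illustrative assumptions, not identifiers from the thesis or any specific toolbox.

```python
import numpy as np

def camera_to_projector_coords(corners_cam, phase_abs_v, phase_abs_h,
                               fringe_pitch_px):
    """Map sub-pixel camera points to projector pixel coordinates.

    corners_cam    : (N, 2) array of (x, y) corner locations in the camera image
    phase_abs_v    : absolute phase map of the vertical-fringe set (H x W)
    phase_abs_h    : absolute phase map of the horizontal-fringe set (H x W)
    fringe_pitch_px: projector pixels spanned by one 2*pi fringe period
    Assumes the corners lie away from the image border.
    """
    corners_proj = np.zeros_like(corners_cam, dtype=float)
    for i, (xc, yc) in enumerate(corners_cam):
        x0, y0 = int(np.floor(xc)), int(np.floor(yc))
        dx, dy = xc - x0, yc - y0

        def interp(phi):
            # Bilinear interpolation of the phase map at the sub-pixel corner.
            return ((1 - dx) * (1 - dy) * phi[y0, x0] +
                    dx * (1 - dy) * phi[y0, x0 + 1] +
                    (1 - dx) * dy * phi[y0 + 1, x0] +
                    dx * dy * phi[y0 + 1, x0 + 1])

        # Vertical fringes encode the projector column, horizontal fringes the row.
        corners_proj[i, 0] = interp(phase_abs_v) / (2 * np.pi) * fringe_pitch_px
        corners_proj[i, 1] = interp(phase_abs_h) / (2 * np.pi) * fringe_pitch_px
    return corners_proj
```

With the corners expressed in projector coordinates, the projector can be handed to the same planar calibration routine as the camera.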

41 2.3. Challenges of developing a SL system

Macro-scale applications

In terms of structured light sensory systems for macro-scale applications, common difficulties exist due to the optical nature of the lenses and the light behaviour of the structured light sensory system. Lens distortion is one of these common difficulties. Lens distortions such as curvature distortion, decentering distortion, and thin prism distortion in the projector and camera lenses introduce false positions of the image points in the image plane, resulting in image coordinate errors [35]. Secondly, the presence of ambient light often degrades the coded pattern projected by the projector and introduces errors in the deformations of the projected intensity, which decreases measurement accuracy [36]. Thirdly, the non-linear behaviour of the projector intensity and the high image noise from the hardware components (camera, projector) often degrade the image quality [37]. Finally, the accuracy of the sensory system calibration is crucial for applications that involve quantitative measurements. Since the calibration parameters estimate the geometrical relation of the projector and camera to the part, it is critical to employ proper calibration methods for the structured light system [52].

Micro-scale applications

In terms of structured light systems for micro-scale applications, not only does the sensor inherit the difficulties of sensory systems designed for macro-scale applications, but different phenomena in the microscope system cause additional difficulties. Not only is lens distortion present in the microscope lenses, but optical behaviours such as diffraction, aberration, and color crosstalk are common phenomena that degrade the image quality. Firstly, in optical imaging systems, the maximum attainable resolution is fundamentally limited by the diffraction limit, where light begins to disperse when passing through the small opening of the lens numerical aperture. In order to achieve the highest resolvable accuracy for an SL system using visible light, an optical system able to produce images with resolution as good as the theoretical limit is needed. The camera must be able to resolve the finest sub-micrometer detail that is being projected. Furthermore, the hardware specification needs to be considered for a

42 high camera-to-projector pixel ratio in order to obtain a high sampling rate and ensure correct camera-to-projector correspondence [37][75]. Secondly, any vibration from the measuring environment or the sensory system itself will be a prominent source of error in micro-scale measurements. Thirdly, one of the main difficulties in implementing model-based calibration for micro-scale applications is the narrow depth-of-field of the microscope optics [32][76]. Since model-based calibration requires a series of images captured throughout the whole working volume in order to determine the camera's and projector's intrinsic and extrinsic parameters, the narrow depth-of-field of the microscope lens will produce defocused/blurred images and hence introduce large calibration errors. To overcome the narrow depth-of-field, a technique for generating all-in-focus images is needed for the calibration process [77][78] (a minimal focus-fusion sketch is given below). Additionally, to perform the analytical calibration of an SL system, a calibration object is required to determine the relationship between the digital image coordinate system and the calibration reference coordinate system and to build an accurate mathematical model of the hardware components [79]. The precision of the analytical calibration is directly related to the reference calibration target. Hence, it is necessary to utilize a proper calibration object with accurate reference points, depending on the desired measurement accuracy of the SL system. In camera calibration methods for macro-scale applications [54][52], a calibration pattern is commonly used to provide the reference 3D world coordinates with high precision. At the micro-scale, it is important to utilize a precise sub-millimeter 3D calibration structure containing micro-markers that serve as well-distinguishable reference points for the calibration [66]. A proper calibration object must have the following characteristics: a) a 3D pattern with texture so that the features can be identified by the imaging sensor [80]; b) distinguishable features in order to avoid ambiguity problems when identifying the reference features [80]; and c) features at different depths in order to provide 3D reference features [80]. From the literature, the popular micro-fabricated calibration objects widely used in micro-scale calibration are based on optical lithography engraving [65][64][80], a water drop covered with nickel filings [81], and a highly accurate micromanipulator [66].
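As noted above, an all-in-focus image can be assembled from a focal stack before calibration. The sketch below shows one common approach, a variance-of-Laplacian focus measure with per-pixel selection; it is a minimal illustration in Python with OpenCV/NumPy, not the specific focus-fusion algorithm adopted later in this thesis, and the window size is an assumed tuning parameter.

```python
import cv2
import numpy as np

def fuse_focal_stack(images, window=9):
    """Fuse a z-stack of gray-scale images into one all-in-focus image.

    images : list of HxW arrays captured at different focus depths
    window : size of the local window used for the focus measure (assumed value)
    """
    stack = np.stack([img.astype(np.float32) for img in images], axis=0)
    focus = np.empty_like(stack)
    for k, img in enumerate(stack):
        # Focus measure: local variance of the Laplacian response.
        lap = cv2.Laplacian(img, cv2.CV_32F)
        mean = cv2.blur(lap, (window, window))
        focus[k] = cv2.blur(lap * lap, (window, window)) - mean * mean
    # For every pixel, keep the intensity from the sharpest slice.
    best = np.argmax(focus, axis=0)
    rows, cols = np.indices(best.shape)
    fused = stack[best, rows, cols]
    # The index map also doubles as a coarse depth-from-focus estimate.
    return fused.astype(images[0].dtype), best
```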

43 Finally, for micro-scale measurements with SL systems, the current calibration techniques proposed in the literature rely on linear calibration parameters, and consider only the measurement errors in height. Hence, these techniques are limited to calibrating SL systems that measure relative flat parts such as MEM parts. Through the investigation of conventional structured light sensory system, it was found that the existing techniques suffer greatly from calibration accuracy because of the microscope lens characteristics. In order to develop an SL system that can accurately measure micro-scale 3D parts with large complex variations in their geometrical shapes, analytical calibration technique considering the model parameters (intrinsic and extrinsic) of the microscope lenses is needed. 2.4.Chapter summary In general, the calibration technique of SL sensory systems can be categorized into empirical calibration methods or analytical calibration methods [41]. Current calibration techniques used for SL systems for micro-scale applications utilize empirical methods, where the simplified linear depth function is utilized to determine the depth profile of the part without considering the non-linear effects at this scale. These non-linear effects include the distortions due to the camera and projector lenses, camera sensor noise, and the non-linear intensity values from the projector. When applying the linear depth function to measure parts with large complex shape variations, the aforementioned non-linear effects will introduce measurement errors in the depth profiles of the parts. Hence, current calibration techniques are only appropriate for relatively flat parts [32]. In order to develop an SL system that can accurately measure micro-scale 3D parts with large complex variations in their geometrical shapes, an analytical measurement method using a model-based calibration approach is needed. Such calibration procedures require accurate optical models of the camera and projector that consider the microscope lens parameters for these components. These parameters include microscope lens focal length, focal distance, optical tube length, and lens distortions. An additional challenge that needs to be addressed for such applications is obtaining in-focus images of the entire complex part due to the very narrow depth-of-field (DOF) of the microscope lenses. To date, no SL system calibration method has been developed for complex micro-scale parts that considers both the microscope lenses parameters and the narrow depth-of-field (DOF) behaviour of these lenses [32]. 29

44 Chapter 3 Structured light sensor system for Macro-scale application In this chapter, the development of the 3D sensory system for measuring complex macro-scale parts is presented. Namely, the hardware and software components of the sensory system are detailed, and the calibration of the system parameters is discussed. The overall 3D sensory system architecture is shown in Figure 16. Figure 16: System overview with the 3D set up and the measured part 3.1.System setup The proposed 3D structured light sensory system consists of a projector (active light source), a camera (capturing sub-system) and a part of interest to be measured. The proposed 3D structured light sensory system is based on projecting and capturing two unique sets of patterns. By using the SL techniques to obtain and analyze the deformations of a designed light pattern projected onto the part of interest, the 3D surface profile of a part can be obtained Hardware development The SL sensory system utilizes a DLP projector (Texas Instrument Inc. DLP Light Commander Projector) with a native resolution of 1024x768 and a brightness of 200 ANSI lumens to project 30

45 the coded fringe patterns onto an object of interest, Figure 17. The projector's controller board (DLPC200 controller chip) is programmed to use the DMD micromirror chip (DLP 5500 DMD) to project three phase-shifted fringe patterns in monochrome mode at high speed. The DMD chip is composed of an array of micromirrors, each representing a pixel in the projected image. The projected patterns are captured using a CCD camera (Prosilica GE680C) with a resolution of 640x480 pixels. The projector and camera are mounted on an aluminum plate with a fixed relative pose with respect to each other. The camera can be exposed for as little as 25 μs; however, it requires 5000 μs of readout time (the time required to digitize the CCD cell voltages). Synchronization is done through a microcontroller, which triggers the camera and the projector simultaneously by sending an impulse function. The camera is configured to have an exposure time of 4800 μs when triggered, and the projector is configured to illuminate for 4800 μs when triggered. An impulse function with a pulse width of 10 μs and a period of 9800 μs is sent to both the camera and the projector. The camera and projector synchronization setup yields a system frame rate of 102 Hz.

Figure 17: 3D Sensory System Hardware (DLP projector, camera, aluminum plate, and triggering unit)

Software

In order to define the most suitable SL technique for our application, we have defined the following criteria: i) it should provide sub-pixel accuracy for complex static parts, ii) it should be computationally inexpensive and implementable in real-time, and iii) it

46 should be part-color independent. Among the available fringe projection techniques for SL systems, the sinusoidal phase-shifting technique is superior to others due to its ability to provide high resolution and fast processing speed [82]. The sinusoidal phase-shifting technique consists of projecting a sequence of continuous patterns with sinusoidal intensity profiles. In the system software design, a multiple-coded-pattern sinusoidal phase-shifting technique in monochrome mode was implemented in order to determine the correspondence between the camera and projector [83]. This technique allows for intensity normalization of the captured images, making it robust against various part surface colors [84]. In detail, the sinusoidal phase-shifting patterns were implemented using two sets of three continuously phase-shifted patterns with sinusoidal intensity profiles [83]. The first set consists of three patterns with five vertical fringes; the second set consists of three patterns with one vertical fringe. For both sets, each pattern is shifted by 2π/3 with respect to the other patterns, Figure 18.

Figure 18: Projected sinusoidal phase-shifted patterns (Set 1: Images 1-3; Set 2: Images 1-3)

In order to project these two sets of patterns, the three patterns in each set are designed in grayscale and loaded into the DLP Light Commander projector for sequential projection. The DLP Light Commander projector allows the projected patterns to be coded at the pixel level. The codification of the intensities of the three patterns in each set is designed and implemented as follows [85]:

I_1(x, y) = I'(x, y) + I''(x, y) cos[θ(x, y) − 2π/3]   (3)
I_2(x, y) = I'(x, y) + I''(x, y) cos[θ(x, y)]          (4)
I_3(x, y) = I'(x, y) + I''(x, y) cos[θ(x, y) + 2π/3]   (5)

47 For a given pixel, I(x, y) represents the intensity value, which is a function of the average intensity I'(x, y), the intensity modulation I''(x, y), and the phase value θ(x, y), which varies between 0 and 2π based on the position of the pixel within the fringe period. The projected patterns are captured by the synchronized CCD camera. The first set (Set 1) of the captured images is used to obtain the phase of each pixel by solving Equations (3)-(5) for θ(x, y) [83]:

θ(x, y) = tan⁻¹( √3 [I_1(x, y) − I_3(x, y)] / [2 I_2(x, y) − I_1(x, y) − I_3(x, y)] )   (6)

Furthermore, a robust and efficient phase unwrapping technique [83], which is able to handle the discontinuities of the sinusoidal phase-shifting technique, is implemented. The technique uses the three single-fringe sinusoidal phase-shifted patterns in Set 2 to remove the fringe discontinuities of Set 1. This is achieved by first obtaining the phase of Set 2 using Equation (6). Since Set 2 has only one fringe in its pattern, there are no fringe discontinuities in its image. Once the phase map for Set 2 is obtained, its phase values are utilized to unwrap the phase values of Set 1 [83]:

θ_a(x, y) = floor( θ_2(x, y) / (2π) · Num_Fringe ) · 2π + θ_1(x, y)   (7)

For a given pixel, θ_a(x, y) represents its absolute phase value, which is obtained using: a) the relative phase value of Set 2, θ_2(x, y), b) the number of fringes in the Set 1 patterns, Num_Fringe, and c) the relative phase value of Set 1, θ_1(x, y). Once the absolute phase value is determined and the final phase-unwrapped image is obtained, the absolute phase values of the pixels in the phase-unwrapped image are matched with the phase values of the projected pixels in order to obtain the pixel-to-pixel correspondence between the camera and the projector. Triangulation is then used to convert the correspondence information into the 3D representation of an object using a phase-to-height algorithm based on the intrinsic and extrinsic parameters of the camera and the projector [52].

Triangulation configuration and operation

To set up the optimal system configuration of the SL system, a 3D model was built in SolidWorks based on the following: a) the relationship between the camera and projector, b) the field-of-view (FOV) of the components, and c) the depth-of-field (DOF) of the components. The
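Equations (3)-(7) translate directly into a few lines of array code. The sketch below first generates one set of three phase-shifted fringe images (Eqs. 3-5) and then recovers the wrapped and absolute phase from captured images (Eqs. 6-7); it is a minimal Python/NumPy illustration, with the average intensity, modulation depth, and resolution chosen as example values rather than the exact settings of this system.

```python
import numpy as np

def make_patterns(width=1024, height=768, num_fringes=5, i_avg=127.5, i_mod=100.0):
    """Equations (3)-(5): three sinusoidal patterns shifted by -2*pi/3, 0, +2*pi/3."""
    theta = 2 * np.pi * num_fringes * np.arange(width) / width
    return [np.tile(i_avg + i_mod * np.cos(theta + s), (height, 1)).astype(np.uint8)
            for s in (-2 * np.pi / 3, 0.0, 2 * np.pi / 3)]

def wrapped_phase(i1, i2, i3):
    """Equation (6): wrapped phase of one three-step set, mapped into [0, 2*pi)."""
    i1, i2, i3 = (np.asarray(i, dtype=float) for i in (i1, i2, i3))
    phi = np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)
    return np.mod(phi, 2.0 * np.pi)

def absolute_phase(theta1, theta2, num_fringes=5):
    """Equation (7): unwrap the multi-fringe phase (Set 1) using the
    single-fringe phase (Set 2), which has no discontinuities."""
    order = np.floor(theta2 / (2.0 * np.pi) * num_fringes)   # fringe order
    return order * 2.0 * np.pi + theta1

# Typical use with captured images c1a..c1c (Set 1) and c2a..c2c (Set 2):
# theta1 = wrapped_phase(c1a, c1b, c1c)
# theta2 = wrapped_phase(c2a, c2b, c2c)
# theta_abs = absolute_phase(theta1, theta2, num_fringes=5)
```

The resulting absolute phase map is what is matched against the projected phase values to obtain the pixel-to-pixel correspondence used in the triangulation step.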

48 model was used to visualize the physical hardware configuration and the measurement volume of the optimal configuration derived from the generic design methodology for three-dimensional structured-light sensory systems [34]. In the 3D model, the intersection of the optical axes of the components (camera and projector) was fixed onto the measured part, and the camera's and projector's working distances were expressed as a function of the following input parameters of the components: the separation of the nodal points of the components, the tilting angles of the camera and projector, the pixel sizes of the camera and projector, the apertures of the camera and projector lenses, the focal lengths of the camera and projector, the resolutions of the camera CCD and projector DMD, and the overlapping volume obtained from the FOVs and DOFs of the camera and projector. The values of the input parameters of the optimal configuration are presented in Table 1.

Table 1: Input configuration parameters
System Z distance to part (Zp): 356 mm
Width between camera and projector (W): mm
Height between camera and projector (H): mm
Length between camera and projector (L): ( ) mm
Camera horizontal tilting angle (α):
Projector vertical tilting angle (β):
Projector pixel size (CoCp): 0.0108 mm
Camera pixel size (CoCc): 0.0074 mm
Camera aperture (AFc): 4
Projector aperture (AFp): 2.8
Camera focal length (Fc): 16 mm
Projector focal length (Fp): 28 mm
Camera resolution (PixelHc, PixelVc): 640 x 480
Projector resolution (PixelHp, PixelVp): 1024 x 768

49 Figure 19: 3D model of the SL system

The geometrical distances between the camera, the projector, and the measured part are first defined. Then the intersection between the optical axes of the components (camera and projector) and the measured part is defined. Knowing the input variables of the sensory system's Z distance to the part, Zp, the camera horizontal tilting angle, α, the width between camera and projector, W, and the length between camera and projector, L, the projection of the camera, M, and the projection of the projector, P, are derived through trigonometric relationships:

M = Zp / sin(α)   (8)

P = ( M² + W² + L² − 2·M·(W² + L²)^(1/2)·cos(α − tan⁻¹(L/W)) )^(1/2)   (9)

50 With the width between camera and projector, W, the length between camera and projector, L, the projection of the camera, M, and the projection of the projector, P, the projector's horizontal tilting angle, τ, is derived:

τ = cos⁻¹( (W² + L² + P² − M²) / (2·(W² + L²)^(1/2)·P) )   (10)

Furthermore, with the known input parameters of the projector vertical tilting angle, β, the height between camera and projector, H, the projection of the camera, M, and the projection of the projector, P, the camera's vertical tilting angle, γ, is derived:

γ = tan⁻¹( (tan(β)·P − H) / M )   (11)

With the known parameters of the projector vertical tilting angle, β, the camera's vertical tilting angle, γ, the projection of the camera, M, and the projection of the projector, P, the working distance of the projector, Wdp, and the working distance of the camera, Wdc, are derived:

Wdp = P / cos(β)   (12)
Wdc = M / cos(γ)   (13)

With the known parameters of the focal length of the component, f, the component lens aperture, AF, the individual pixel width and height of the component, CoC, and the component's working distance, Wd, the depth-of-field, DOF, is derived. The DOF is the sum of the near depth-of-field, DN, and the far depth-of-field, DF, of the component:

DN = Wd · ( 1 − ( f² / (AF·CoC) ) / ( f² / (AF·CoC) + f + Wd − 2f ) )   (14)

DF = | Wd · ( 1 − ( f² / (AF·CoC) ) / ( f² / (AF·CoC) + f − Wd ) ) |   (15)

51 With the known component working distance, Wd, the component pixel size, CoC, the component vertical resolution, PixelV, the component horizontal resolution, PixelH, and the focal length of the component, f, the field-of-view of the component, FOV, is derived (evaluated with PixelH and PixelV for the horizontal and vertical FOV, respectively):

FOV = Wd · Pixel · CoC / f   (16)

The measurement volume shown in Figure 20 is determined from the model based on the overlapping volume of the FOV and DOF of the camera and projector.

Figure 20: 3D model of SL system with FOV and DOF

By inputting the parameters of the optimal configuration from Table 1 into the 3D model, the output parameters of the optimal configuration are obtained, as shown in Table 2.

Table 2: Output parameters of the optimal configuration
Camera FOV: 109 mm x 82 mm
Projector FOV: 129 mm x 96 mm
Camera DOF: 30.3 mm
Projector DOF: 19.9 mm
Measurement volume: 9.5 cm x 7.5 cm x 10 cm
Working range of the sensor: mm
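The geometric relations in Equations (8)-(16) are straightforward to script, which makes it easy to re-run the configuration study for other hardware. The sketch below is a minimal Python version for the camera side; the numerical inputs shown are only those quoted in Table 1, and the remaining angles/offsets are placeholders that must be replaced with the real setup values.

```python
import math

def camera_geometry(Zp, alpha, W, L, H, beta, f, AF, CoC, pix_h, pix_v):
    """Evaluate Eqs. (8)-(16) for one component (camera side shown)."""
    M = Zp / math.sin(alpha)                                   # Eq. (8)
    P = math.sqrt(M**2 + W**2 + L**2
                  - 2 * M * math.hypot(W, L)
                  * math.cos(alpha - math.atan2(L, W)))        # Eq. (9)
    gamma = math.atan((math.tan(beta) * P - H) / M)            # Eq. (11)
    Wd = M / math.cos(gamma)                                   # Eq. (13)
    hyper = f**2 / (AF * CoC)                                  # f^2 / (AF*CoC)
    DN = Wd * (1 - hyper / (hyper + f + Wd - 2 * f))           # Eq. (14)
    DF = abs(Wd * (1 - hyper / (hyper + f - Wd)))              # Eq. (15)
    fov_h = Wd * pix_h * CoC / f                               # Eq. (16)
    fov_v = Wd * pix_v * CoC / f
    return {"Wd": Wd, "DOF": DN + DF, "FOV": (fov_h, fov_v)}

# Example call; alpha, W, L, H, beta are illustrative placeholders only:
print(camera_geometry(Zp=356.0, alpha=math.radians(60), W=200.0, L=50.0,
                      H=50.0, beta=math.radians(10), f=16.0, AF=4.0,
                      CoC=0.0074, pix_h=640, pix_v=480))
```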

52 3.2. System calibration

In this section, common experimental procedures are performed on the designed SL sensory system to build a look-up table and to configure the parameters of the sensory system (projector illumination time, projector illumination power, camera exposure time, and projector/camera lens aperture) in order to obtain a one-to-one camera-projector intensity response curve with the highest achievable linear response intensity range. Furthermore, an existing analytical calibration method for structured light system calibration is implemented, and a camera model considering the influence of lens distortions is studied. The computed intrinsic and extrinsic parameters of the SL system are presented.

Intensity calibration

Intensity calibration is the process of selecting and adjusting the camera-projector intensity response curve such that the curve is linear throughout the camera's and projector's intensity response range. The camera-projector intensity response curve is important in SL systems as it influences the working range and accuracy of the 3D coordinate measurements [36]. Any non-linear behaviour in the components will result in noise or non-linearity in the camera-projector intensity response curve and generate phase errors, which reduce the measurement accuracy [36]. Ideally, a one-to-one camera-projector intensity response curve is needed to prevent any under-sampling of the projected patterns. Furthermore, the intensity range of the linear segment of the camera-projector intensity response curve determines the resolution of the projected images: a large linear intensity range with one-to-one intensity correspondence provides a higher signal-to-noise ratio and produces a higher resolution for 3D measurements [36]. Therefore, the goal of the intensity calibration is to obtain a one-to-one camera-projector intensity response curve with the highest achievable linear response intensity range. The common experimental procedure for obtaining the camera-projector intensity response curve is to project a series of intensity levels, capture them with the camera, and record them in a look-up table (LUT) [86]. The projected-versus-captured intensity information in the LUT is then used to adjust the projected images in order to produce a linear camera-projector intensity response curve. The camera-projector intensity response curve is a

53 combination of many factors, such as projector illumination time, projector illumination power, camera exposure time, and projector/camera lens aperture. Currently, the SL system is synchronized for high speed measurement through a microcontroller by sending pulse functions with periods of 4800 μs and pulse widths of 10 μs, which resulting in a system frame rate of 205 Hz. Hence, the exposure time is fixed. Therefore, projector illumination power, and projector/camera lenses apertures are considered to obtain the build the LUT and to obtain the optimal camera-projector intensity response curve Varying projector power In the optical system, the aperture of the lens refers to the size of the opening in which light travels. The size of the aperture controls the collimation of the light rays onto the camera CCD. Hence, changing the aperture on the component s lenses will change the amount of light being observed. Therefore, one can adjust the apertures of the components (camera, projector) to obtain the different camera-projector intensity response curve. Though, there exists a limitation on the aperture that we can use on the DLP projectors. The DMD modulates in the DLP light commander projector as a bi-stable spatial light modulator, consisting of an array of movable micromirrors, shown in Figure 21. Each mirror is individually controlled to reflect and produce different intensities by tilting the angles of the mirrors [87]. Light reflected from on pixels is reflected generally normal to the DMD plane towards the projection lens for full illumination. Whereas in off state light is reflected at a higher angle toward a light dump for no illumination [87]. The output image is then created by the intensity modulation between the on and off state and focused into the projector lens. 39

54 Figure 21: DMD modulation in the DLP light commander projector [87] The tilt angle of the micromirrors is ±12 degrees, meaning that the lens aperture of f/2.8 or higher with projection angle larger than 24 degrees is required for the image to be fully illuminated and focused into the projector lens without losing image quality. Therefore, in our SL system, the projector lens is fixed to f/2.8 to project the designed patterns and the projector s illumination power is adjusted instead of the projector s lens aperture to obtain the cameraprojector intensity response curve. In order to obtain the appropriate camera-projector intensity response curve, the intensity of the projected images were incrementally increased from the minimum (0 intensity) to maximum intensity level (255 intensity) and then captured by the camera. At each intensity increment, the intensity values of a 200 x 200 pixel area at the center of the captured images were averaged. By changing the projector s illumination power, the maximum captured intensity can be increased. Figure 22 presents the camera-projector intensity response curves with camera aperture set to F4 and varying the projector s illumination power ranging from 30 lumens to 80 lumens to examine the effects. Figure 22 shows that intensity response curve of the lower illumination power has narrow linear response intensity range ( intensity), while the intensity response curve for the high illumination power shows under-sampling of the projected intensity. At projector illumination of 36 lumens, the camera-projector intensity response curves 40

55 has a slope of  while the camera captures the largest linear range of ; hence, a projector illumination of 36 lumens is used.

Figure 22: Camera-projector intensity response curves (captured vs. projected intensity) obtained by varying the projector illumination from 30 to 80 lumens

Varying the camera's lens aperture

In order to obtain the ideal linear curve at 36 lumens, we examined a series of camera-projector intensity response curves for camera lens apertures ranging from f/1.4 to f/16. Figure 23 presents the camera-projector intensity response curves obtained by varying the camera aperture. Varying the camera lens aperture has a larger impact on the camera-projector intensity response curve than varying the projector's illumination power. Figure 23 shows that the intensity response curves for the larger f-numbers provide a small linear response intensity range ( intensity), while the intensity response curves for the smaller f-numbers under-sample the projected intensity. A camera lens aperture of f/3 was chosen since the corresponding camera-projector intensity response curve has a slope of  while the camera captures the largest linear range of . Hence, a projector illumination power of 36 lumens with a camera lens aperture of f/3 was used.

56 Figure 23: Camera-projector intensity response curves (captured vs. projected intensity) obtained by varying the camera aperture from f/1.4 to f/16

Figure 24 presents the camera-projector intensity response curve and the linear fit of the intensity values prior to adjusting the projected intensity using the LUT. The projected intensity patterns were then adjusted through the LUT according to the fitted linear equation in order to achieve a one-to-one camera-projector intensity response curve. The procedure for obtaining the camera-projector intensity response curve was then repeated using the newly adjusted projected images. The adjusted camera-projector intensity response curve of the system is presented in Figure 25, where it is close to a one-to-one ratio between the camera and projector intensities. The intensity response error after intensity calibration is presented in Figure 26. Based on this intensity response error, the linear projection intensity range is chosen to be within  to prevent a higher intensity response error. Note that all estimated errors within this intensity range are within ±2.5 intensity levels, which is close to ideal. A minimal sketch of the LUT-based linearization follows.
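The LUT adjustment described above can be expressed compactly: characterize the captured-vs-projected response, then pre-distort each projected gray level so that the captured response becomes one-to-one. The following Python/NumPy sketch shows one common variant that inverts the measured response (rather than only the fitted line); the paired response arrays are assumed to be already measured, the response is assumed monotonic, and the clipping limits are illustrative.

```python
import numpy as np

def build_linearizing_lut(projected, captured, lo=40, hi=210):
    """Build a 256-entry LUT that pre-distorts projector gray levels so the
    captured response becomes (approximately) one-to-one.

    projected, captured : 1-D arrays of the measured response curve
    lo, hi              : usable linear range of projected intensities (assumed)
    """
    # Invert the measured response: for each desired captured level, find the
    # projected level that produces it (monotonic response assumed).
    desired = np.arange(256, dtype=float)
    inverse = np.interp(desired, captured, projected)
    return np.clip(np.round(inverse), lo, hi).astype(np.uint8)

def apply_lut(pattern, lut):
    """Pre-distort an 8-bit pattern before sending it to the projector."""
    return lut[pattern]

# lut = build_linearizing_lut(projected_levels, captured_means)
# corrected = apply_lut(sinusoidal_pattern, lut)
```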

57 Figure 24: Camera-projector intensity response curve Figure 25: Linearized intensity response 43

58 Figure 26: Linear intensity response error

Random noise and thermal noise

Random noise occurs in the camera CCD when shot noise is present [88] or in the projector DMD when the illumination is unstable. Thermal noise occurs when dark noise is present in the camera CCD [88], or when the projector DMD overheats over a period of time. In order to determine the random noise and thermal noise in the SL system, an intensity experiment was performed over an extended time period. At ten-minute intervals, the camera-projector intensity response curve was obtained: a series of projected images incrementally increased from the minimum intensity level (40) to the maximum intensity level (210) was captured, and at each intensity increment the intensity values of a 200 x 200 pixel area at the center of the captured images were averaged and recorded. The intensity response error at the minimum intensity is presented in Figure 27 and at the maximum intensity (210) in Figure 28. The results show that the higher intensity introduces a larger error (±3 intensity levels) than the lower intensity (±1.5 intensity levels). The results further show that the errors do not accumulate over time; therefore, thermal noise is not noticeable in the designed SL

59 sensory system, and that the random noise affects higher intensity values more than lower intensity values.

Figure 27: The intensity response error at the minimum intensity (intensity error vs. time)
Figure 28: The intensity response error at the maximum intensity (intensity error vs. time)

Analytical calibration method

An analytical calibration method is used to model and determine the components' intrinsic parameters (focal length, principal point, and lens distortion) and extrinsic parameters (relative orientation and position of the camera and projector) [53]. The intrinsic parameters describe the geometric relationship between the 3D reference points and their 2D images on the camera and projector sensors [53]. The extrinsic parameters provide the geometric relationship between the camera, the projector, and the measured part [53]. These accurate model parameters (intrinsic and extrinsic) are needed to solve for the 3D coordinate information of parts using the 3D sensory system. To calibrate the SL system components' intrinsic and extrinsic parameters, a calibration toolbox [89] with a calibration method similar to the flexible camera calibration technique [56], which consists of a closed-form solution followed by a non-linear minimization approach, was utilized. Two sets of sinusoidal phase-shifted fringe patterns are used to obtain pixel-to-pixel

60 correspondence between the camera and projector. The first set consists of three patterns with five horizontal fringes and three patterns with one horizontal fringe. The second set consists of three patterns with five vertical fringes and three patterns with one vertical fringe. The absolute phase values of the two sets of patterns are assigned to the projector pixels to establish the pixel-to-pixel correspondence between the projector DMD and the camera CCD. With the pixel-to-pixel correspondence, the image coordinates of the 3D calibration points can be mapped from the camera sensor to the projector sensor. Hence, the projector is treated as a reverse camera and can then be calibrated in the same way as the camera, using the calibration points in the projector's view. The calibration toolbox [89] is then employed to obtain the intrinsic and extrinsic parameters of the camera and the projector separately. An overall calibration error, including the effect of the aforementioned random noise, is presented.

Calibration target testing

For the analytical calibration method, the characteristics (intrinsic parameters) of the hardware components are determined from the relationship between reference points in 3D space and their 2D images on the sensor. To calibrate the SL system using the analytical calibration method [52], a checkerboard pattern is designed and printed to provide the reference points. Research shows that there exists a relationship between the checker size and the calibration error: a) if there are too few pixels per checker square, the corner detection error becomes dominant and the calibration cannot be performed accurately [79]; b) however, if the checker size increases beyond a certain value, the calibration error increases since fewer calibration points are available for parameter estimation [79]. An optimal ratio of pixels per checker width for structured light system calibration was found to produce the least error for both the camera and projector calibration [79]. Table 3 presents the pixels-per-checker ratios of our designed SL system. The camera pixels-per-checker ratios were obtained by capturing images of the checkerboard within the measurement volume and determining the mean pixel-to-checker ratio over all checkers on the board. The projector pixels-per-checker ratios were obtained by mapping the camera's views to the projector's view through the pixel-to-pixel correspondence and determining the mean pixel-to-checker ratio over all checkers on the board. An optimal checker size of 4 x 4 mm is selected for the calibration target.
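The per-view workflow described above (detect checkerboard corners in the camera image, transfer them to projector coordinates through the absolute-phase correspondence, then calibrate each device with a planar-target routine) can be sketched as follows. The sketch uses OpenCV's standard planar calibration call and the hypothetical camera_to_projector_coords helper introduced earlier; it illustrates the procedure only and is not the exact toolbox [89] used in the thesis.

```python
import cv2
import numpy as np

def calibrate_device(object_points, image_points, image_size):
    """Planar calibration (closed-form initialization + non-linear refinement)."""
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        object_points, image_points, image_size, None, None)
    return rms, K, dist

# obj_pts  : list of (N,3) float32 arrays of checker corners on the board plane (Z=0)
# cam_pts  : list of (N,1,2) float32 arrays of detected corners per camera view
# proj_pts : the same corners mapped into projector coordinates via the phase maps, e.g.
#            [camera_to_projector_coords(c.reshape(-1, 2), phi_v, phi_h, pitch)
#               .astype(np.float32).reshape(-1, 1, 2) for c in cam_pts]
#
# rms_c, K_c, dist_c = calibrate_device(obj_pts, cam_pts,  (640, 480))
# rms_p, K_p, dist_p = calibrate_device(obj_pts, proj_pts, (1024, 768))
```

The returned RMS values correspond to the reprojection errors reported for the camera and projector in the next subsection.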

61 Table 3: Pixels per checker ratio of our SL system calibration
Checker size (mm):                   1 mm   2 mm   3 mm   4 mm   5 mm   6 mm
Camera pixels / checker square:
Projector pixels / checker square:

Intrinsic parameters

The calibration setup of the SL system is shown in Figure 29. Based on the 3D model, 64 image positions (translated along x, y, z and rotated about x and z) within the camera's measurement volume are determined, and the selected black-and-white checkerboard pattern is placed at each of these positions for the camera calibration. Moreover, based on the 3D model, a total of 104 image positions (translated along x, y, z and rotated about x and z) are determined within the system's measurement volume for the projector calibration. The selected black-and-white checkerboard pattern is placed on a high-precision stage, shown in Figure 29, and is moved to the selected positions within the SL system's measurement volume to obtain the images for the projector calibration.

Figure 29: SL system calibration setup

62 Using the images for the camera calibration and the projector calibration, their pixel sizes (7.4 µm and 10.8 µm, respectively), and the checker pattern size (dx: 4 mm, dy: 4 mm), the intrinsic parameter matrices are determined as follows:

A_C =  mm,  A_P =  mm,

where A_C represents the intrinsic parameter matrix of the camera and A_P represents the intrinsic parameter matrix of the projector. The calibration error is defined as the difference in pixel location between the checker corners detected in the input images and the checker corners modelled using the non-linear model with the estimated intrinsic parameters. From the calibration toolbox, the camera calibration has an average error of [x: 0.25 pixels, y: 0.13 pixels], and the projector calibration has an average error of [x: 0.68 pixels, y: 0.51 pixels].

Figure 30: Camera calibration error

63 Figure 31: Projector calibration error

The principal point of the projector computed by the calibration toolbox deviates significantly from the nominal center in one direction, towards the bottom border of the DMD chip. This is because the projector is designed to project images at a tilting angle of 11 degrees along an off-axis direction [87]. The calibration toolbox is designed to model the camera's lens behaviour and is incapable of modelling additional projector behaviours such as this offset angle. To verify that the non-linear distortions of the projector are negligible, the projector was also calibrated using only a linear model, without the lens distortion terms. From the calibration toolbox, this projector calibration has an average error of [x: 0.43 pixels, y: 0.30 pixels], which is lower than the error obtained with the non-linear model. Since including the projector distortion terms does not improve the calibration, only the camera's non-linear distortion parameters are used in our SL system calibration.

Extrinsic parameters

Once the intrinsic parameters of the components (camera, projector) are determined, the camera's view and the projector's view of a single reference position of the checkerboard pattern, shown in Figure 32, are used to determine the extrinsic parameters.

64 Figure 32: Calibration image: a) camera's view, b) projector's view

The extrinsic parameters describe the geometric relationship between the two components (camera, projector) of the SL system through a single view of the checkerboard pattern placed at a common world coordinate frame. The extrinsic parameter matrix of the camera, M_C, describes the rotation and translation of the camera with respect to the reference plane, and the extrinsic parameter matrix of the projector, M_P, describes the rotation and translation of the projector with respect to the reference plane:

M_C = [R_C  T_C]   (17)
M_P = [R_P  T_P]   (18)

M_C =  mm, and M_P =  mm
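Because both extrinsic matrices are expressed with respect to the same reference plane, the relative pose from projector to camera (the quantity actually used in triangulation) follows by composing the two transforms. A minimal NumPy sketch, assuming R_C, T_C, R_P, T_P are available as arrays:

```python
import numpy as np

def relative_pose(R_c, T_c, R_p, T_p):
    """Pose of the projector expressed in the camera frame.

    Both (R, T) pairs map reference-plane coordinates into the respective
    device frame: x_cam = R_c @ X + T_c and x_proj = R_p @ X + T_p.
    """
    R_cp = R_c @ R_p.T                 # rotation projector -> camera
    T_cp = T_c - R_cp @ T_p            # translation projector -> camera
    return R_cp, T_cp

# R_cp, T_cp = relative_pose(R_c, T_c, R_p, T_p)
# The camera-projector baseline length is then np.linalg.norm(T_cp).
```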

65 3.3. Chapter Summary

A 3D sensory system based on a structured light approach is designed and developed for macro-scale applications. A multiple-coded sinusoidal phase-shifting technique is implemented to obtain the pixel correspondence between the camera and projector. Furthermore, a 3D model is developed to analyze and visualize the optimal configuration of the SL system. The system calibration, namely the intensity calibration and the estimation of the system's intrinsic and extrinsic parameters, is discussed. Calibration results are presented along with a discussion of the error analysis.

66 Chapter 4 Experiments for Macro-scale application

In this chapter, the extensive experiments conducted to verify the use of the designed 3D SL sensory system for measuring macro-scale 3D parts are presented. The experiments are used to: (i) determine the sensor's ability to obtain 3D coordinate measurements, (ii) identify the sources of error in the 3D SL sensory system, and (iii) design and implement a fast and simple error compensation technique to compensate for the overall 3D measurement errors.

4.1. Measurement Error

Experiments were conducted to verify the performance of the SL sensory system in the optimal hardware configuration for measuring small parts. A high-precision linear stage, Aerotech Model ATS212, with a repeatability of 1 μm, was utilized in the experiments. The following subsections discuss the measurement procedure for obtaining the measurement error along the z-axis of the world coordinate system.

Depth (Z-axis) Experiment

The experimental setup is shown in Figure 29. The sensor coordinate frame is aligned to the world coordinate frame. A flat plane was placed within the measurement volume on the high-precision stage and moved along the z-axis of the world coordinate system in 0.1 mm increments, covering a depth of ±3 mm about the center of the measurement volume. At each location of the plane, 40,000 measurement points were obtained from the plane in the sensor coordinate frame and compared with the actual travel distance of the plane in the world coordinate frame to determine the root-mean-square (RMS) error and the standard deviation of the measurements. Figure 33 shows the 3D point cloud of the flat plane at the center of the measurement volume.

67 Figure 33: 3D point cloud of the flat plane

Figure 34 shows the RMS errors in depth (z-direction) within the working range. The RMS errors and the standard deviations of the measurements are shown in Table 4.

Table 4: RMS error and standard deviation of the SL sensory system
                      Maximum    Minimum    Mean
RMS error             mm         mm         mm
Standard deviation    mm         mm         mm

Figure 34: RMS errors of the SL sensory system in the Z direction within the 6 mm range
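The statistics reported in Table 4 and Figure 34 reduce to a simple computation: for each stage position, compare the measured Z of every point on the plane against the commanded travel and accumulate the RMS error and standard deviation. A minimal Python/NumPy sketch of that bookkeeping (array names are illustrative):

```python
import numpy as np

def depth_error_stats(point_clouds, travels, z_ref=0.0):
    """RMS error and standard deviation of plane measurements along Z.

    point_clouds : list of (N, 3) arrays, one 40,000-point cloud per stage position
    travels      : commanded stage displacements (mm) for each position
    z_ref        : measured Z of the reference (zero-travel) plane
    """
    rms, std = [], []
    for cloud, d in zip(point_clouds, travels):
        err = (cloud[:, 2] - z_ref) - d      # measured minus actual displacement
        rms.append(np.sqrt(np.mean(err ** 2)))
        std.append(np.std(err))
    return np.array(rms), np.array(std)

# rms, std = depth_error_stats(clouds, np.arange(-3.0, 3.01, 0.1))
# print(rms.min(), rms.max(), rms.mean())
```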

68 Figure 35: Metric step block

A metric step block with certified dimensions (American Society for Testing and Materials International E797 metric step block) was placed on the reference plane of the world coordinate system, shown in Figure 35, and also measured in order to evaluate the system performance when measuring an object with known depth variations, Table 5. The mean measured height of each step was compared with the corresponding certified height. Figure 36 shows the surface profile of the step block obtained from the measurements.

Table 5: Measurement results of a certified metric step block
Certified height    Optimal configuration
1.0 mm              mm
2.5 mm              mm
5.0 mm              mm

69 Figure 36: 3D surface reconstruction of metric step block measured with the optimal configuration With the developed SL sensory system, the measurement error in the z-direction of the world coordinate system within the working range was determined to be: minimum RMS error: mm, maximum RMS error: mm, mean RMS error: mm Object Surface Measurement As a further evaluation of the optimally configured SL sensory system, a set of complex objects were measured. The objects chosen were (a) a LEGO part with an array of protruding pins on its surface; (b) a propeller with four slender blades with evenly distributed small holes; and (c) a curved gear with both convex and concave regions and small teeth. The measurements show the potential in using the proposed SL sensory system in obtaining 3D surface profiles of different objects with varying surface complexities. (a) 55

70 (b) Figure 37: 3D point clouds of complex objects: (a) LEGO piece, (b) propeller, and (c) gear (c) 4.2.Error Identification In this section, experiments were conducted to identify the sources of errors of the SL sensory system. The following subsections identify the errors in the 3D sensory system. The experiments focused on: a) analyzing system stability, and b) determining the effects of illumination conditions on 3D sensing 56

71 4.2.1.System Stability In Chapter 3, the presence of random noise in the intensity level of the SL sensory system was shown. Random noise can be a result of the component s (camera and projector), system vibration, and/or ambient lighting. In this section, the influence of the random noise error on the 3D measurement is experimentally investigated to analyze the stability of the 3D sensory system. The stability of the system is determined by performing 3D measurements of a flat plane within the measurement volume over a long period of time. The plane was placed on the aforementioned high-precision stage and moved along the z-axis of the world coordinate system at 0.1 mm increments, covering a depth of ±3 mm at the center of the measurement volume. At each location, 40,000 measurement points were measured every five minutes for period of 120 minutes. The measurement points were compared to the initial measurements presented in Figure 34 in order to determine the error deviation over time. Figure 38 presents the RMS of the error deviation of the planes over the time period within the measurement volume. The SL sensory system provides accurate measurements with an RMS error deviation of mm and a standard deviation of mm. The RMS error deviation of the designed 3D sensory system at each plane location is less than 2% compared to the RMS error of the measurements and is less than 10% compared to the standard deviation of the measurements, and therefore the random noise in the intensity level only impact less than 2% of the 3D measurement results. Figure 38: Error deviation of the flat plane over time 57

72 4.2.2. Object Surface Effects on Measurements

The ability to obtain accurate 3D coordinate measurements of a part is highly dependent on the part's surface properties, since the captured intensity of the patterns varies with the reflectivity of the material. In this section, 3D measurements of objects with different surface materials are compared. The experiment consisted of obtaining the 3D profiles of the aforementioned metric step block with either a bare stainless-steel surface or a matte white painted surface. The results are presented in Figure 39. The step block in Figure 39 (a) is made of stainless steel and has a specular surface. The specular surface reflects the projected intensities in an unpredictable manner, causing the reflected light to saturate the camera's sensor in multiple regions. During the 3D measurement process, the sensor processes the captured images to determine the absolute phase values of the step block through the phase-shifting algorithm implemented in [83] and then obtains the 3D measurements. Within the saturated regions of the images, maximum intensity values, and hence incorrect phase values, are obtained regardless of the projected pattern's intensity, and therefore the system fails to generate accurate 3D measurements there. When a matte white paint is applied to the step block, Figure 39 (b), the light reflects uniformly and the sensor is able to accurately obtain 3D measurements of the object.

Figure 39: 3D point clouds of the step block: (a) metal surface, (b) painted matte white surface

73 Additionally, two objects with different plastic surfaces, ABS and soft PVC with dyed plastic strands, were also measured using the 3D sensory system. The results are presented in Figure 40. The complex gear in Figure 40 (a) is made of ABS, while the Barbie in Figure 40 (b) is made of soft PVC with dyed plastic strands. Figure 40 shows that the two surface materials have very different reflectivity. When the same intensity is projected onto the objects, the ABS reflects the projected intensities onto the camera sensor, while the PVC with dyed plastic strands absorbs most of the projected intensity. The results in Figure 40 show that detailed 3D measurements can be captured when proper intensities are reflected off the ABS, while the detail in the 3D measurements is degraded when there is little to no reflection.

Figure 40: 3D point clouds of: (a) complex gear (ABS), (b) Barbie (PVC with dyed plastic strands)

Based on the 3D measurements of objects with different surface materials, the sensor is shown to be sensitive to the light reflection and absorption coefficients of the material. High reflectivity can cause saturated intensities, while high absorption can cause degraded intensities. The material's light reflection and absorption coefficients strongly influence the intensities of the light reflected to the camera.

74 Hence, for measuring different materials, the camera-projector intensity response curve needs to be readjusted.

4.3. Error compensation methods

In this section, error compensation methods are employed to further analyze and improve the accuracy of the 3D measurements. The following compensation methods were investigated: a) compensating the absolute phase values, b) optimizing the projected pattern angle, and c) obtaining more accurate absolute phase values using a different phase-shifting technique.

4.3.1. Phase Error Compensation

Pattern projection methods using phase-shifted patterns are widely used for 3D reconstruction in many structured light applications [74]. The accuracy of phase-shifting methods is usually influenced by error sources such as phase-shift error [36], non-sinusoidal waveforms [36], camera noise (dark current noise, shot noise, readout noise, and electronics noise) [18], camera non-linearity, and vibrations due to different sources (i.e., power source, fan, environment). Furthermore, research shows that the phase error in the current system setup is a result of mislabelled intensities within the pixels caused by: a) the non-linearity of the camera and projector intensity response curve [36], b) the camera-projector pixel ratio, and c) aliasing and quantization effects in the captured images [90]. The non-linearity of the camera and projector intensity response curve often causes the non-sinusoidal nature of the projected fringes. The system's triangulation configuration and/or a low camera-projector pixel ratio can also cause a single camera pixel to capture a linear combination of two or more adjacent projector pixels, resulting in incorrect pixel labelling. The low camera-projector pixel ratio and quantization effects further cause spatial aliasing in the captured images. Furthermore, the instability of the projector light source and the camera CCD over time, combined with the quantization effect, can cause temporal aliasing in the captured images. All of these error sources contribute to the deviation of each pixel's intensity from its true value, which results in incorrect phase values.

75 Techniques for correcting phase errors in the phase-shifting technique have been proposed [36],[91]. These techniques are suitable for pattern projection methods that use phase-shifted patterns to obtain the absolute phase. One of the most commonly applied phase error compensation techniques is to construct a phase-error LUT that maps the uncompensated phases to their ideal phases [36]; this approach was implemented on the designed 3D sensory system. The absolute phase values obtained using the phase-shifting and phase-unwrapping techniques described in Chapter 3 are compensated. Figure 41 shows the camera-projector intensity response curve and the corresponding phase error obtained in [36]. Figure 42 shows the camera-projector intensity response curve of the designed 3D sensory system. For each pixel, the corresponding ideal absolute phase value is modelled using the phase-shifting and phase-unwrapping techniques described in Chapter 3. By taking the difference between the actual absolute phase value obtained with the designed 3D sensory system and the ideal absolute phase value, the phase error of the system is computed, as shown in Figure 42. Compared with the camera-projector intensity response curve in [36], our designed 3D sensory system shows less intensity deviation, and hence a lower phase error.

Figure 41: Camera-projector intensity response curve and phase error [36]
Figure 42: Camera-projector intensity response curve and phase error of the designed system
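The LUT-based compensation can be sketched as follows: the phase error is tabulated as a function of the wrapped phase (one fringe period split into bins), and each measured phase is corrected by its binned error. This is a minimal Python/NumPy illustration of that general idea rather than the exact procedure of [36]; the bin count is an assumed parameter.

```python
import numpy as np

NUM_BINS = 256  # resolution of the error table (assumed)

def build_phase_error_lut(theta_measured, theta_ideal):
    """Average phase error in each wrapped-phase bin over a flat reference plane."""
    err = theta_measured - theta_ideal
    bins = (np.mod(theta_measured, 2 * np.pi) / (2 * np.pi) * NUM_BINS).astype(int)
    bins = np.clip(bins, 0, NUM_BINS - 1)
    lut = np.zeros(NUM_BINS)
    for b in range(NUM_BINS):
        sel = bins == b
        if np.any(sel):
            lut[b] = err[sel].mean()
    return lut

def compensate_phase(theta, lut):
    """Subtract the tabulated error from a measured absolute phase map."""
    bins = (np.mod(theta, 2 * np.pi) / (2 * np.pi) * len(lut)).astype(int)
    bins = np.clip(bins, 0, len(lut) - 1)
    return theta - lut[bins]
```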

The most common trait of phase error is the presence of periodic fringe patterns on the reconstructed 3D measurement; the more the camera-projector intensity response curve deviates from the ideal, the larger these periodic fringe patterns become. From the reconstructed planar surface in Figure 43 (c), the presence of periodic fringe patterns shows that the designed SL sensory system experiences phase error in its 3D measurements. Figure 43 (a), (b) further shows a 3D reconstructed planar surface before and after phase correction of our system. Furthermore, Figure 43 (c) shows the center row cross-section of the 3D reconstructed planar surfaces. From Figure 43 (c), the measurement result of the 3D reconstructed planar surface improved to: minimum RMS error: mm, maximum RMS error: mm, mean RMS error: mm when the phase error is compensated.

Figure 43: a) 3D reconstructed planar surface before phase correction, b) 3D reconstructed planar surface after phase correction, c) Center row cross-section of the uncompensated and compensated 3D reconstructed planar surfaces

These experimental results confirmed that the designed SL sensory system experiences phase error and that the implemented phase error compensation technique improved the accuracy of the designed 3D sensory system by reducing pattern errors in the 3D measurements.
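To make the LUT-based correction concrete, the following is a minimal Python sketch of the idea, assuming the phase error is periodic in the wrapped phase so that a single one-period LUT can be built from a flat reference plane; the function names, the binning resolution, and the use of NumPy are illustrative rather than the exact implementation used in this work.

import numpy as np

def build_phase_error_lut(measured_phase, ideal_phase, n_bins=1024):
    # Bin the phase error of a flat reference plane over one 2*pi period.
    wrapped = np.mod(measured_phase, 2 * np.pi)
    error = measured_phase - ideal_phase
    edges = np.linspace(0.0, 2 * np.pi, n_bins + 1)
    idx = np.clip(np.digitize(wrapped.ravel(), edges) - 1, 0, n_bins - 1)
    lut = np.zeros(n_bins)
    for k in range(n_bins):
        sel = error.ravel()[idx == k]
        lut[k] = sel.mean() if sel.size else 0.0
    return lut

def compensate_phase(measured_phase, lut):
    # Look up the stored error at each pixel's wrapped phase and subtract it.
    n_bins = lut.size
    wrapped = np.mod(measured_phase, 2 * np.pi)
    idx = np.clip((wrapped / (2 * np.pi) * n_bins).astype(int), 0, n_bins - 1)
    return measured_phase - lut[idx]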

4.3.2. Optimal Illumination Angle Compensation

In this section, an existing approach for determining the appropriate orientation angle of the fringe patterns to increase the accuracy of a given system setup [92] was investigated and implemented. Studies show that the 3D sensory system is most sensitive to the part's depth variations when the projected fringe patterns are oriented at an optimal angle. Hence, the direction of the projected fringe patterns is important for the accuracy of the measurements. To determine the optimal orientation angle for the fringe patterns, a technique based on finding the largest phase change for a given depth variation on the part was investigated [92]. The phase change is determined by comparing the phase differences of the part's surface to a reference plane [92]. For this experiment, a 3D printed square block with a height of 5 mm is placed on a flat plane. The surface normal of the top of the square block is perpendicular to the sensor and aligned with the z-direction. Two sets of patterns, namely horizontal and vertical fringe patterns, are projected onto the flat plane and then onto the square block separately. The absolute phases θ_a(x, y) of the flat plane and the square block are then obtained using the phase unwrapping technique described in Chapter 3. The absolute phase differences of the vertical patterns, Δθ_a(x, y)_V, and of the horizontal patterns, Δθ_a(x, y)_H, are then calculated by subtracting the flat-plane phase maps from the block phase maps for both the vertical and horizontal phase values. The optimal fringe angle φ_o is then calculated as the arctangent of the ratio of the absolute phase differences of the vertical phase values, Δθ_a(x, y)_V, to the absolute phase differences of the horizontal phase values, Δθ_a(x, y)_H:

$\varphi_o = \tan^{-1}\left[ \Delta\theta_a(x, y)_V \, / \, \Delta\theta_a(x, y)_H \right]$    (19)

The optimal angle of 81 degrees from horizontal was computed for the designed triangulation configuration. The worst angle (referred to herein as the pessimal angle) was computed to be -8 degrees from horizontal. The two sets of patterns were then designed and sent to the projector to perform 3D measurement. Figure 44 shows: (a) the original projected patterns at 90 degrees, (b) the optimal projected patterns at 81 degrees, and (c) the pessimal projected patterns at -8 degrees. The cross-sections of the 3D profiles corresponding to the projected patterns are also presented in Figure 44.

The cross-sections of the 3D profiles of the square block measurements are further shown in Figure 45 for comparison. From the cross-sections we can see that the part's 3D measurement is sensitive to the direction of the projected fringe patterns. The height of the square block is measured highest and most accurately using the optimal fringe angle of 81 degrees, and lowest using the pessimal fringe angle of -8 degrees.

Figure 44: 3D point cloud of the square block: (a) Vertical pattern, (b) Optimal fringe angle of 81 degrees, and (c) Pessimal fringe angle of -8 degrees

Figure 45: Cross-section profile of the square block measurements

Table 6 shows that the direction of the projected fringe patterns is important for the accuracy of the measurements. By merely rotating the pattern by 9 degrees to the optimal angle, the 3D measurement result improved by 2.52% relative to the original patterns. Furthermore, when the pessimal angle is used for the projected fringe patterns, the measurement error can be as large as 95.06%. Hence it is important to select the optimal angle, which is most sensitive to the part's profile variations, when designing the projected fringe patterns for 3D sensory systems.

Table 6: Measurement results of a certified metric step block
Orientation | Original pattern angle | Optimal pattern angle | Pessimal pattern angle
Mean measured height (mm) | | |
Error (mm) | | |
Error (%) | 3.92% | 1.39% | 95.06%
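As an illustration of Equation (19), the following Python sketch estimates the optimal fringe angle from the four absolute phase maps described above (reference plane and block, for vertical and horizontal patterns); the use of the mean phase change and the arctan2 quadrant handling are assumptions of this sketch rather than details taken from [92].

import numpy as np

def optimal_fringe_angle(phase_block_v, phase_plane_v, phase_block_h, phase_plane_h):
    # Phase change caused by the known depth step, for each pattern orientation.
    dphi_v = phase_block_v - phase_plane_v
    dphi_h = phase_block_h - phase_plane_h
    # Equation (19): arctangent of the ratio of the mean phase changes, in degrees.
    return np.degrees(np.arctan2(np.mean(dphi_v), np.mean(dphi_h)))

For the configuration reported above, this evaluation gives the 81-degree optimum from horizontal.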

4.3.3. Alternative Pattern Implementation and Testing

In phase shifting techniques, the absolute phase values of the pixels in the phase-unwrapped image are used to obtain pixel-to-pixel correspondence between the camera and the projector. Triangulation is then used to convert the correspondence information into the 3D representation of an object based on the intrinsic and extrinsic parameters of the camera and the projector [52]. Hence, obtaining accurate absolute phase values is the key to increasing 3D measurement accuracy. Different phase shifting techniques exist to establish accurate pixel-to-pixel correspondences between the projector and the camera to allow for accurate triangulation [74]. One common difficulty in phase shifting techniques is obtaining an accurate relationship between the periodic relative phases. Hence, the goal of the pattern design and phase unwrapping techniques is to solve the correspondence issue caused by the periodic relative phase values, and further assign the correct absolute phase values. As previously mentioned in Chapter 3, a robust and efficient phase unwrapping algorithm [83] was implemented to assign the absolute phase values through Equation 5. In this section, a different phase shifting technique called the Modified Number-Theoretic Approach [93], based on using relatively prime multi-wavelength patterns, is implemented and compared.

Initially, two relatively prime numbers of fringes, λ_1 and λ_2, were selected. The sinusoidal phase-shifting patterns were then designed according to an integer multiple, n, of the least common multiple of the two selected fringe numbers, λ_1 and λ_2, without exceeding the projector's resolution. The intensities of the three patterns in each set are codified and implemented using Equations (1), (2), (3) in Chapter 3, where the first set consists of three patterns with λ_1 vertical fringes and the second set consists of three patterns with λ_2 vertical fringes. In both sets, each pattern is shifted by 2π/3 with respect to the other patterns. The relative phase values, (φ_R1, φ_R2), of the patterns are then solved using Equation (4) in Chapter 3. To obtain the absolute phase values, the two linear congruence coefficients, (e_1, e_2), were obtained for the two selected fringe numbers, (λ_1, λ_2), through the Chinese remainder theorem [94]:

$e_1 \equiv 1 \pmod{\lambda_1}$    (20)

$e_2 \equiv 1 \pmod{\lambda_2}$    (21)

With the two linear congruence coefficients, (e_1, e_2), and the computed relative phase values, (φ_R1, φ_R2), the absolute phase value, Φ_ABS, can then be obtained:

$\Phi_{ABS} = \left[ \sum_{i=1}^{k} \varphi_{Ri}\, e_i \right] \bmod (\lambda_1 \cdots \lambda_k)$    (22)

One disadvantage of using the linear congruence coefficients and the Chinese remainder theorem to solve for the absolute phase value is that this technique can only resolve integer values. Hence, only the integer parts of the relative phases can be used for the calculation. To increase the measurement resolution, a method for assigning fractional numbers to the absolute phase values [93] was implemented. First, the relative phases are rounded to the closest integer to compute the absolute phase values. The fractional part is then added back to the final absolute phase value. Figure 46 shows the absolute phase values obtained using only the integer values. Figure 47 shows the absolute phase values obtained with the additional fractional numbers. As shown in Figure 46 and Figure 47 (a), more pixels are assigned a unique absolute phase value when the additional fractional numbers are used, and hence the resolution of the absolute phase is improved significantly. However, when rounding the relative phase value to the closest integer based on a threshold, a jump of 1 in the absolute phase value can be seen in Figure 47 (a). The experimental results in Figure 46 and Figure 47 show that this jump occurs consistently at the locations of the rounding error, with a constant magnitude of 1 phase value. Hence, increasing the fringe numbers, (λ_1, λ_2), can improve the signal-to-noise ratio of the absolute phase value and diminish the effect of the jump on the absolute phase values. Furthermore, the experiments also show that increasing the fringe numbers, (λ_1, λ_2), increases the range of the absolute phase value, and further improves the 3D measurement accuracy. The 3D measurement improved from a mean RMS error of mm to a mean RMS error of mm when the fringe numbers, (λ_1, λ_2), were increased from 3 and 5 fringes to 23 and 27 fringes.

With the current triangulation and hardware resolution, the designed patterns for projection are limited to 23 and 27 fringes when using the relatively prime multi-wavelength patterns.

Figure 46: Integer absolute phase values obtained using the 3-fringe and 5-fringe patterns

Figure 47: a) Absolute phase values with decimals obtained using 3 and 5 fringes, b) Absolute phase values with decimals obtained using 7 and 11 fringes
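A minimal Python sketch of the number-theoretic unwrapping in Equations (20)-(22) is given below. It assumes the relative phases are already scaled to fringe-index units in [0, λ_i), makes explicit the condition, implicit in the Chinese remainder theorem, that each coefficient is also a multiple of the other fringe number, and follows one reading of the rounding-plus-fraction handling in [93]; the modular inverse uses the built-in pow function of Python 3.8 or later.

import numpy as np

def crt_coefficients(l1, l2):
    # e1 is 1 (mod l1) and 0 (mod l2); e2 is 0 (mod l1) and 1 (mod l2)  [Eqs. (20)-(21)]
    e1 = l2 * pow(l2, -1, l1)
    e2 = l1 * pow(l1, -1, l2)
    return e1, e2

def absolute_phase(phi_r1, phi_r2, l1, l2):
    # Relative phases in fringe-index units: phi_r1 in [0, l1), phi_r2 in [0, l2).
    e1, e2 = crt_coefficients(l1, l2)
    n1, n2 = np.round(phi_r1), np.round(phi_r2)        # integer parts for the CRT
    frac = phi_r1 - n1                                  # fractional part added back
    integer_part = np.mod(n1 * e1 + n2 * e2, l1 * l2)   # Eq. (22) for k = 2
    return integer_part + frac

With λ_1 = 3 and λ_2 = 5, for example, the coefficients are e_1 = 10 and e_2 = 6, and a pixel with relative indices (1, 2) is mapped to the absolute index 7, consistent with 7 mod 3 = 1 and 7 mod 5 = 2.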

The designed patterns were then sent to the projector to perform measurements using the sensory system designed in Chapter 3. Based on the intrinsic and extrinsic parameters of the camera and the projector obtained in Chapter 3, the 3D measurement results for the different fringe numbers were solved through triangulation. A flat plane was measured using the experimental setup mentioned in Section , and the RMS errors in depth (z-direction) within the working range were obtained and compared. Table 7 presents the 3D measurement results of the phase unwrapping technique based on the Modified Number-Theoretic Approach [93] compared to the Active phase unwrapping technique [83] presented in Chapter 3.

Table 7: 3D measurement results using different fringe numbers
Method | Pattern 1 | Pattern 2 | Mean RMS error (mm) | Mean Std (mm)
Modified Number-Theoretic Approach [93] | | | |
Active phase unwrapping technique [83] | | | |

Limited by the current hardware setup, the fringe numbers could be increased to a maximum of 23 and 27, achieving a mean RMS error of mm. The Active phase unwrapping technique mentioned in Chapter 3 produced a better 3D measurement result, with a mean RMS error of mm. However, the experimental results using the Modified Number-Theoretic Approach [93] show that the mean RMS error decreases as the fringe numbers, λ_1 and λ_2, increase. Hence, the relatively prime multi-wavelength phase shifting technique shows great potential for improving the current 3D measurement accuracy of the system when patterns with higher fringe numbers, λ_1 and λ_2, can be implemented.

4.4. Hardware Improvement Investigation

Spatial aliasing in the camera-captured images can lead to mislabeling of the camera pixels. This phenomenon at low camera-projector pixel ratios is known as the quantization effect.

4.4.1. High Resolution Camera Component

A method of improving measurement accuracy is to increase the camera-to-projector pixel ratio in order to increase the sampling frequency of the projected pattern, and hence decrease the effect of quantization error and random noise on the measurement. A high resolution camera (Adimec Quartz Q-4A180) with 2048 x 2048 pixel resolution was implemented in the SL sensory system to replace the Prosilica camera. Table 8 shows the specifications of the two cameras.

Table 8: Camera specifications
 | Prosilica GE680 | Adimec Quartz Q-4A180
Resolution | 640 x 480 | 2048 x 2048
Pixel size | 7.4 µm | 5.5 µm
Sensor type | CCD (Truesense KAI-0340) | CMOS (CMV4000)
Frame rate | 205 fps | 180 fps

The SL sensory system was configured to the same optimal configuration using the higher resolution Adimec camera. The intensity calibration and system calibration were then performed on the SL sensory system to obtain the new intrinsic and extrinsic parameters for 3D measurement. The aforementioned depth (z-axis) experiment, measuring a flat plane placed within the measurement volume, was conducted to determine the 3D measurement performance of the SL sensory system. Figure 48 shows the RMS error in depth (z-axis) for both SL sensory systems.

Figure 48: Comparison of the RMS errors of the SL sensory systems in the Z direction within the 6 mm range

A comparison of the RMS errors for both SL sensory systems is shown in Table 9. The comparison shows that the SL sensory system using the higher resolution Adimec camera provides more accurate measurements. The mean RMS error decreased from mm to mm. The maximum RMS error increased from mm to mm. The increase in the maximum RMS error is a result of the component triangulation: when the plane position is beyond 5 mm, the camera captures a combination of incorrect projector pixels, and hence the measurement result degrades for this triangulation configuration. Further investigation of the triangulation configuration for the new hardware is needed to improve the measurement accuracy.

Table 9: Comparison of the RMS errors
 | Maximum | Minimum | Mean
SL sensory system with Prosilica camera, RMS error | | |
SL sensory system with Prosilica camera, standard deviation | | |
SL sensory system with Adimec camera, RMS error | | |
SL sensory system with Adimec camera, standard deviation | | |

By using a higher resolution camera, the SL system provides more unique corresponding pixels between the camera and projector, and hence improves the measurement accuracy. From the experimental results, using the higher resolution Adimec camera improved the mean RMS error of the 3D measurements of the SL sensory system by 9.9%.

4.5. Overall (X, Y, Z) measurement error compensation

Different sources of error can affect the measurement accuracy of the 3D sensory system, namely imaging errors of the projector and camera, system calibration errors when estimating the intrinsic and extrinsic parameters of the system, and phase errors when using the fringe pattern projection technique. In the SL sensory system, most of these errors are unavoidable, and it is often difficult to isolate the effect of each error on the 3D measurements. Hence, as opposed to identifying and compensating the errors individually, an overall error compensation method that directly compensates the 3D measurements is considered the most effective approach. The following section describes in detail the experiment for the proposed fast and direct method to compensate the 3D measurement results.

In the proposed method, a modified error compensation method based on ref. [95] is implemented and tested. The main objective of the proposed method is to compensate the 3D measurement results based on 3D reference coordinates. As opposed to using a CMM and building the reference coordinates point by point as in ref. [95], our proposed method uses a 2D planar object and a linear stage to build the 3D reference coordinates. A 2D planar object was designed and printed with 12 x 10 circular reference points spaced precisely at mm apart in the X and Y directions, as shown in Figure 49 (a). The 2D planar object was then placed on the aforementioned high-precision linear stage and moved along the z-axis of the world coordinate system at 10 mm increments, covering a depth of 100 mm, to build the 3D reference coordinates. For each circular reference point on the 2D planar object, ellipse fitting and center detection algorithms are used to obtain the image coordinates (u, v) of the circular reference points, as shown in Figure 49 (a). Furthermore, the SL sensory system's 3D measurement at image coordinates (u, v) was obtained through triangulation based on the intrinsic and extrinsic parameters of the camera and the projector.

A LUT of the 3D reference points was then built based on the image coordinates (u, v), the corresponding 3D world coordinates, and the corresponding SL sensory system's 3D measurements.

Figure 49: (a) 2D circle plane object, (b) corresponding 3D world coordinates

In the next step, the relationship (rotation and translation) between the world coordinates (X, Y, Z) and the SL sensory system's coordinates (X_s, Y_s, Z_s) was determined by comparing vectors obtained from the reference points along the x-, y-, and z-axes in the world coordinate system with the vectors obtained from the sensory system's 3D measurements of those reference points. The transformation in Table 10 was built to translate the world coordinates (X, Y, Z) to the SL sensory system's coordinates (X_s, Y_s, Z_s) through the rotation matrix, with columns (u, v, w), and the translation vector, T:

$\begin{bmatrix} X_s \\ Y_s \\ Z_s \\ 1 \end{bmatrix} = \begin{bmatrix} u_x & v_x & w_x & T_x \\ u_y & v_y & w_y & T_y \\ u_z & v_z & w_z & T_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}$    (23)
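The two relations in Equations (23) and (24) translate directly into a few lines of code. The Python sketch below, with NumPy as an assumed convenience and illustrative function names and array shapes, maps the world-frame reference points into the sensor frame and computes the per-point errors that are stored in the LUT.

import numpy as np

def world_to_sensor(points_w, R, T):
    # Eq. (23): rigid transformation of (N, 3) world points into the sensor frame.
    return points_w @ R.T + np.asarray(T)

def lut_errors(points_measured, points_w, R, T):
    # Eq. (24): measured coordinates minus the transformed ground-truth coordinates.
    return points_measured - world_to_sensor(points_w, R, T)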

Table 10: Transformation matrix
 | u | v | w | T (mm)
x | | | |
y | | | |
z | | | |

Figure 50 shows the 1160 reference points in the world coordinate system (X, Y, Z) obtained within the mm x mm x 100 mm volume. The reference points in the world coordinate system (X, Y, Z) were transformed to the sensory system's coordinates (X_s, Y_s, Z_s) through equation (23). For every reference point, the measurement errors, (ε_x, ε_y, ε_z), of the SL sensory system are then obtained by subtracting the ground truth, (X_s, Y_s, Z_s), from the measurement, ($\hat{X}_s$, $\hat{Y}_s$, $\hat{Z}_s$), as shown in equation (24):

$(\varepsilon_x, \varepsilon_y, \varepsilon_z) = (\hat{X}_s, \hat{Y}_s, \hat{Z}_s) - (X_s, Y_s, Z_s)$    (24)

The measurement errors, (ε_x, ε_y, ε_z), of each (X_s, Y_s, Z_s) reference point are then stored in the LUT for error compensation. A sample error vector map at the z = 0 mm position is shown in Figure 51.

Figure 50: Distribution of the reference points

Figure 51: Measurement error vectors at z = 0 mm

In Figure 51, the black points represent the (X_s, Y_s, Z_s) reference points, and the red vectors represent the measurement errors, (ε_x, ε_y, ε_z). The magnitude of the measurement error is represented by the length of the vector. A weighting function similar to that of ref. [95] is implemented. The distances from the measurement point to the eight closest neighboring data points in the LUT, shown in Figure 52, are considered in the weighting function. A compensation value based on the neighboring points' measurement errors is then assigned to the measurement point.

Figure 52: Neighbor points for error compensation

The compensation value s(v) for each measurement point (v_x, v_y, v_z) is determined by assigning a weight, w_i(v), to the measurement errors, ε_i, in the LUT based on the distances to the neighbor points, v_i, and the number of neighbor points, N:

$w_i(v) = \frac{\lVert v - v_i \rVert^2}{\sum_{j=1}^{N} \lVert v - v_j \rVert^2}$    (25)

$s(v) = \sum_{i=1}^{N} w_i(v)\, \varepsilon_i$    (26)

Using equations (25) and (26), the compensation value s(v_x, v_y, v_z) for the measured point is calculated; this compensation value is then subtracted from the measured point (X_s, Y_s, Z_s) to improve measurement accuracy. To verify the compensation method, the aforementioned 2D planar object was placed on the high-precision linear stage and moved along the z-axis of the world coordinate system at 2 mm increments, covering a depth of 10 mm. The sensor coordinate system is aligned to the world coordinate system. At each location along the z-axis, the circular reference points were obtained in the sensor coordinates (X_s, Y_s, Z_s) and compared with the world coordinates to determine the root mean square (RMS) error and standard deviation of the measurements. Table 11 presents the RMS error and the RMS error of the compensated results from all the measured reference points.

Table 11: Compensated results
 | x | y | z
RMS error (mm) | | |
RMS error compensated (mm) | | |
Improvement | 87% | 55% | 28%

The error compensation method discussed above uses a 2D planar object and a linear stage to compensate the 3D measurement results of the SL sensory system. Compared to other methods that use expensive calibration equipment, i.e., a CMM or 3D calibration objects, this method provides a fast and simple way to compensate the RMS error of the 3D measurement results. From the compensated results in Table 11, the RMS errors in all three axes were decreased by 28% or more.
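The compensation step can be summarised by the short Python sketch below. It applies Equations (25) and (26) exactly as printed, normalising the weights over the eight nearest LUT points; note that an inverse-distance weighting (closer neighbours weighted more heavily) is the more common choice and may be what ref. [95] intends, so the weighting line is the main assumption here.

import numpy as np

def compensate_point(v, lut_points, lut_errors, k=8):
    # Squared distances from the measured point to every LUT reference point.
    d2 = np.sum((lut_points - v) ** 2, axis=1)
    nearest = np.argsort(d2)[:k]                          # the eight closest neighbours
    w = d2[nearest] / d2[nearest].sum()                   # Eq. (25) as printed
    s = (w[:, None] * lut_errors[nearest]).sum(axis=0)    # Eq. (26)
    return v - s                                          # subtract the compensation value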

4.6. Chapter summary

In this chapter, an extensive analysis of the performance of the proposed SL sensory system was presented. Experiments were conducted to evaluate the measurement errors of the designed 3D sensory system for macro-scale applications. The measurements demonstrate the proposed SL sensory system's ability to obtain 3D information of complex parts. In addition, the errors of the system were identified, and compensation methods to improve the measurement accuracy of our 3D sensory system were investigated and implemented. A fast and simple way to compensate the 3D measurement error of the SL sensory system, using a designed 2D planar object and a linear stage, was implemented and analyzed. From the experimental results, the error compensation method was shown to decrease the RMS errors in all three axes by 28% or more. Overall, the experimental results presented in this chapter show promise for the use of the proposed 3D sensory system for high accuracy part measurement.

Chapter 5
Development of a Structured Light Sensory System for Micro-Scale Applications

In this chapter, the development of a high resolution 3D shape measurement system for micro-scale applications is presented. The chapter presents the hardware and software development for the sensory system and further describes the proposed calibration procedure for the system. The overall 3D sensory system framework is shown in Figure 53.

Figure 53: 3D Sensory System Framework for Micro-Scale Application

5.1. Overview of the SL sensory system for micro-scale applications

Micro-scale complex parts commonly refer to parts that are sub-millimeter in size and have non-uniform profiles. In this research, the objective is to design a SL sensory system that is able to measure micro-scale complex parts that are 0.5 mm x 0.5 mm with non-uniform profiles and achieve a measurement accuracy of 0.1 µm. To design a SL sensory system for micro-scale applications, microscope lenses are needed to de-magnify and image the patterns from the projection chip to the desired dimensions, and project them onto the part's surface.

In order to capture the deformed micro-scale patterns with the highest accuracy, suitable microscope lenses are mounted onto the camera. The image capturing process is controlled by the frame grabber, and the captured images are transmitted to the computer for image processing. Once the relationships between the components (camera, projector, and reference plane) are obtained, the part's 3D coordinates can be determined through triangulation. To achieve the highest measurement accuracy for the part size defined in this research, a novel analytical calibration method for SL systems using microscope lenses to accurately measure the 3D surface profiles of complex micro-scale parts is proposed. The method includes a novel calibration model which explicitly considers the microscope lens parameters for the hardware components (camera and projector), and addresses the narrow DOF behaviour of these lenses. The latter is achieved by incorporating an image focus fusion technique.

The rest of this chapter is organized as follows. Section 5.2 introduces the selection of hardware components and the optical design of the system setup for micro-scale applications. Section 5.3 presents the additional optical designs of the SL system to address the limitations related to diffraction, lens magnification, noise, and vibration. Section 5.4 presents the SL sensory system calibration model and calibration technique. Section 5.5 presents the SL sensory system measurement technique for micro-scale applications. Finally, Section 5.6 summarizes the chapter.

5.2. Selection of Hardware Components and Optical Designs

5.2.1. Projector Components

The developed 3D SL sensory system for micro-scale applications uses a DLP projector (Texas Instruments Inc. DLP Light Commander Projector) for projecting the designed patterns. The DLP projector has a resolution of 1024 x 768 pixels and a brightness of 200 ANSI lumens. The projector is equipped with a 0.55" XGA Digital Micromirror Device (DMD) chip, shown in Figure 54, with a pixel pitch of 10.8 µm and a dimension of mm x mm. With the 0.55" XGA DMD chip, the projector is able to generate 8-bit grayscale (256 gray levels) images [96].

Figure 54: DLP 0.55 XGA Series 450 DMD

In order to de-magnify and image the patterns from the DMD chip of the projection component onto the part's surface to generate micro-scale patterns, a microscope lens is needed. To select the appropriate microscope lens to use with the projector and satisfy the design objective, the projector pixel size (10.8 µm) and resolution (1024 x 768 pixels) were taken into consideration.

Figure 55: Projected pattern demagnification (DMD chip imaged onto the 0.5 mm x 0.5 mm object)

Here, it is assumed that the light directed onto the DMD chip is projected out onto the measured part. In order to project the designed pattern, with a DMD pixel pitch, du_p, of 10.8 µm, onto micro-scale parts with a part height, d_h, of 0.5 mm and a part width, d_v, of 0.5 mm, the 768-pixel DMD dimension (768 x 10.8 µm, about 8.29 mm) must be reduced to the 0.5 mm part dimension; this requires a projector lens with a demagnification power, m_p, of approximately 16.6, calculated with equation (27):

$m_p = \frac{768 \cdot du_p}{d_v}$    (27)

In SL sensory systems, projectors are commonly treated as inverse cameras [97][51][32]. Hence, to select a modular microscope lens system that can be configured to fit the projector while satisfying the objective, different high-magnification camera microscope lenses were investigated. The following parameters were examined to determine compatibility with the DLP projector while satisfying the objective: lens back focal distance, required sensor specification (i.e., sensor type, pixel size, and resolution), lens magnification, lens numerical aperture, lens smallest resolvable feature, lens working distance, lens field of view, and lens depth of field. The microscope lens setup from Navitar Inc., which consists of a 6.5X zoom microscope lens ( A), a 2.0X tube lens, and a 2X adapter, was selected to achieve the defined demagnification (16.6X). Additionally, the selected Navitar microscope lens setup produces a range of 2.8X - 18X magnification at a desirable large working distance of 36 mm when coupled to the DLP projector. With the large numerical aperture (NA ) and high magnification (2.8X to 18X), the system is capable of projecting patterns de-magnified by 16.6 times to achieve a smallest labelling pixel of 0.65 µm x 0.65 µm. Table 12 provides the specification of the microscope lens setup for the projector.

Table 12: Specification of the microscope lens setup for the projector
Lens components | 20X Mitutoyo objective lens, 6.5X zoom microscope lens, 2.0X tube lens, 2.0X adapter
Overall magnification | 2.8X - 18X
Numerical aperture | NA
Working distance (mm) | 36
Resolving power (µm) | 1.2
Field of view | 2.65 mm x 4.47 mm to 0.41 mm x 0.7 mm
Depth of focus (µm) |
Mounting threads | M26 x 36TPI
Projector mount | F mount
Camera sensor | 8 mm

5.2.2. Camera and Frame Grabber Components

In order to select the hardware component for capturing the projected micro-patterns, a thorough camera comparison was performed. A primary selection of commercially available cameras providing high resolution, high frame rate, small pixel size, and large sensor size was investigated. Their secondary features, such as sensor type, number of taps, and communication protocol, were then investigated. The goal of the camera selection was to determine the most suitable camera that can capture the DLP projector's pattern images with the least amount of error. The parameters considered for calculating the error include: dynamic range, signal-to-noise ratio, quantum efficiency (visible light wavelengths), full well capacity, dark noise, and readout noise. The parameters, based on [88], are described below.

Dynamic range: the ratio of the maximum to minimum intensities that the camera can measure. A higher dynamic range is preferred in order to have more bits representing a particular light intensity level and to accommodate capturing images at both bright and dark levels.

Signal-to-noise ratio (SNR): a useful way of comparing the relative amounts of signal and noise for any electronic system. A high ratio means very little noise effect.

Quantum efficiency: describes the sensor's efficiency at producing electrons when absorbing photons at different wavelengths. Higher quantum efficiency is preferred, with the ideal case being that every absorbed photon produces an electron.

Full well capacity: the maximum number of electrons that a pixel can hold. It is important for the full well capacity to be high, especially when working with bright light. Undesirable effects such as blooming may occur if the full well capacity is low, due to charge leakage to adjacent pixels.

Dark noise: the electrons that are generated in the absence of light as a result of heat produced in the camera sensor itself. The dark noise varies with time and temperature.

Readout noise: when a photon is detected by a pixel in the sensor, the pixel information is transferred through a readout structure, where charge is converted to voltage and amplified prior to digitization in the analog-to-digital converter (ADC) of the camera. Noise present in the circuit during this conversion process leads to readout noise.

Based on the above factors, a relationship between the camera and projector was established by determining the smallest unit of intensity the camera can capture, e_Cmin. The sensitivity of the camera for the SL sensory system, e_Cmax/P, is then determined by the ratio between the camera's captured electrons and the projected electrons per gray level:

$e_{Cmin} = \frac{e_F - \varepsilon_D - \varepsilon_R}{2^{b}}$    (28)

$e_{Cmax/P} = \frac{e_F - \varepsilon_D - \varepsilon_R}{I_P}$    (29)

Here, ε_D is the dark noise of the camera, ε_R is the readout noise of the camera, e_F is the full well capacity of the camera in electrons, b is the bit depth of the camera's analog-to-digital converter, and I_P is the number of gray levels of the projector. The error effect of the hardware specification, ε_max, can then be determined from the relationship between the component noise and e_Cmax/P:

$\varepsilon_{max} = \frac{\varepsilon_D + \varepsilon_R}{e_{Cmax/P}}$    (30)

Finally, the error, ε_max(λ), in capturing the projected pattern images with the selected camera, considering the quantum efficiency of the capturing device, η(λ), is determined:

$\varepsilon_{max}(\lambda) = \frac{\varepsilon_{max}}{\eta(\lambda)}$    (31)

Based on these parameters, six suitable cameras are compared in Table 13. The Adimec camera (Adimec Quartz Q-4A180), with a 5.5 µm x 5.5 µm pixel size and 2048 x 2048 pixel resolution, has the least error effect, ε_max(λ), when capturing the projected pattern images, and hence it was selected for the SL sensory system.
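Because Equations (28)-(31) are reproduced here from a degraded copy, the small Python sketch below should be read only as one plausible way of turning them into a ranking score for candidate cameras; the formula layout, the zero dark-noise value in the example call, and the 0.45 quantum efficiency are assumptions, not vendor data.

def camera_error_metric(full_well, dark_noise, read_noise, adc_bits,
                        projector_levels, quantum_eff):
    # Smallest capturable intensity step and per-gray-level sensitivity (Eqs. 28-29).
    e_cmin = (full_well - dark_noise - read_noise) / 2 ** adc_bits
    e_cmax_p = (full_well - dark_noise - read_noise) / projector_levels
    # Hardware error effect and its wavelength-dependent form (Eqs. 30-31).
    eps_max = (dark_noise + read_noise) / e_cmax_p
    return e_cmin, eps_max, eps_max / quantum_eff

# Example with the Adimec Q-4A180 figures of Table 14 (dark noise and QE assumed):
# camera_error_metric(13_500, 0, 13, 10, 256, 0.45)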

Table 13: Camera comparison
Company / Name | Resolution | Fps | Dynamic range (dB) | SNR | Full well capacity (e-) | Noise (e-) | ε_max | ε_max(λ) | Rank
Allied Tech Bonito CN | x 1726 | | | | | | | 130.5% |
Adimec Q-4A180 | 2048 x 2048 | | | | | | | 89.5% |
Basler acA2040-180k | 2048 x | | | | | | | 95.4% |
Point Grey GZL-CL-41C6M-C | 2048 x | | | | | | | 127.0% |
IO Industries Flare 4M180NCL | 2048 x 2048 | | | | | | | 124.3% | 3

Figure 56: Adimec Quartz Q-4A180 [98]

Figure 57: 1" CMV4000 CMOS sensor [99]

The selected camera is equipped with a CMOS sensor with a 5.5 µm x 5.5 µm pixel size and a resolution of 2048 x 2048 pixels, shown in Figure 57. The camera can capture images at a frame rate of 180 fps. The video interface of the Adimec camera is the Camera Link interface. Camera Link is a computer vision serial communication protocol designed to reduce the number of wires in cables and to offer a high data transfer rate [100]. Camera Link currently outperforms other interfaces, such as the GigE Vision and IEEE Ethernet interfaces, with its high data transfer rate (2 Gbit/s for the lower resolution Base configuration and 6 Gbit/s for the high resolution Full configuration). With the high resolution and high bit depth of the Adimec camera, two Camera Link cables are required to transfer the high resolution Full configuration data at 6 Gbit/s. Additional camera specifications are shown in Table 14. Furthermore, from Figure 58 we can see that the camera produces highly quantum-efficient (40%), low-noise images when utilizing visible light wavelengths of 450 nm to 750 nm.

Table 14: Adimec Quartz Q-4A180 camera specifications [98]
Sensor | Global shutter CMOS CMV4000, optical size: 1"
Resolution | 2048 (H) x 2048 (V)
Pixel size | 5.5 x 5.5 µm
Interface choices | Camera Link
Max sustained frame rate | 180 frames per second at 10-tap 8-bit Camera Link configuration
Readout noise | 13 e-
Full well capacity | 13.5 ke-
Dynamic range | 60 dB linear; 90 dB in HDR mode
A/D converter | 10 bit
Sensitivity at sensor surface | 4.64 V/lux·s; 0.22 A/W
Triggering | Internal or external
Dimensions | 80 mm L x 41 mm W x 80 mm H
Weight | 400 g

Figure 58: Spectral response curve of the Adimec Quartz Q-4A180 camera [98], with the captured visible-light zone indicated

In order to achieve the highest resolvable accuracy (sub-micrometer), the camera needs to be able to resolve the finest sub-micrometer detail that is being projected. Theoretically, in optical imaging systems, the highest resolvable feature is limited by the diffraction limit. The diffraction limit occurs when the light rays disperse while passing through the small opening of the lens numerical aperture.

When the lens numerical aperture is too small, light rays of different wavelengths travel through the opening at different distances and start to diverge and interfere with each other. The diffraction limit results in a circular pattern, shown in Figure 59 and Figure 60, called the Airy disc [38].

Figure 59: Airy disk 2D [38]

Figure 60: Airy disk 3D [38]

When the diameter of the Airy disk's central peak becomes large relative to the pixel size of the camera (or the maximum tolerable circle of confusion), it begins to have a visual impact on the image [38]. Once two Airy disks become any closer than half of their width, they are no longer distinguishable, hence degrading the smallest resolvable feature. An optical system with the ability to produce images with resolution as good as the theoretical limit is said to be diffraction limited [38]. For an ideal lens, the smallest resolvable feature of the lens system, also known as the circle of confusion, CoC, depends on the numerical aperture of the lens, NA, and the wavelength of light being captured, λ [38]:

$CoC = \frac{\lambda}{2(NA)}$    (32)

For a common microscope camera system, the size of the smallest resolvable feature is proportional to the wavelength of the light being observed and inversely proportional to the objective lens numerical aperture. Hence, to achieve the smallest resolvable feature, the commercially available large numerical aperture of 0.42 was chosen. Based on Equation (32), the smallest resolvable feature using visible light with wavelengths, λ, between 450 nm and 750 nm and the selected numerical aperture of 0.42 was calculated to be approximately 0.54 µm to 0.89 µm. In order to capture the projected micro-patterns with a smallest resolvable feature of approximately 0.54 µm, a specially designed microscope lens setup is required.

To select the appropriate microscope lens for the camera, the camera pixel size, du_c, the smallest resolvable feature, CoC, and a Nyquist factor of two were taken into consideration. To capture the smallest resolvable feature of approximately 0.54 µm, a microscope lens with a magnification power of 20.5X for the camera was calculated:

$m_c = \frac{2\, du_c}{CoC}$    (33)

Based on the calculated magnification power, the appropriate microscope lens setup that can achieve the defined magnification (20.5X) while being compatible with the selected Adimec Quartz Q-4A180 camera was selected. The following parameters were examined when determining compatibility: lens back focal distance, camera sensor specification (i.e., sensor type, pixel size, and resolution), lens numerical aperture, lens smallest resolvable feature, lens magnification, lens working distance, lens field of view, lens depth of field, and lens connection types. The microscope lens setup from Navitar Inc., which consists of a 20X Mitutoyo infinity-corrected long working distance microscope objective lens, a 6.5X motorized UltraZoom microscope lens ( ), a 1.0X tube lens (1-6015), and a lens motor controller, was selected. The Navitar microscope lens setup produces 6.96X to X magnification at a long microscope working distance of 20 mm when coupled to the camera. The final system setup for the camera is shown in Figure 61. Table 15 provides the specification of the microscope lens setup for the camera.

Figure 61: Camera-microscope lens setup (Navitar 1X microscope tube lens, 20X Mitutoyo Plan Apo infinity-corrected long WD objective, Navitar 6.5X motorized UltraZoom microscope lens, and Adimec Quartz Q-4A180 camera)
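Equations (32) and (33) amount to two one-line calculations. The Python sketch below reproduces them and the rough numbers quoted above; the 450-750 nm range, the 0.42 numerical aperture, and the 5.5 µm pixel are taken from this chapter, and everything else is just arithmetic.

def circle_of_confusion_um(wavelength_nm, numerical_aperture):
    # Eq. (32): diffraction-limited smallest resolvable feature, in micrometres.
    return wavelength_nm / (2.0 * numerical_aperture) / 1000.0

def required_magnification(pixel_size_um, coc_um):
    # Eq. (33): the feature must span two camera pixels (Nyquist criterion).
    return 2.0 * pixel_size_um / coc_um

# circle_of_confusion_um(450, 0.42) is about 0.54; circle_of_confusion_um(750, 0.42) is about 0.89
# required_magnification(5.5, 0.536) is about 20.5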

Table 15: Specification of the microscope lens setup for the camera
Lens components | 20X Mitutoyo objective lens, 6.5X motorized UltraZoom microscope lens, 1.0X tube lens
Overall magnification | 6.96X - X
Numerical aperture | NA
Working distance (mm) | 20.0
Resolving power (µm) |
Field of view | 1.2 mm x 1.6 mm to 0.22 mm x 0.28 mm
Depth of focus (µm) | 3 - 10
Mounting threads | M26 x 36TPI
Camera mount | C mount
Camera sensor | 1"

In order to communicate with the Adimec camera at 10-bit output resolution and the Full Camera Link configuration, the microEnable IV AD4-CL frame grabber from Silicon Software, shown in Figure 62, was selected. The microEnable IV AD4-CL is a dual-port Camera Link frame grabber that can communicate with two independent Base-configuration cameras or a single Medium/Full-configuration Camera Link camera, and its DMA can provide a high transfer rate [100]. With the programmable acquisition function, the camera and the microEnable IV frame grabber can be easily configured to the Full Camera Link acquisition mode for colour or monochrome acquisition.

Figure 62: Silicon Software microEnable IV AD4-CL [101]

Camera-Projector Synchronization

The microcontroller board used to synchronize the projector and the frame grabber is a Digilent chipKIT Max32 controller board, shown in Figure 63. The Max32 controller board takes advantage of the powerful PIC32MX795F512 microcontroller.

This microcontroller features a 32-bit MIPS processor core running at 80 MHz, 512 KB of flash program memory, and 128 KB of SRAM data memory. In addition, the processor provides a USB 2.0 OTG controller, a 10/100 Ethernet MAC, and dual CAN controllers that can be accessed via add-on I/O shields. The controller board is used to send trigger pulse signals to both the projector and the camera for synchronization at 160 fps.

Figure 63: Digilent chipKIT Max32 controller board

To obtain proper synchronization, the parameters of the components (camera, projector, and microcontroller), shown in Table 16, need to be considered.

Table 16: Camera-projector synchronization parameters
Camera parameters | Projector parameters | Microcontroller
Camera integration time | Projector illumination time | Triggering delay time
Camera frame period | Projector illumination power |
Camera frame rate | Projector frame rate |

5.3. Error Identification and Compensation

In this section, experiments were conducted on the designed SL system components (camera, projector) to address limitations related to diffraction, lens magnification, noise, and vibration, and lens modifications are proposed.

5.3.1. Projector Component

The experiments for the projector component are presented below.

Vertical Offset Angle

Chapter 4 showed that the Light Commander has a vertical offset angle of 7 degrees when projecting patterns. The optical schematics from Texas Instruments' Light Commander projector report [87], shown in Figure 64 and Figure 65, show that the Light Commander DLP projector was designed according to a non-telecentric architecture. With the non-telecentric architecture, the projector requires fewer optical components and hence has fewer optical element losses and a lower production cost. The non-telecentric architecture has the exit pupil of the illumination path located at a short distance from the component, and some degree of vertical projection offset is required to increase the contrast while providing more angular separation of the illumination path from the projection path, as opposed to the telecentric architecture shown in Figure 66. According to the Light Commander projector report [87], the vertical offset must increase as the aperture size of the projector lens decreases in order to physically separate the illumination and projection optics.

Figure 64: Tilting effect of the Light Commander [87]

Figure 65: Non-telecentric architecture [87]

Figure 66: Telecentric architecture [87]

As shown in Chapter 3, the vertical offset angle from the non-telecentric architecture introduced a shift in the principal point estimation and further caused an error in estimating the system calibration parameters. When the non-telecentric architecture is coupled to the selected microscope lens setup, the patterns are not projected properly because the vertical projection offset is too large for the small aperture of the microscope lens. Hence, to compensate for the effect of the offset angle, an approach based on using a tilt-shift lens [97] was implemented. The approach [97] uses a tilt-shift lens to adjust the vertical tilting angle of the projection path and further focus it into a microscope lens. A similar tilt-shift lens, the Nikon 45 mm f/2.8D tilt-shift lens, shown in Figure 67, was selected. The Nikon 45 mm f/2.8D tilt-shift lens provides a tilting adjustment range of ±8.5 degrees.

Figure 67: PC-E Micro NIKKOR 45 mm f/2.8D

With the selected tilt-shift lens implemented, the vertical tilting angle (7 degrees) of the projector was successfully corrected with a tilting angle of -7 degrees from the tilt-shift lens. However, the selected tilt-shift lens introduced magnification and Scheimpflug effects in the projected image when coupled to the microscope lens setup.

Lens Magnification and Diffraction

With the additional Nikon 45 mm f/2.8D tilt-shift lens, the selected 2X tube lens shown in Table 12 had to be removed: it was designed for the F-mount of the projector with a back focal distance of 46 mm, and it is therefore incapable of focusing the rays coming from the image plane of the tilt-shift lens. Hence, an additional de-magnifying technique is required at the output of the tilt-shift lens in order to de-magnify and focus the projection image from the tilt-shift lens into the microscope lens. In [102], a macrophotography technique using a reverse-lens setup on a microscope system to provide a focused output image with the desired magnification between the DMD chip and the projection patterns was proposed. Based on this technique [102], several reverse-lens setups were implemented and tested in order to de-magnify and focus the projected image from the tilt-shift lens into the microscope lens to achieve the desired image size.

Figure 68: Schematic of the reverse-lens setup [102]

The de-magnification, m, of the reverse-lens setup between the DMD and the image plane, shown in Figure 68, was calculated considering the tilt-shift lens focal length as the front focal length, f_F, and the reversed lens's focal length as the back focal length, f_B [102]:

$m = \frac{f_F}{f_B}$    (34)

In order to achieve the same 2X demagnification, m, as the removed tube lens, the required back focal length, f_B, is calculated to be approximately 22.5 mm. The two most feasible configurations using commercially available lenses are presented in Figure 69 (a) and Figure 70 (a). In the first configuration, shown in Figure 69 (a), the tilt-shift lens was coupled to a reversed Nikon camera lens (NIKKOR 28 mm f/2.8) and further coupled to the microscope lens (Navitar Zoom 6000). In this configuration, the reverse-lens setup has a de-magnification factor of 1.6X. However, diffraction within the lens setup caused interference to occur and introduced fringe lines on the projected pattern, shown in Figure 69 (b). The diffraction occurs when a small-aperture reverse-mounted lens is used to converge the output rays from the tilt-shift lens. In the second configuration, shown in Figure 70 (a), the tilt-shift lens was coupled to a reversed larger-aperture Fujinon CCTV lens (Fujinon 25 mm f/1.4) and further coupled to the microscope lens (Navitar Zoom 6000). In this configuration, the reverse-lens setup has a de-magnification factor of 1.8X. Furthermore, the larger aperture, f/1.4, of the reversed lens prevents diffraction from occurring on the projected pattern, shown in Figure 70 (b).
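Equation (34) and the two candidate configurations reduce to simple focal-length ratios; the following lines, using the focal lengths quoted above, are only a worked check of that arithmetic.

def reverse_lens_demagnification(front_focal_mm, back_focal_mm):
    # Eq. (34): tilt-shift focal length over the reversed lens focal length.
    return front_focal_mm / back_focal_mm

# reverse_lens_demagnification(45, 28)   -> 1.6x  (configuration 1)
# reverse_lens_demagnification(45, 25)   -> 1.8x  (configuration 2)
# reverse_lens_demagnification(45, 22.5) -> 2.0x  (the removed tube lens target)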

Figure 69: a) Microscope lens setup configuration 1, b) projected pattern

Figure 70: a) Microscope lens setup configuration 2, b) projected pattern

Therefore, further experiments were conducted based on configuration 2, shown in Figure 70.

Scheimpflug Effect

The Scheimpflug effect is a phenomenon that occurs when the projecting plane is not parallel to the image focus plane [103]. It causes the image focus plane to be tilted and the DOF of the system to be unevenly distributed. With the current microscope lens setup, shown in Figure 70 (a), the image focus plane is tilted with an uneven distribution of DOF, as shown in Figure 71.

Figure 71: Scheimpflug effect schematics (tilted image focus plane, lens, DMD, projecting plane, Light Commander, and Scheimpflug intersection)

In order to measure the focused range of the DOF and determine the tilting angle caused by the Scheimpflug effect, the normalized gray-level local variance method implemented in [78] was utilized. A flat plane was placed on the aforementioned high-precision linear stage, a series of horizontal lines was projected from the top of the image to the bottom, and the lines were captured with the camera-microscope setup. The normalized gray-level local variance method was then used to determine the location of the most focused lines. Based on this focus analysis, the back support of the projector was raised by 8 cm through empirical experiments, as shown in Figure 72.

Figure 72: Microscope lens setup with the Scheimpflug effect corrected (projector back support raised by 8 cm)

With the Scheimpflug-corrected microscope lens setup, shown in Figure 72, the image focus plane is evenly distributed, and hence the DOF is uniform, as shown in Figure 73.

Figure 73: Scheimpflug effect corrected schematics (image focus plane parallel to the projecting plane)

The microscope lens setup compensating for the vertical offset angle, the lens magnification and diffraction, and the Scheimpflug effect is shown in Figure 74 below:

Figure 74: Schematic of the reverse-lens component setup (DLP projector, Nikon 45 mm f/2.8D tilt-shift lens, reversed 25 mm CCTV lens, and Navitar microscope lens)

5.3.2. Camera Component

For the camera component, multiple camera noise sources were investigated, and compensation procedures for dark current noise, shot noise, and read noise were performed. Furthermore, a flat field correction to correct the pixel non-uniformity behaviour was performed through the Adimec camera's built-in function.

Dark current noise is commonly caused by thermally generated electrons that build up in the pixels of the camera sensor [88]. The rate of dark current accumulation depends on the temperature of the camera sensor. Over a long operation period, the dark current noise will eventually fill every pixel in a camera sensor. To compensate for the dark current noise, completely dark images were captured at each exposure time and the intensities were recorded into a LUT. At each exposure time, the true captured images were obtained by subtracting the dark noise images stored in the LUT.

Shot noise is caused by the random arrival of photons [88]. The fundamental nature of light causes each photon to arrive at a different time; the arrival time of any given photon cannot be precisely predicted. To compensate for the shot noise in the images during the measuring process, multiple (ten) images of the same projected pattern were captured and averaged for further processing.
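The dark-current and shot-noise procedures described above map onto a few array operations; the Python sketch below is a minimal illustration of those two steps, with the dictionary-based dark-frame LUT and the ten-image averaging taken from the text and everything else (names, data types) assumed.

import numpy as np

def subtract_dark_frame(raw_image, dark_lut, exposure_ms):
    # Dark-current correction: subtract the stored dark frame for this exposure time.
    return raw_image.astype(np.float64) - dark_lut[exposure_ms]

def average_captures(images):
    # Shot-noise suppression: average the repeated captures of the same pattern.
    return np.mean(np.stack(images, axis=0), axis=0)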

Readout noise is normally a combination of all the on-chip noise [88]. Camera manufacturers typically combine all of the on-chip noise sources and express this noise as a number of RMS electrons, as shown in Table 13. The calibration procedure for readout noise was performed during measurement, where multiple (ten) images of the same projected pattern were taken. A readout noise mask is then determined by subtracting the average of the ten images from the first image. Finally, the readout noise mask is used during image processing to determine the measurement error.

Lastly, pixel non-uniformity is often caused by manufacturing defects such as sensor artifacts, lens artifacts, and illumination artifacts (shading), where a constant error pattern occurs in the captured image [88]. A flat field correction can be performed to improve the image uniformity and the performance discrepancies between pixels with respect to the exposure time of the camera. The on-board camera flat field correction is executed by performing a combination of dark field, bright field, and pixel-level gain corrections. Before flat field correction, a shading effect can be seen at the corners of the image, Figure 75 (a). The image after flat field correction is shown in Figure 75 (b).

Figure 75: a) Image before flat field correction, b) Image after flat field correction

5.3.3. System Vibration

Considering the micro-scale resolution of the SL sensory system, any vibrations in the system or the environment can introduce errors in the measurement results. Hence, vibration is a major source of error in measurement systems for micro-scale applications.

The designed SL system was placed on a passive damping optical table in order to isolate common vibration sources, i.e., foot traffic, building services, building motion, etc. To determine the influence of system vibration, a vertical line of one pixel width and a horizontal line of one pixel height were projected and captured, and the experiment was repeated for fifty trials. The vibrations in both directions are shown in Figure 76 (a), (b).

Figure 76: a) Horizontal vibration, b) Vertical vibration

From Figure 76, the horizontal vibration has an influence of ±2 pixels in the system, and the vertical vibration has an influence of ±3 pixels in the system. Further experiments were conducted in order to: a) identify the sources of vibration, b) determine the vibration frequency and amplitude of each source, and c) perform vibration compensation. To identify the vibration sources, an accelerometer was placed on the microscope lens setup of the camera, on the projector, and on the optical table to measure the vibration of each over a 30 minute time period. The projector was measured to be the main source of vibration, and therefore additional experiments were performed to determine the sources of the projector vibration.

Figure 77: Vibration experiment setup

For the projector setup, Sorbothane damping sheets, with the specifications in Table 17, were applied under the projector to dampen the effect of the projector's vibration on the camera.

Table 17: Sorbothane damping sheet specifications
Load capacity (set of four) | 44 to 70.4 lbs (20 to 32 kg)
Resonant frequency | 15 Hz
Transmissibility at resonance | 5.6 dB (1.914 ratio)
Durometer | 50
Dimensions | Ø1.5" x 1.0" (Ø38.1 mm x 25.4 mm)

Figure 77 shows the vibration experiment setup, where the projector enclosure was removed and the different components (fan 1, fan 2, power source) were individually isolated for the experiment. Using the experimental setup in Figure 77, it was shown in Figure 78 that there is a consistent ground vibration (i.e., environment, room) with a frequency of 20-40 Hz and an amplitude of 6x10^-5 m. Furthermore, the main contributors to the vibration of the projector were identified to be the operating fans of the optical module, the illumination module, and the power supply module. The vibration effect from the power supply module was eliminated by isolating the power supply module from the projector. From the measurements shown in Figure 79, it was shown that fan 2 vibrates at high frequencies of 160 Hz, 325 Hz, and 370 Hz, with a small amplitude of 1x10^-5 m.

The reason for the small vibration amplitude is that the manufacturer, Logic PD, secured fan 2 using passive damping materials. From the measurements shown in Figure 80, it was shown that fan 1 has the highest vibration amplitude, 9x10^-5 m, and vibrates at 257 Hz and 465 Hz.

Figure 78: Environment (ground) vibration

Figure 79: Vibration of the projector from fan 2 (ground vibration plus additional noise from fan 2)

Figure 80: Vibration of the projector from fan 1 and fan 2 (ground vibration plus noise from fan 1 and fan 2)

To dampen the vibration from fan 1, a foam structure was used to hold fan 1 in position, as shown in Figure 81. With the damping of fan 1, the vibration was reduced from 9x10^-5 m at 260 Hz and 5x10^-5 m at 470 Hz to 1x10^-5 m at 260 Hz and 1x10^-5 m at 470 Hz, as shown in Figure 82.

Figure 81: Foam structure for fan 1

Figure 82: Vibration of the projector with fan 1 compensated (ground noise, damped fan 1, and noise from fan 2)

To determine the vibration of the damped projector, the vertical line of one pixel width and the horizontal line of one pixel height were re-projected and captured, and the experiment was repeated for fifty trials. The vibrations in both directions are shown in Figure 83 (a), (b). From Figure 83, the vertical vibration improved to ±1 pixel in the system.

Figure 83: a) Horizontal vibration of the damped system, b) Vertical vibration of the damped system

The maximum displacement of the projector, dx, can be calculated considering the acceleration, a, in m/s², and the vibration frequency, f, in Hz:

$dx = \frac{a}{(2\pi f)^2}$    (35)

The maximum displacement of the projector introduced by the ground noise was calculated to be µm. The maximum displacement of the projector introduced by fan 1 was calculated to be 0.57 µm, the maximum displacement introduced by fan 2 was calculated to be 0.23 µm, and the maximum displacement of the damped projector was calculated to be µm. Considering the smallest labelling pixel of 0.65 µm x 0.65 µm for the projector, the vibration in the projector was calculated to introduce a 0.26 pixel error in the projector lens, hence introducing a 39% error in the labeling of the features.
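Equation (35) is a direct conversion from an acceleration amplitude at a known frequency to a peak displacement. The short Python sketch below performs that conversion and expresses the result as a fraction of the 0.65 µm projector labelling pixel, in the spirit of the pixel-error figures quoted above; the specific acceleration values passed in would come from the accelerometer measurements.

import math

def displacement_um(acceleration_m_s2, frequency_hz):
    # Eq. (35): peak displacement in micrometres for a sinusoidal vibration.
    return acceleration_m_s2 / (2.0 * math.pi * frequency_hz) ** 2 * 1e6

def pixel_error(displacement_um_value, labelling_pixel_um=0.65):
    # Fraction of one projector labelling pixel displaced by the vibration.
    return displacement_um_value / labelling_pixel_um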

5.4. SL Sensory System Calibration

In this section, a novel analytical calibration method for SL systems using microscope lenses to accurately measure the 3D surface profiles of complex micro-scale parts is proposed. The system calibration consists of: a) a novel calibration model which explicitly considers the microscope lens parameters for the hardware components (camera and projector), and b) a novel calibration technique to obtain in-focus 2D images of the 3D world coordinates in the micro-domain, considering the narrow DOF limitation of the microscope lenses, in order to solve the proposed model of the hardware components.

5.4.1. Background

SL sensory systems utilize a projector, a camera, and their corresponding microscope lenses to capture the deformations of a known pattern projected onto a micro-scale part in order to obtain its 3D surface profile. In analytical calibration, the models for the components consider the intrinsic parameters (focal length, principal point, and lens distortion) and the extrinsic parameters (relative orientation and position of the camera and projector). The models are then solved to obtain the intrinsic and extrinsic parameters. With the intrinsic and extrinsic parameters of the components (camera, projector), the world coordinates for accurate three-dimensional measurements can be obtained. In order to develop an SL system that can accurately measure micro-scale 3D parts with large, complex variations in their geometrical shapes, an analytical measurement method using a model-based calibration approach is needed.

5.4.2. Proposed Analytical Calibration Parametric Model

As mentioned in Chapter 2, the mathematical models of the analytical calibration methods used for describing SL systems consist of two parts: a camera model and a projector model. The camera and projector are calibrated separately to avoid the effect of camera calibration error on the projector calibration. Code words are commonly assigned to establish the pixel-to-pixel correspondence between the projector sensor and the camera sensor. The image coordinates of the 3D calibration points are then mapped from the camera sensor to the projector sensor to form the projector's view, and hence the projector is treated as a reverse camera [52][71][72][73][70].

[52], [71], [72], [73], [70]. The projector is modelled as a camera, and calibrated as a camera by using the calibration points in the projector's view. With the components (camera, projector) coupled to optical microscope lens setups, shown in Figure 61 and Figure 74, additional optical microscope lens parameters are included in the models of the components. Modern microscope lens setups are usually equipped with infinity-corrected objectives: light emerging from these objectives is focused to infinity, and a second lens, known as a tube lens, creates the image at its focal plane and projects it onto the sensor plane (a CCD array) where a real image is formed [104]. For the purposes of geometrical calibration, the common microscope model demonstrated in the parametric microscope model [65], shown in Figure 84, is used for modelling the components.

Figure 84: Microscope lens optics

In the proposed analytical calibration parametric model, the following coordinate systems are defined: the camera coordinate system, the projector coordinate system, and the world coordinate system. The additional intrinsic parameters in the camera and projector models include the microscope lens focal lengths ($f_m^c$, $f_m^p$), the distance between the object plane and the front focal plane ($d$), and the optical tube lengths ($d_{tube}^c$, $d_{tube}^p$).

Figure 85: Proposed Analytical Calibration Parametric Model

With the microscope lens setup, where the measured part is relatively small compared to the measurement distance, and the object plane is assumed to be parallel with the image plane, we can make the following assumptions:

$$t_z = f_m + d \qquad (36)$$

$$M = \frac{\text{image plane to lens}}{\text{object plane to lens}} = \frac{d_{tube} + f_m}{f_m + d} \qquad (37)$$

The intrinsic parameter matrix of the components (camera, projector) consists of the coordinates of the principal point in the image reference plane, $u_0$ and $v_0$, the focal lengths along the u and v axes of the component image plane, $f_x$ and $f_y$, and the skewness of the two image axes, $\gamma$. The extrinsic parameters describe the orientation and translation between the component coordinate

system and the global coordinate system, where R represents the rotation matrix and T represents the translation matrix. The perspective projection for the components is based on the ideal pinhole model. The target points in the camera and projector image coordinates can be expressed as follows:

$$u = f\frac{X}{Z} \qquad (38)$$

$$v = f\frac{Y}{Z} \qquad (39)$$

With the rigid body transformation, the relationship between the world coordinates and the component coordinates can be expressed as:

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = R \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} + T \qquad (40)$$

The pinhole model is only an approximation of the real sensor projection. With a perfect lens system, light rays would pass from the world coordinates to the image sensor and form a sharp image on the plane of focus, and the simple perspective projection model presented above would hold true, assuming that the lens is free of optical distortion. In reality, however, imperfections in lens construction, and complex lens systems such as wide-angle lenses, zoom lenses, and microscope lenses, introduce lens distortions that deviate from the theoretical model. Therefore, in order to model the system with high accuracy, a more comprehensive non-linear model including lens distortions is used. The modified perspective projection is a basis that is extended with corrections for the systematically distorted image coordinates. There are many different kinds of distortion; in general, the lens distortion function is dominated by the radial components, and especially by the first term. The same conclusion has been drawn for microscope lenses [65], [56], [105], [106], and therefore radial distortion is modelled. Radial distortion occurs when flawed radial curvature of the lens elements causes the image on the sensor to be displaced radially either inward or outward from the principal point [59].
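To make the projection chain concrete, the following is a minimal sketch (not the thesis implementation) of the rigid transform and pinhole projection of equations (38)-(40), together with a first-order radial correction of the kind discussed above; all numeric parameter values are placeholders, not calibrated results.

```python
import numpy as np

def project_ideal(P_w, R, T, f):
    """Rigid transform (eq. 40) followed by ideal pinhole projection (eqs. 38-39)."""
    X, Y, Z = R @ np.asarray(P_w, dtype=float) + T
    return f * X / Z, f * Y / Z

def undistort_radial(u_d, v_d, k1):
    """First-order radial correction: undistorted = distorted * (1 + k1*r^2),
    with r measured from the principal point (here the origin of the lens frame)."""
    r2 = u_d**2 + v_d**2
    return u_d * (1 + k1 * r2), v_d * (1 + k1 * r2)

# Placeholder values for illustration only
R = np.eye(3)
T = np.array([0.0, 0.0, 10.0])   # component frame origin 10 mm along the optical axis
u, v = project_ideal([0.1, 0.2, 0.0], R, T, f=25.0)
print(u, v)
print(undistort_radial(0.5, -0.3, k1=1e-4))
```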

Figure 86: Radial distortion (none, barrel, pincushion) [59]

To correct the radial distortion, the Gaussian radial distortion model can be applied to describe the magnitude of the radial distortion. Radial distortion can be expressed in its Gaussian form as:

$$\delta_\rho = K_1 r^3 + K_2 r^5 + K_3 r^7 + \cdots \quad [59]$$

where $K_1$, $K_2$, $K_3$ are the coefficients of the radial distortion, and $r = \sqrt{u_d^2 + v_d^2}$ is the radial distance from the principal point $(u_0, v_0)$ of the image plane, with $(u_d, v_d)$ being the distorted image coordinates. For each image point, the radial distortion can then be approximated using the following expressions:

$$\delta_{\rho,u} = (u_d - u_0)(K_1 r^2 + K_2 r^4 + \cdots)$$
$$\delta_{\rho,v} = (v_d - v_0)(K_1 r^2 + K_2 r^4 + \cdots) \quad [59]$$

For microscope lenses, the most commonly used radial correction is the 2nd-order distortion correction, which is represented by the first radial distortion parameter ($k_1$). Hence, the 2nd-order radial distortion parameter, $k_1$, is modelled to relate the distorted $(u_d, v_d)$ and undistorted $(u, v)$ image points. The lens distortion model can be expressed as:

$$u = u_d(1 + k_1 r^2) \quad [59] \qquad (41)$$
$$v = v_d(1 + k_1 r^2) \quad [59] \qquad (42)$$

Therefore, transforming from the component's frame with distortion $(u_d, v_d)$ to the image frame, the pixel coordinates are:

$$u = \lambda \frac{u_d}{d_x} + u_0 \quad [59] \qquad (43)$$
$$v = \frac{v_d}{d_y} + v_0 \quad [59] \qquad (44)$$

The coordinate transformation from world coordinates to camera image plane coordinates can then be described by the perspective projection as follows:

$$(r_{11}^c f^c - r_{31}^c u^c)X_w + (r_{12}^c f^c - r_{32}^c u^c)Y_w + (r_{13}^c f^c - r_{33}^c u^c)Z_w = t_z u^c - t_x f^c$$
$$(r_{21}^c f^c - r_{31}^c v^c)X_w + (r_{22}^c f^c - r_{32}^c v^c)Y_w + (r_{23}^c f^c - r_{33}^c v^c)Z_w = t_z v^c - t_y f^c \qquad (45)$$

Similarly, the coordinate transformation from world coordinates to projector image plane coordinates can be described by the perspective projection as follows:

$$(r_{11}^p f^p - r_{31}^p u^p)X_w + (r_{12}^p f^p - r_{32}^p u^p)Y_w + (r_{13}^p f^p - r_{33}^p u^p)Z_w = t_z u^p - t_x f^p$$
$$(r_{21}^p f^p - r_{31}^p v^p)X_w + (r_{22}^p f^p - r_{32}^p v^p)Y_w + (r_{23}^p f^p - r_{33}^p v^p)Z_w = t_z v^p - t_y f^p \qquad (46)$$

Incorporating the microscope parametric model into the perspective projection for the components' coordinate transformation, the equations for the camera and projector become:

$$(r_{11}^c (f_m^c + d_{tube}^c) - r_{31}^c u^c)X_w + (r_{12}^c (f_m^c + d_{tube}^c) - r_{32}^c u^c)Y_w + (r_{13}^c (f_m^c + d_{tube}^c) - r_{33}^c u^c)Z_w = (f_m^c + d)u^c - t_x f_m^c$$
$$(r_{21}^c (f_m^c + d_{tube}^c) - r_{31}^c v^c)X_w + (r_{22}^c (f_m^c + d_{tube}^c) - r_{32}^c v^c)Y_w + (r_{23}^c (f_m^c + d_{tube}^c) - r_{33}^c v^c)Z_w = (f_m^c + d)v^c - t_y f_m^c \qquad (47)$$

$$(r_{11}^p (f_m^p + d_{tube}^p) - r_{31}^p u^p)X_w + (r_{12}^p (f_m^p + d_{tube}^p) - r_{32}^p u^p)Y_w + (r_{13}^p (f_m^p + d_{tube}^p) - r_{33}^p u^p)Z_w = (f_m^p + d)u^p - t_x f_m^p$$
$$(r_{21}^p (f_m^p + d_{tube}^p) - r_{31}^p v^p)X_w + (r_{22}^p (f_m^p + d_{tube}^p) - r_{32}^p v^p)Y_w + (r_{23}^p (f_m^p + d_{tube}^p) - r_{33}^p v^p)Z_w = (f_m^p + d)v^p - t_y f_m^p \qquad (48)$$

The world coordinates of each measuring point $P_w(X_w, Y_w, Z_w)$ are determined by combining equation (47) and equation (48) and solving through the pseudo-inverse of the resulting matrix. In the actual solving process, at least 14 equations need to be obtained to solve for the intrinsic and extrinsic parameters, and hence at least 14 reference points in the 3D world coordinates are required. Once the

intrinsic and extrinsic parameters are known, only three equations are needed to solve for the unknown coordinates $(X_w, Y_w, Z_w)$ [67]. Therefore, in the digital fringe projection method, only one projector image coordinate $(u_d^p)$ needs to be combined with the camera image coordinate equations to calculate the world coordinates $(X_w, Y_w, Z_w)$. The parameters in (45) and (46) involve the system parameters (intrinsic and extrinsic) of the camera and projector obtained from a two-step component calibration method following Tsai's model. Therefore, a calibration process that obtains the parameters of both components with high accuracy is very important.

Proposed Analytical Calibration Technique

Analytical calibration requires modelling each hardware component (camera and projector) and performing a calibration technique for each hardware component to obtain in-focus 2D images of the 3D world coordinates in order to solve for the parameters (intrinsic and extrinsic) in the proposed models. In this section, a novel calibration technique to obtain focused 2D images of the 3D world coordinates for the micro-domain, considering the narrow DOF characteristic of the microscope lens, is presented. In the following sub-sections, the following steps of the calibration technique are proposed: a) selecting a suitable calibration reference object, b) calibrating the camera considering the narrow DOF characteristic of the microscope lens, c) calibrating the projector considering the narrow DOF characteristic of the microscope lens, and d) optimizing the obtained intrinsic and extrinsic parameters through an optimization technique.

Calibration reference object selection

From the literature, to precisely calibrate a microscope setup, the analytical calibration method can be used to compute accurate parameters (intrinsic and extrinsic) based on the reference point distances [107], [52]. To establish the relationship between the component image coordinate system and the calibration sample coordinate system for the analytical calibration, a calibration reference object with reference features is required. Hence, in order to perform accurate analytical camera-microscope calibration, a highly accurate calibration reference object is needed. Several approaches have been proposed for analytical calibration in the micro-domain using highly accurate reference-feature calibration objects

manufactured using a lithography process [64], a micro-fabrication process [65], and a gas-assisted focused ion beam (FIB) deposition process [80]. Furthermore, a flexible calibration method using a precise micromanipulator as the reference object was proposed for micro-domain analytical calibration [66]. The selected microscope lens setup for the system produces a 20.5X magnification with a smallest resolvable feature size of 0.536 µm x 0.536 µm. With high magnification, the measurement result is dependent on the smallest resolvable feature of the camera-microscope setup. In order to obtain accurate parameters for the calibration model of the camera-microscope setup, the reference 3D coordinates on the calibration reference object have to be manufactured with an accuracy comparable to the resolvable feature in order to produce the desired measurement result and satisfy the design objective [54]. A calibration reference object, shown in Figure 87, with a feature accuracy of ±1 µm, was selected for the developed 3D SL sensory system to achieve a measurement accuracy of micrometers for our camera-microscope setup. The specification of the selected calibration reference object is shown in Table 18.

Figure 87: Fixed frequency grid distortion target from Edmund Inc.

Table 18: Specification of the fixed frequency grid distortion target
Stock Number: #
Type: Chrome on Opal
Dimensions (mm): 50.8 x 50.8
Pattern Size (mm): 25 x 25
Thickness (mm): 1.5
Dot Diameter: 62.5 µm
Dot Diameter Tolerance: ±2 µm
Dot Spacing: 125 µm
Dot Spacing Tolerance: ±1 µm center to center, ± grid corner to corner
Overall Accuracy: ±1 µm
Surface Accuracy (λ): 4-6 per 25.4 mm area
Surface Quality:
Coating: Reflective first surface chromium, Rabs = 50% at 550 nm
Flatness: 25.4 µm

According to the specification in Table 18, the selected calibration reference object has an array of 62.5 µm diameter dots with an even spacing of 125 µm and an accuracy of ±1 µm. The dots are etched onto a flat glass substrate through lithography to provide the dot spacing accuracy of ±1 µm. Since the accuracy of the calibration reference object is comparable to the smallest resolvable feature size of the camera-microscope setup (0.536 µm), micrometer measurement accuracy can be achieved using the selected calibration reference object to calibrate the structured light system prototype.

Camera calibration

In this sub-section, a novel calibration technique for the camera system to obtain focused 2D images of the 3D world coordinates for the micro-domain, considering the narrow DOF characteristic of the microscope lens, is proposed. In the proposed calibration technique, the selected calibration reference object is placed on a high-precision stage and moved along the z-axis of the world coordinate system within the measurement volume to provide reference 3D points for the calibration. A schematic of the proposed calibration technique is shown in Figure 88. By moving the precise linear stage to different known distances, multiple planar calibration target points are produced for the camera calibration. The X, Y distances of the target points in 3D world coordinates are provided by the calibration target specification in Table 18.
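As an illustration of how the 3D reference points can be assembled from the dot grid and the stage positions, the following is a minimal sketch (not the thesis code); the dot spacing and stage step follow the values quoted above, while the grid dimensions are placeholders.

```python
import numpy as np

def build_reference_points(n_cols, n_rows, dot_spacing_mm, z_positions_mm):
    """World coordinates (X, Y, Z) of the dot centers for every stage position.

    X and Y come from the known dot spacing of the distortion target;
    Z comes from the known displacement of the high-precision linear stage.
    """
    points = []
    for z in z_positions_mm:
        for r in range(n_rows):
            for c in range(n_cols):
                points.append((c * dot_spacing_mm, r * dot_spacing_mm, z))
    return np.array(points)

# 6 x 5 dots per plane (placeholder grid size), 0.125 mm spacing,
# 20 stage positions in 0.01 mm steps as in the calibration described later
ref = build_reference_points(6, 5, 0.125, np.arange(20) * 0.01)
print(ref.shape)   # (600, 3) reference points
```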

Figure 88: Camera calibration

In microscope systems, large numerical apertures (NA) are desirable to resolve small features [38]; however, the optical behaviour of a high-NA lens leads to a narrow depth of field (DOF) with a very shallow focus range [38]. Any slight angle between the camera image plane and the reference plane will produce an out-of-focus image on the camera sensor. Hence, it is impossible to obtain fully focused 2D images with the configuration of the SL sensory system. The narrow DOF characteristic of the microscope lens limits the ability of the calibration technique to accurately obtain the intrinsic and extrinsic parameters by relating the 2D images to the 3D world coordinates. In order to calibrate the SL sensory system and overcome the narrow DOF characteristic of the microscope lens, a digital image processing technique known as image focus fusion, in which multiple images taken at different focus distances are fused to generate a single all-in-focus image [108], is implemented. The image focus fusion technique, shown in Figure 89, includes the following steps: focus motor control, focus measure, selective measure, image alignment, noise removal optimization, and focus fusion; the Appendix details these steps.

Figure 89: Digital image processing technique
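As a rough illustration of the focus fusion idea (a simplified stand-in for the pipeline above, assuming pre-aligned grayscale images), each pixel of the fused image can be taken from the frame with the highest local focus measure, here a Laplacian-based sharpness score:

```python
import numpy as np
import cv2

def fuse_focus_stack(images):
    """Fuse a stack of pre-aligned grayscale images into one all-in-focus image.

    Focus measure: absolute Laplacian response, smoothed locally; each output
    pixel is copied from the frame where that measure is largest.
    """
    stack = np.stack([img.astype(np.float32) for img in images])
    sharpness = np.stack([
        cv2.GaussianBlur(np.abs(cv2.Laplacian(img, cv2.CV_32F, ksize=3)), (9, 9), 0)
        for img in stack
    ])
    best = np.argmax(sharpness, axis=0)                       # sharpest frame per pixel
    fused = np.take_along_axis(stack, best[None, ...], axis=0)[0]
    return fused.astype(np.uint8)

# Usage: images captured at sequential focus levels of the microscope lens
# focused = fuse_focus_stack([cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in paths])
```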

During the calibration, at each planar location along the z-axis, a series of images of the calibration reference object is captured at sequential focus levels of the camera, with the camera focusing from the left of the image to the right, and the image focus fusion is performed to generate focused 2D images. For each focused image, a direct least-squares fitting of ellipses algorithm is applied to detect the calibration target points and determine their geometric centers on the image plane, yielding the 2D coordinates $(u_i, v_i)$ of the target points. Once the 3D coordinates of the calibration feature points in world coordinates and their 2D pixel correspondences on the camera image plane are obtained, this information is used in the aforementioned camera parametric microscope model to obtain the intrinsic and extrinsic parameters of the camera.

Projector calibration

In the second step of the calibration technique, focused 2D images of the 3D world coordinates for the projector's view are obtained considering the narrow DOF characteristic of the microscope lens. The reverse camera method for calibrating the projector is implemented for the micro-scale projector calibration. The pixel-to-pixel correspondence between the projector DMD and the camera sensor is established by capturing the projected patterns. The projector is then treated as a reverse camera, making the projector calibration essentially the same as that of a camera, and hence the aforementioned digital image processing technique is utilized to obtain focused 2D images of the 3D world coordinates for the micro-domain. To perform the calibration, the selected calibration reference object is placed on the high-precision stage and moved along the z-axis of the world coordinate system within the measurement volume to provide reference 3D points for the calibration. A schematic of the calibration setup is shown in Figure 90.

Figure 90: Projector calibration

The correspondences of the reference points on the projector image plane are obtained by encoding the projected light with the coded patterns. Patterns are generated and projected onto the calibration object placed at different z-axis locations within the measurement volume. For the pattern design used to obtain the projector-to-camera correspondences for projector calibration, the aforementioned sinusoidal phase-shifting pattern described in Chapter 3 was implemented using two sets of three continuous phase-shifted patterns with sinusoidal intensity profiles in order to determine the correspondence between the camera and projector [109]. Equations (1), (2), and (3) were used to design the patterns. The first set consists of three patterns with five vertical fringes; the second set consists of three patterns with one vertical fringe. For both sets, each pattern is shifted by 2π/3 with respect to the other patterns. The multiple-wavelength phase shifting provides an absolute phase value for each pixel position $P^P(i,j)$ in the projector.
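A minimal sketch of this kind of three-step phase shifting (assuming a generic sinusoidal fringe model rather than the exact form of equations (1)-(3), which are defined in Chapter 3): the first function generates three fringe images shifted by 2π/3, and the second recovers the wrapped phase from three captured intensities.

```python
import numpy as np

def make_fringe_patterns(width, height, n_fringes):
    """Three sinusoidal fringe patterns, each shifted by 2*pi/3."""
    x = np.arange(width)
    phase = 2 * np.pi * n_fringes * x / width
    shifts = [-2 * np.pi / 3, 0.0, 2 * np.pi / 3]
    return [
        np.tile(127.5 + 127.5 * np.cos(phase + s), (height, 1)).astype(np.uint8)
        for s in shifts
    ]

def wrapped_phase(i1, i2, i3):
    """Wrapped phase from three images shifted by -2*pi/3, 0, +2*pi/3."""
    i1, i2, i3 = (np.asarray(i, dtype=np.float64) for i in (i1, i2, i3))
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# e.g. one set with five vertical fringes and one set with a single fringe
set_five = make_fringe_patterns(1024, 768, 5)
set_one = make_fringe_patterns(1024, 768, 1)
```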

When processing the captured images of the phase-shifted patterns, the absolute phase value for each pixel position $P^C(i,j)$ in the camera is determined, so the correspondence between the pixel locations on the projector and on the camera is uniquely established through the projected patterns; therefore, the projector's view is mapped from the camera's view, as shown in Figure 91.

Figure 91: Mapping from camera to projector

During the calibration, at each planar location along the z-axis, the projector is adjusted to focus on the calibration object, and the designed patterns are projected onto the calibration reference object. Images of the calibration reference object are captured by the camera at sequential focus levels, and image focus fusion is performed to generate focused 2D images. The focused 2D images are then mapped to the projector's view through the pixel-to-pixel correspondence between the projector sensor and the camera sensor. For each 2D image in the projector's view, the centers of the calibration feature points $(u_i, v_i)$ are obtained. Once the 3D coordinates of the reference points in world coordinates and their 2D pixel correspondences on the projector image plane are obtained, this information is used in the aforementioned projector parametric microscope model to obtain the intrinsic and extrinsic parameters of the projector.

Parameter optimization

An optimization technique is implemented to improve the accuracy of the computed intrinsic and extrinsic parameters for both the camera and the projector. The iterative Levenberg-Marquardt algorithm (LMA) is used to optimize the intrinsic and extrinsic parameters of the camera, $\theta_c = f(f_m^c, d_{tube}^c, u_0^c, v_0^c, K^c, R^c, T^c)$, and of the projector, $\theta_p = f(f_m^p, d_{tube}^p, u_0^p, v_0^p, K^p, R^p, T^p)$, with the objective of minimizing the re-projection

error. The re-projection error is computed from the difference between the actual reference points, $P_w$, and the modelled reference points, $P_c$ and $P_p$, computed from the initial intrinsic and extrinsic parameters:

$$\theta_c, \theta_p = \arg\min \sum_k \left( \left\| g(P_w(k), \theta_c) - P_c(k) \right\|^2 + \left\| g(P_w(k), \theta_p) - P_p(k) \right\|^2 \right) \qquad (49)$$

The initial set of computed intrinsic parameters $(f_m^c, d_{tube}^c, u_0^c, v_0^c, K^c, f_m^p, d_{tube}^p, u_0^p, v_0^p, K^p)$ and extrinsic parameters $(R^c, T^c, R^p, T^p)$ is used as the input to the iterative optimization. The intrinsic and extrinsic parameters of the camera and projector are iteratively adjusted to obtain the minimum re-projection error. A small illustrative sketch of this refinement step is given after the following paragraph.

5.5. Proposed Measurement Technique

In the designed SL sensory system, high numerical apertures (NA) are desirable for the microscope lenses in order to capture as much light as possible and achieve the smallest resolvable feature. A high numerical aperture (equivalent to a low f-number) gives a very shallow DOF [110], and hence the entire part specified by the requirement (0.5 mm x 0.5 mm) cannot be in focus at once. In order to overcome the small focus range caused by the limited depth of field of the microscope lens system, shown in Figure 92 (b), a novel measurement technique incorporating the aforementioned image focus fusion technique into the SL sensory system measurement is proposed. Focus fusion is a digital image processing technique in which multiple images taken at different focus distances are fused to generate a single resulting image with a greater DOF than any of the individual source images [108]. The image focus fusion technique takes into account the focus measure of the source images, selects the in-focus zone of each image, and aligns the images to perform the fusion based on the focused areas. The focus fusion technique is generally applied to macro-photography and microscopy, where obtaining sufficient DOF is particularly challenging.
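Returning briefly to the parameter refinement of equation (49): the residual can be minimized with a Levenberg-Marquardt style solver such as scipy's least_squares. This is a sketch only, not the thesis implementation; project() stands in for the forward model g, and the parameter packing is a simplified placeholder.

```python
import numpy as np
from scipy.optimize import least_squares

def project(points_3d, params):
    """Placeholder forward model g(P_w, theta): simple pinhole projection.

    params = [f, u0, v0, tx, ty, tz]; rotation is omitted for brevity.
    """
    f, u0, v0, tx, ty, tz = params
    X = points_3d[:, 0] + tx
    Y = points_3d[:, 1] + ty
    Z = points_3d[:, 2] + tz
    return np.column_stack((f * X / Z + u0, f * Y / Z + v0))

def residuals(params, points_3d, observed_2d):
    """Re-projection residuals between modelled and observed image points."""
    return (project(points_3d, params) - observed_2d).ravel()

# points_3d: (N, 3) reference points; observed_2d: (N, 2) detected dot centers
# theta0: initial parameter estimate from the closed-form calibration step
# result = least_squares(residuals, theta0, args=(points_3d, observed_2d), method="lm")
# refined_params = result.x
```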

For the system measurement, the part (0.5 mm x 0.5 mm) is placed at the reference position. The camera and projector are then scanned through the part in sequential focus steps along their optical axes, Figure 92 (a). In each step, the projector projects the designed patterns in focus and the camera captures the deformed patterns in focus, as shown in Figure 92 (b). Once all images are obtained, the images with the same projected pattern are fused to obtain one all-in-focus 2D image, which is used to perform the coordinate transformation with the known component parameters (intrinsic and extrinsic). In order to minimize the noise in the fused image arising from camera noise and system vibration, an image alignment algorithm and a known noise-robust selective image fusion algorithm [77] have been implemented to generate noise-robust all-in-focus 2D images of the deformed patterns on the part for the system measurement.

Figure 92: a) measurement volume without focus fusion, b) measurement volume with focus fusion

5.6. Chapter Summary

In this chapter, a novel analytical calibration method for SL systems using microscope lenses to accurately measure the 3D surface profiles of complex micro-scale parts was proposed. The method includes a novel calibration model which explicitly considers the microscope lens parameters of the hardware components (camera and projector), and addresses the narrow DOF limitation of these lenses. The latter is achieved by incorporating an image focus fusion technique. The proposed analytical calibration method allows accurate 3D world coordinates of a complex 3D part to be obtained. This calibration method was implemented and tested on the developed SL sensory system, and the results of the 3D measurement of complex micro-scale parts are shown in the next chapter.

Chapter 6 Experiments for Micro-scale application

In this chapter, the experiments to verify the use of the designed 3D SL sensory system for measuring micro-scale 3D parts are presented. The experiments are used to: (i) set up the system, (ii) calibrate the system, (iii) determine the sensor's measurement error in the z-, x-, and y-axes, (iv) demonstrate the sensor's ability to obtain 3D micro-scale non-uniform features, and (v) identify sources of errors in the 3D SL sensory system.

6.1. System Setup

The DLP projector (Texas Instrument Inc. DLP Light Commander Projector) and CMOS camera (Adimec Quartz Q-4A180 camera) with their microscope lenses are placed with a fixed relative pose with respect to each other, as shown in Figure 93. The synchronization is done through the selected microcontroller (Digilent ChipKit Max 32 controller board), which triggers the camera and the projector simultaneously by sending an impulse function. The projector is configured to illuminate for 1400 μs when triggered, and the camera is configured to have an exposure time of 6250 μs when triggered, in order to obtain the desired projected image with the least amount of specular effect from reflection. An impulse function with a pulse width of 10 μs and a period of 6250 μs is sent to both the camera and the projector. The camera and projector synchronization setup has a system frame rate of 160 Hz. The parameters are shown in the following table.

Table 19: Projector and camera parameters
DLP projector: illumination time of 1400 µs; 160 Hz; vertical flip; projection power of 25% (R, G, B)
Adimec camera: exposure time of 6250 µs; 160 fps

Figure 93: 3D Sensory System Hardware

6.2. Calibration Result

The calibration setup is shown in Figure 94. The selected calibration object described in Chapter 5 was mounted onto the motorized stage using an aluminum bracket. Each calibration point on the selected calibration object has a 0.0625 mm diameter and is 0.125 mm apart from its neighbours in the x- and y-directions. The calibration object was moved along the z-direction in 0.01 mm steps using the motorized linear stage. For each z-position, 28 images with different focus levels were taken and fused to produce a single focused image using the focus fusion technique described in Chapter 5. The images at different focus levels were obtained by adjusting the focus level of the microscope lenses of the camera and projector. For each focused image, a direct least-squares fitting of ellipses algorithm was applied to detect the calibration reference points and determine the geometric centers of the points on the image plane. 30 points per plane and 20 z-positions with 0.01 mm increments were utilized to build the 600-reference-point LUT to calibrate the measurement volume of the system. The LUT consists of the 2D pixel coordinates of the reference points on the camera image plane and the corresponding 3D world coordinates. Once the reference LUT was built, it was used to compute the camera and projector parameters.
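A minimal sketch of the dot-center detection step (assuming OpenCV's least-squares ellipse fit as a stand-in for the direct ellipse fitting algorithm used here; the threshold and size limits are placeholders):

```python
import cv2
import numpy as np

def detect_dot_centers(gray, min_area=50, max_area=5000):
    """Detect the calibration dots in a focused image and return their centers.

    Each dot contour is fitted with an ellipse (least-squares fit); the ellipse
    center is taken as the 2D image coordinate (u_i, v_i) of the reference point.
    """
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    centers = []
    for c in contours:
        if len(c) >= 5 and min_area < cv2.contourArea(c) < max_area:
            (u, v), _, _ = cv2.fitEllipse(c)
            centers.append((u, v))
    return np.array(centers)

# Usage on a fused, all-in-focus image of the distortion target:
# centers = detect_dot_centers(cv2.imread("fused_plane.png", cv2.IMREAD_GRAYSCALE))
```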

Figure 94: Calibration Setup

Camera Parameters

The camera's intrinsic and extrinsic parameters were obtained using the proposed analytical calibration parametric model and the novel calibration technique presented in Chapter 5. Figure 95 shows the camera's view of the reference points at the z = 0 position. Using the LUT, the intrinsic and extrinsic parameters of the camera were obtained, as shown in Table 20.

Figure 95: Camera's view of reference points at z = 0

Table 20: Camera intrinsic and extrinsic parameters
Intrinsic parameters:
  f_x = (mm); f_y = (mm)
  d_tube = (mm)
  u_0 = pixels
  v_0 = pixels
  k1 =
  sx =
  M =
Extrinsic parameters:
  T = [Tx = (mm), Ty = (mm), Tz = (mm)]
  R =

The intrinsic parameters include the focal lengths of the microscope lens system of the camera, f_x and f_y, the tube length, d_tube, the principal point, u_0 and v_0, the distortion factor, k1, and the scale factor, sx. The extrinsic parameters include the translation matrix of the camera to the world coordinate system, T, and the orientation of the camera with respect to the world coordinate system, R.

Projector Parameters

Figure 96: Projected patterns

Figure 97: Projector's view of the reference points

The projector's intrinsic and extrinsic parameters were obtained using the proposed analytical calibration parametric model, the reverse camera method, and the novel calibration technique presented in Chapter 5. The correspondences of the reference points in the camera's view to the projector image plane are obtained by encoding the projected light with the coded patterns, shown in Figure 96. Figure 97 shows the projector's view of the reference points at the z = 0 position. Using the LUT, the intrinsic and extrinsic parameters of the projector were obtained, as shown in Table 21.

Table 21: Projector intrinsic and extrinsic parameters
Intrinsic parameters:
  f_x = (mm); f_y = (mm)
  d_tube = (mm)
  u_0 = pixels
  v_0 = pixels
  k1 =
  sx = 1
  M = 6.2
Extrinsic parameters:
  T = [Tx = (mm), Ty = (mm), Tz = (mm)]
  R =

6.3. Measurement

Experiments were conducted to verify the performance of the designed 3D SL sensory system for measuring micro-scale parts. A high-precision linear stage, Aerotech Model ATS212, with a repeatability of 1 μm, was utilized for the measurements of the sensory system. The following subsections discuss the measurement procedure for obtaining the measurement error of the 3D SL sensory system in the z-axis, as well as the x- and y-axes, using a flat calibration object. Furthermore, the small features on a Canadian dime were measured to demonstrate the 3D SL sensory system's ability to measure micro-scale parts.

Flat Plane Experiment

The performance of the 3D sensory system for micro-scale application was evaluated by measuring the selected calibration object placed within the measurement volume. The 3D measurement result of the calibration object is shown in Figure 98. The measurement area is 0.6 mm x 0.5 mm.

Figure 98: 3D point cloud of the flat plane

Measurement error in z-axis

The calibration object was placed on the high-precision stage and moved along the z-axis of the world coordinate system in 0.01 mm increments, covering a depth of 0.2 mm at the center of the measurement volume. At each location of the plane, 1,042,766 measurement points were compared with the actual travel distance of the object to determine the root mean square (RMS) error and standard deviation of the measurement. With the developed SL sensory system for micro-scale application, the measurement error in the z-direction of the world coordinate system within the working range of 0 to 0.2 mm was determined to be: minimum RMS error 8.54 µm, maximum RMS error µm, mean RMS error 9.52 µm, as shown in Figure 99.

Figure 99: Measurement error in z-axis within 0.2 mm range

Measurement error in x-axis and y-axis

The calibration reference object, with an X and Y point-to-point distance of 0.125 mm and a distance accuracy of ±1 µm, was mounted onto a holder, placed on the high-precision linear stage, and moved along the z-axis of the world coordinate system in 0.01 mm increments, covering a depth of 0.2 mm at the center of the measurement volume. The calibration reference object's vertical axis is aligned with the x-axis of the sensor coordinate system, and its horizontal axis is aligned with the y-axis of the sensor coordinate system. At each location along the z-axis, the circular reference points were obtained in sensor coordinates and compared with the world coordinates to determine the x- and y-axis root mean square (RMS) errors and standard deviations of the measurements. The RMS errors of the x- and y-axes with respect to the object positions along the z-axis are shown in Figure 100 and Figure 101. The RMS error for the x-axis is µm, with a standard deviation of 1.9 µm. The RMS error for the y-axis is 8.65 µm, with a standard deviation of 0.70 µm. The higher RMS error in the x-axis is a result of the system movement caused by the vertical vibration of the system hardware discussed in Chapter 5.
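As a small illustration of how these per-axis errors can be computed from the detected reference points (a sketch only; the array shapes and variable names are assumptions, not the thesis code):

```python
import numpy as np

def axis_rms_errors(measured_xy, nominal_xy):
    """RMS error and standard deviation per axis between measured and nominal points.

    measured_xy, nominal_xy: (N, 2) arrays of x, y coordinates in mm.
    Returns the RMS and standard deviation of the error for each axis, in µm.
    """
    err_um = (np.asarray(measured_xy) - np.asarray(nominal_xy)) * 1000.0
    rms = np.sqrt(np.mean(err_um ** 2, axis=0))
    std = np.std(err_um, axis=0)
    return rms, std

# Usage with the dot centers measured at one z-position and their nominal grid positions:
# rms_xy, std_xy = axis_rms_errors(measured_centers_mm, nominal_grid_mm)
```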

Figure 100: Measurement error in x-axis within 0.2 mm range

Figure 101: Measurement error in y-axis within 0.2 mm range

Non-Uniform Object Surface Measurement

As a further evaluation of the 3D SL sensory system, a Canadian dime with non-uniform micro-scale features was measured. The micro-scale patterns were projected onto the characters 2 and 0 of the dime, shown in

Figure 102. The micro-scale features 2 and 0 exceeded the size of the measurement area (0.6 mm x 0.5 mm); hence, the micro-scale features were scanned from top to bottom in three segments, shown in Figure 103 and Figure 105. The measured point clouds of the micro-scale features were then stitched together using the commercially available Geomagic Studio 2012 software to produce the final 3D measurement results of the features. The 3D measurement results of the features are presented in Figure 104 and Figure 106.

Figure 102: Canadian dime with micro-scale features

Figure 103: a) Top, b) middle, and c) bottom measurement segments of the micro-scale feature 2

Figure 104: a) Point cloud of the micro-scale feature 2, b) Surface of the micro-scale feature 2

Figure 105: a) Top, b) middle, and c) bottom measurement segments of the micro-scale feature 0

Figure 106: a) Point cloud of the micro-scale feature 0, b) Surface of the micro-scale feature 0


More information

Advanced Stamping Manufacturing Engineering, Auburn Hills, MI

Advanced Stamping Manufacturing Engineering, Auburn Hills, MI RECENT DEVELOPMENT FOR SURFACE DISTORTION MEASUREMENT L.X. Yang 1, C.Q. Du 2 and F. L. Cheng 2 1 Dep. of Mechanical Engineering, Oakland University, Rochester, MI 2 DaimlerChrysler Corporation, Advanced

More information

SOLAR CELL SURFACE INSPECTION USING 3D PROFILOMETRY

SOLAR CELL SURFACE INSPECTION USING 3D PROFILOMETRY SOLAR CELL SURFACE INSPECTION USING 3D PROFILOMETRY Prepared by Benjamin Mell 6 Morgan, Ste16, Irvine CA 92618 P: 949.461.9292 F: 949.461.9232 nanovea.com Today's standard for tomorrow's materials. 21

More information

A Literature Review on Low Cost 3D Scanning Using Structure Light and Laser Light Scanning Technology

A Literature Review on Low Cost 3D Scanning Using Structure Light and Laser Light Scanning Technology A Literature Review on Low Cost 3D Scanning Using Structure Light and Laser Light Scanning Technology Rohit Gupta 1, Himanshu Chaudhary 2 Master Student, Department of Mechanical Engineering, Sri Sai College

More information

STEP HEIGHT MEASUREMENT OF PRINTED ELECTRODES USING 3D PROFILOMETRY

STEP HEIGHT MEASUREMENT OF PRINTED ELECTRODES USING 3D PROFILOMETRY STEP HEIGHT MEASUREMENT OF PRINTED ELECTRODES USING D PROFILOMETRY Prepared by Andrea Herrmann Morgan, Ste, Irvine CA 98 P: 99..99 F: 99..9 nanovea.com Today's standard for tomorrow's materials. NANOVEA

More information

Flexible Calibration of a Portable Structured Light System through Surface Plane

Flexible Calibration of a Portable Structured Light System through Surface Plane Vol. 34, No. 11 ACTA AUTOMATICA SINICA November, 2008 Flexible Calibration of a Portable Structured Light System through Surface Plane GAO Wei 1 WANG Liang 1 HU Zhan-Yi 1 Abstract For a portable structured

More information

CS4758: Rovio Augmented Vision Mapping Project

CS4758: Rovio Augmented Vision Mapping Project CS4758: Rovio Augmented Vision Mapping Project Sam Fladung, James Mwaura Abstract The goal of this project is to use the Rovio to create a 2D map of its environment using a camera and a fixed laser pointer

More information

Registration of Moving Surfaces by Means of One-Shot Laser Projection

Registration of Moving Surfaces by Means of One-Shot Laser Projection Registration of Moving Surfaces by Means of One-Shot Laser Projection Carles Matabosch 1,DavidFofi 2, Joaquim Salvi 1, and Josep Forest 1 1 University of Girona, Institut d Informatica i Aplicacions, Girona,

More information

Step Height Comparison by Non Contact Optical Profiler, AFM and Stylus Methods

Step Height Comparison by Non Contact Optical Profiler, AFM and Stylus Methods AdMet 2012 Paper No. NM 002 Step Height Comparison by Non Contact Optical Profiler, AFM and Stylus Methods Shweta Dua, Rina Sharma, Deepak Sharma and VN Ojha National Physical Laboratory Council of Scientifi

More information

Transparent Object Shape Measurement Based on Deflectometry

Transparent Object Shape Measurement Based on Deflectometry Proceedings Transparent Object Shape Measurement Based on Deflectometry Zhichao Hao and Yuankun Liu * Opto-Electronics Department, Sichuan University, Chengdu 610065, China; 2016222055148@stu.scu.edu.cn

More information

CARBON FIBER SURFACE MEASUREMENT USING 3D PROFILOMETRY

CARBON FIBER SURFACE MEASUREMENT USING 3D PROFILOMETRY CARBON FIBER SURFACE MEASUREMENT USING 3D PROFILOMETRY Prepared by Craig Leising 6 Morgan, Ste156, Irvine CA 92618 P: 949.461.9292 F: 949.461.9232 nanovea.com Today's standard for tomorrow's materials.

More information

Linescan System Design for Robust Web Inspection

Linescan System Design for Robust Web Inspection Linescan System Design for Robust Web Inspection Vision Systems Design Webinar, December 2011 Engineered Excellence 1 Introduction to PVI Systems Automated Test & Measurement Equipment PC and Real-Time

More information

Peak Detector. Minimum Detectable Z Step. Dr. Josep Forest Technical Director. Copyright AQSENSE, S.L.

Peak Detector. Minimum Detectable Z Step. Dr. Josep Forest Technical Director. Copyright AQSENSE, S.L. Peak Detector Minimum Detectable Z Step Dr. Josep Forest Technical Director Peak Detector Minimum Detectable Defect Table of Contents 1.Introduction...4 2.Layout...4 3.Results...8 4.Conclusions...9 Copyright

More information

Calibration of a portable interferometer for fiber optic connector endface measurements

Calibration of a portable interferometer for fiber optic connector endface measurements Calibration of a portable interferometer for fiber optic connector endface measurements E. Lindmark Ph.D Light Source Reference Mirror Beamsplitter Camera Calibrated parameters Interferometer Interferometer

More information

Time-of-flight basics

Time-of-flight basics Contents 1. Introduction... 2 2. Glossary of Terms... 3 3. Recovering phase from cross-correlation... 4 4. Time-of-flight operating principle: the lock-in amplifier... 6 5. The time-of-flight sensor pixel...

More information

Chapter 36. Image Formation

Chapter 36. Image Formation Chapter 36 Image Formation Apr 22, 2012 Light from distant things We learn about a distant thing from the light it generates or redirects. The lenses in our eyes create images of objects our brains can

More information

Sensor based adaptive laser micromachining using ultrashort pulse lasers for zero-failure manufacturing

Sensor based adaptive laser micromachining using ultrashort pulse lasers for zero-failure manufacturing Sensor based adaptive laser micromachining using ultrashort pulse lasers for zero-failure manufacturing Fraunhofer Institute for Production Technology, Aachen M. Sc. Guilherme Mallmann Prof. Dr.-Ing. Robert

More information

Digital Image Processing COSC 6380/4393

Digital Image Processing COSC 6380/4393 Digital Image Processing COSC 6380/4393 Lecture 4 Jan. 24 th, 2019 Slides from Dr. Shishir K Shah and Frank (Qingzhong) Liu Digital Image Processing COSC 6380/4393 TA - Office: PGH 231 (Update) Shikha

More information

HANDBOOK OF THE MOIRE FRINGE TECHNIQUE

HANDBOOK OF THE MOIRE FRINGE TECHNIQUE k HANDBOOK OF THE MOIRE FRINGE TECHNIQUE K. PATORSKI Institute for Design of Precise and Optical Instruments Warsaw University of Technology Warsaw, Poland with a contribution by M. KUJAWINSKA Institute

More information

NEW MONITORING TECHNIQUES ON THE DETERMINATION OF STRUCTURE DEFORMATIONS

NEW MONITORING TECHNIQUES ON THE DETERMINATION OF STRUCTURE DEFORMATIONS Proceedings, 11 th FIG Symposium on Deformation Measurements, Santorini, Greece, 003. NEW MONITORING TECHNIQUES ON THE DETERMINATION OF STRUCTURE DEFORMATIONS D.Stathas, O.Arabatzi, S.Dogouris, G.Piniotis,

More information

Available online at ScienceDirect. Energy Procedia 69 (2015 )

Available online at   ScienceDirect. Energy Procedia 69 (2015 ) Available online at www.sciencedirect.com ScienceDirect Energy Procedia 69 (2015 ) 1885 1894 International Conference on Concentrating Solar Power and Chemical Energy Systems, SolarPACES 2014 Heliostat

More information

Multi-sensor measuring technology. O-INSPECT The best of optical and contact measuring technology for true 3D measurements.

Multi-sensor measuring technology. O-INSPECT The best of optical and contact measuring technology for true 3D measurements. Multi-sensor measuring technology O-INSPECT The best of optical and contact measuring technology for true 3D measurements. 2 // multifunctionality made BY CarL Zeiss The moment you realize that new requirements

More information

VISION MEASURING SYSTEMS

VISION MEASURING SYSTEMS VISION MEASURING SYSTEMS Introducing Mitutoyo s full line of Vision Measuring Equipment. VISION MEASURING SYSTEMS Quick Scope Manual Vision Measuring System Manual XYZ measurement. 0.1 µm resolution glass

More information

Product information. Hi-Tech Electronics Pte Ltd

Product information. Hi-Tech Electronics Pte Ltd Product information Introduction TEMA Motion is the world leading software for advanced motion analysis. Starting with digital image sequences the operator uses TEMA Motion to track objects in images,

More information