Solid Freeform Fabrication 2018: Proceedings of the 29th Annual International Solid Freeform Fabrication Symposium, An Additive Manufacturing Conference

ARTIFICIAL INTELLIGENCE-ENHANCED MULTI-MATERIAL FORM MEASUREMENT FOR ADDITIVE MATERIALS

P. Stavroulakis*, O. Davies*, G. Tzimiropoulos†, and R. K. Leach*

*Manufacturing Metrology Team, University of Nottingham, Nottingham, UK
†School of Computer Science, University of Nottingham, UK

Abstract

The range of materials used in additive manufacturing (AM) is ever growing. This puts pressure on post-process optical non-contact form measurement systems, as different system architectures work most effectively with different types of materials and surface finishes. In this work, a data-driven artificial intelligence (AI) approach is used to recognise the material of a measured object and to fuse the measurements taken from three optical form measurement techniques, improving system performance compared to using each technique individually. More specifically, we present a form measurement system which uses AI and machine vision to enable the efficient combination of fringe projection, photogrammetry and deflectometry. The system has a target maximum permissible error of 50 μm, and the prototype demonstrates the ability to measure complex geometries of AM objects, with a maximum size of (10 × 10 × 10) cm, with minimal user input.

Keywords: artificial intelligence; data fusion; photogrammetry; fringe projection; deflectometry; segmentation network

Introduction

The aerospace, automotive and medical industries are realising the benefits of using non-contact optical three-dimensional (3D) measurement over traditional stylus-based contact methods to control their processes and perform dimensional quality control on their products [1],[2]. Depending on the technology used, non-contact optical measurement can offer most, if not all, of the following benefits: it is non-destructive, faster, acquires a larger area per measurement cycle and costs less [3]. For camera-based optical techniques especially, the ability to leverage readily available AI-based image recognition algorithms to enhance the measurement even further provides an additional benefit over stylus-based contact techniques [4]. Non-contact optical techniques, however, are not a panacea, and for some measurement scenarios contact stylus-based techniques still need to be used, as optical techniques suffer from lower accuracy, material sensitivity and much higher requirements for data handling and storage [3]. In optical systems, material sensitivity and measurement range limitations can be mitigated by changing the optical design, and hence many different solutions exist [3]. Consequently, a large number of different optical techniques for dimensional measurement have been developed to work at different length scales and for different applications [5]. The selection of the appropriate technique depends on the object size, the target measurement accuracy and the material of the object that needs to be measured.

One way to improve the functionality of 3D measurement instruments is to combine multiple techniques in one setup; however, this requires that the data be combined in one frame of reference, which adds to the processing time and complexity of the system.

In this work, we perform data fusion by using a common reference camera to employ three different camera-based measurement techniques; hence, data fusion is performed without the need to register the frames of reference to each other. The techniques employed here are fringe projection (FP), deflectometry (DM) and photogrammetry (PG). These 3D measurement techniques nominally function at operating distances of 100 mm to 1 m and at accuracies of 10 μm to 100 μm [6]. The techniques selected are ideal for many of the object sizes and material types which need to be measured in automotive, aerospace and additive manufacturing [6].

Background

The three chosen measurement techniques operate via different principles and are largely complementary to each other. The principles of operation of all three are analysed in this section.

Fringe projection

Figure 1. Schema of a fringe projection setup

Fringe projection (Figure 1) is a structured light technique which uses a projector and a camera to create a 3D point cloud of the observed scene, either by triangulation (projecting digital fringes) or by phase stepping (projecting sinusoidal fringes). It is used mainly for measuring diffusely reflecting objects [7].
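Since both the fringe projection and deflectometry subsystems described here rely on phase stepping, a brief sketch of the phase-retrieval step may be useful. The following NumPy function implements the standard N-step phase-shifting formula for evenly spaced shifts; it is a generic illustration, not the authors' code, and the image stack `imgs` is a hypothetical input.

```python
import numpy as np

def wrapped_phase(imgs):
    """Recover the wrapped phase map from N phase-shifted fringe images.

    imgs: array of shape (N, H, W); frame n is captured with the projected
    (or, for deflectometry, displayed) pattern shifted by 2*pi*n/N.
    Implements the standard least-squares N-step phase-shifting estimate.
    """
    n = imgs.shape[0]
    shifts = 2 * np.pi * np.arange(n) / n
    # Per-pixel correlation of the intensity stack with the known shifts.
    num = np.tensordot(np.sin(shifts), imgs, axes=1)
    den = np.tensordot(np.cos(shifts), imgs, axes=1)
    return np.arctan2(-num, den)  # wrapped to (-pi, pi]
```

The wrapped phase must then be unwrapped (e.g. with a two-frequency scheme) and converted to 3D coordinates through the calibrated system geometry; for deflectometry the same retrieval applies, but the unwrapped phase encodes the direction of the reflected ray, and hence surface slope, rather than height.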

Deflectometry

Figure 2. Schema of a deflectometry setup

Deflectometry (Figure 2) is a structured light technique similar to fringe projection, but instead of being projected, the fringe pattern is displayed on a TFT screen. This configuration works best for reflective objects, as a bundle of rays from each screen pixel reflects from the object into the camera via direct reflection. The types of patterns usually used are similar to those of fringe projection [8].

Photogrammetry

Figure 3. Schema of a photogrammetry setup

Photogrammetry (Figure 3) is a technique which creates a 3D point cloud by taking advantage of unique surface features that exist in two or more images of the object taken from different points of view [9]. Multiple camera views are used to reconstruct the 3D surface of an object by searching for pixel correspondences between the images taken.

In this work, the fringe projection and deflectometry systems use phase stepping (they project or display sinusoidal patterns), while the photogrammetry system performs triangulation via feature matching [10]. Fringe projection is most effective on featureless objects, whereas photogrammetry is most effective on objects with a large number of features which can easily be detected in the image by computer vision techniques (e.g. the SIFT algorithm). In terms of speed, photogrammetry usually requires a larger number of calculations to perform dense reconstruction and is, therefore, slower in reconstructing the full 3D point cloud than the other two methods. Lastly, deflectometry is most effective for highly reflective objects, in contrast to the other two techniques, which require diffusely scattering objects. Hence, by combining these three techniques, 3D surface point clouds can be acquired for reflective and diffuse objects which can be featureless or have multiple features on the surface.
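As a concrete illustration of the feature matching that underpins the photogrammetry subsystem, the sketch below detects and matches SIFT features between two views using OpenCV. It is a generic sketch of the technique rather than the authors' pipeline, and the image filenames are placeholders.

```python
import cv2

# Load two views of the object (hypothetical filenames).
img1 = cv2.imread("view_1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_2.png", cv2.IMREAD_GRAYSCALE)

# Detect SIFT keypoints and compute descriptors in each view.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors and keep matches that pass Lowe's ratio test.
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Corresponding pixel coordinates; given calibrated camera poses, these
# correspondences can be triangulated (e.g. cv2.triangulatePoints) into 3D.
pts1 = [kp1[m.queryIdx].pt for m in good]
pts2 = [kp2[m.trainIdx].pt for m in good]
```

Repeating this matching and triangulation over many views, and then densifying the result, yields the full 3D point cloud, which is why photogrammetry needs more computation for dense reconstruction than the fringe-based methods.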

Data fusion

Figure 4. Schema of the combined setup

To combine the data from all three 3D measurement techniques, we used the same reference camera for all three systems (Figure 4) and a spatio-temporal data collection scheme to collect data from different parts of the image at different times. This scheme has two parts: an image segmentation strategy and a temporal synchronisation strategy.

Image segmentation strategy

The image segmentation strategy employs deep learning through a segmentation network (SegNet) trained to identify different materials in an image of the scene taken by the reference camera. Matlab was used as the framework for building, training and testing the network. The network was an uninitialised semantic image segmentation network based on the SegNet architecture [11], consisting of encoding layers that downsample the input image by a factor of four and corresponding decoding layers that upsample back to the input size; the RGB images were 201 × 152 pixels, preserving the original aspect ratio whilst being small enough to fit the mini-batch into GPU memory during training. Three material classes were trained by acquiring 400 images of each material, and ground truth labels were obtained by removing the green background through colour thresholding. The class weighting of the final classification layer used inverse frequency weighting of the total pixel count of each class in the ground truth images. Of the 1200 images, 81% were used for training, with 9% set aside for validation and 10% for final testing. Random image augmentation was also used to artificially increase the number of training images, consisting of random rotations of up to 360°, random horizontal and vertical reflections, and random translations of up to ten pixels horizontally and vertically. The network was trained with an initial learning rate of 0.0005 and a mini-batch size of thirty-two images.
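For concreteness, the following PyTorch sketch mirrors the training configuration reported above (inverse-frequency class weights, the stated augmentations, an initial learning rate of 0.0005 and a mini-batch size of 32). The paper's network was built and trained in Matlab, so this is only an illustrative re-expression: `model` stands in for any SegNet-style encoder-decoder, and the four-class assumption (three materials plus background) and the data loader are hypothetical.

```python
import torch
import torch.nn as nn
import torchvision.transforms as T

# Augmentations as described above: rotations up to 360 degrees, horizontal
# and vertical reflections, and translations of up to ten pixels (expressed
# as fractions of the 201 x 152 image size). NOTE: for segmentation, each
# geometric augmentation must be applied with identical random parameters to
# the image and its label mask (torchvision.transforms.v2 supports this);
# the image-only pipeline is shown here for brevity.
augment = T.Compose([
    T.RandomRotation(degrees=360),
    T.RandomHorizontalFlip(),
    T.RandomVerticalFlip(),
    T.RandomAffine(degrees=0, translate=(10 / 201, 10 / 152)),
])

def inverse_frequency_weights(all_labels, num_classes):
    """Class weights from total per-class pixel counts in the ground truth."""
    counts = torch.bincount(all_labels.flatten(), minlength=num_classes).float()
    return counts.sum() / (num_classes * counts.clamp(min=1))

def train(model, loader, all_labels, num_classes=4, epochs=30):
    # num_classes = 3 materials + background is an assumption; the paper
    # reports three trained material classes on a green background.
    weights = inverse_frequency_weights(all_labels, num_classes)
    criterion = nn.CrossEntropyLoss(weight=weights)  # per-pixel loss
    optimiser = torch.optim.SGD(model.parameters(), lr=5e-4, momentum=0.9)
    for _ in range(epochs):
        for images, labels in loader:   # mini-batches of 32 RGB images
            optimiser.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimiser.step()
```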

In this manner, all the pixels in an image could be labelled with a predicted material class using the trained network, and a mask for each class was created (Figure 5a-b). As the reference image pixels have essentially been classified according to material, the point clouds could then be created only in specific regions by applying the correct image mask for each measurement method. Therefore, to optimise the measurement success and associate each measurement technique with a specific material, the Ti-6Al-4V mask was used when measuring via photogrammetry (Figure 5c), the carbon fibre mask was used when measuring with deflectometry (Figure 5d) and the polymer mask was used when measuring with fringe projection (Figure 5e).

Figure 5. Results of image segmentation of a scene containing polymer, titanium and carbon fibre objects, showing (a) the initial image, (b) the image segmentation overlay results, and the three masks created for (c) titanium, (d) carbon fibre and (e) polymer

Temporal synchronisation scheme

The temporal synchronisation scheme consists of cycling all three measurement subsystems (fringe projection, deflectometry and photogrammetry) in the time domain by activating the modules required by each technique (projector, TFT screen, rotation stage) at the appropriate time and taking measurements of the scene. Before the measurement techniques are cycled, however, an image of the scene is taken in order to segment the regions of the image which will be used by each technique. In this way, three measurement masks are made and used by each technique in turn to reconstruct the selected group of pixels that has the best chance of being measured successfully, ignoring the rest. As all three point clouds are taken by the same reference camera, they are automatically placed in the same frame of reference. The partial 3D data can, therefore, be automatically fused together to form an optimised 3D point cloud of the scene containing all three objects.
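The cycle-then-fuse logic just described can be expressed compactly. The sketch below is a hypothetical orchestrator written for illustration, as the paper does not specify its software interfaces: each technique is represented by a callable that drives its hardware and returns a depth map in the reference camera frame, and the boolean masks come from the segmentation step.

```python
import numpy as np

def fuse_measurements(techniques, masks, intrinsics):
    """Cycle the measurement techniques and fuse their masked results.

    techniques: dict mapping a technique name to a function that activates
        the relevant hardware (projector, TFT screen, rotation stage) and
        returns a depth map of shape (H, W) in the reference camera frame.
    masks: dict mapping the same names to boolean (H, W) material masks
        predicted by the segmentation network.
    intrinsics: 3x3 reference camera matrix used to back-project pixels.
    """
    fx, fy = intrinsics[0, 0], intrinsics[1, 1]
    cx, cy = intrinsics[0, 2], intrinsics[1, 2]
    points = []
    for name, measure in techniques.items():
        depth = measure()                 # run this subsystem's acquisition
        keep = masks[name] & np.isfinite(depth)
        v, u = np.nonzero(keep)           # pixel rows/cols to reconstruct
        z = depth[v, u]
        # Back-project only the masked pixels; every technique shares the
        # reference camera, so no inter-frame registration is needed.
        points.append(np.column_stack(((u - cx) * z / fx,
                                       (v - cy) * z / fy, z)))
    return np.vstack(points)              # fused point cloud, shape (N, 3)
```

A call might look like `fuse_measurements({'photogrammetry': measure_pg, 'deflectometry': measure_dm, 'fringe_projection': measure_fp}, masks, K)`, with the Ti-6Al-4V, carbon fibre and polymer masks assigned as in Figure 5 (the `measure_*` functions are hypothetical stand-ins for the hardware routines).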

This temporal synchronisation strategy allows for optimised data fusion, minimisation of measurement data and reduction of noise, as each measurement technique works best for a specific material.

Results

The challenges encountered with the image segmentation strategy were primarily related to spurious reflections and ambient light changes. Below, we identify three cases which needed to be taken into account when training and segmenting the images.

Case 1: Reflections from objects in the scene

In Figure 6, the effect of reflections from objects of a different class is shown. Regions of the carbon fibre object are mistaken for parts of the metal material class.

Figure 6. Mistakes in prediction caused by reflections of other objects from the carbon fibre surface, with (a) the initial image and (b) the image segmentation overlay results.

Case 2: Reflections from environmental lighting

In Figure 7, the effect of ambient lamp reflections is shown. Regions of the carbon fibre object are mistaken for parts of the polymer and titanium material classes.

Figure 7. Mistakes in prediction caused by ambient lamp reflections from the carbon fibre surface, with (a) the initial image and (b) the image segmentation overlay results.

Case 3: Illuminating objects with a different lighting scheme

In Figure 8, the effect of applying a lighting scheme different from that used during training is shown. Regions of the carbon fibre object are mistaken for part of the background, and the shadows cast by the titanium and polymer objects are mistaken for part of the titanium material class.

Figure 8. Mistakes in prediction caused by illuminating the objects with a different lighting scheme (projector) from that used during the training phase (ambient room lighting).

Discussion

In this work, an investigation into detecting different AM object materials via AI segmentation has been presented. Specifically, the detection of Ti-6Al-4V, nylon polymer and carbon fibre was investigated. Several problems were identified in the segmentation of the images, related to the discrepancies in lighting and reflections which existed when acquiring images from the experimental setup. In particular, reflections from other objects, reflections from ambient light sources and significant changes in lighting create errors in the segmentation of the images by labelling a large proportion of pixels erroneously. Future work will include controlling the lighting and reflections when creating the images by shielding the samples from ambient lighting, training the network with more illumination conditions to improve robustness, and customising the selection of network parameters to enable more accurate segmentation of the images.

Acknowledgements

This work is sponsored by EPSRC grants EP/M008983/1 and EP/L016567/1 and the EU Framework Programme for Research and Innovation Horizon 2020, Grant Agreement No 721383.

References

[1] J. E. Muelaner and P. G. Maropoulos, Large volume metrology technologies for the light controlled factory, Procedia CIRP, vol. 25, pp. 169-176, 2014.
[2] K. Harding, Handbook of Optical Dimensional Metrology. Taylor & Francis, 2013.
[3] V. Carbone, M. Carocci, E. Savio, G. Sansoni, and L. De Chiffre, Combination of a vision system and a coordinate measuring machine for the reverse engineering of freeform surfaces, Int. J. Adv. Manuf. Technol., vol. 17, no. 4, pp. 263-271, Jan. 2001.
[4] R. K. Leach, N. Senin, X. Feng, P. Stavroulakis, R. Su, W. P. Syam, and T. Widjanarko, Information-rich metrology: changing the game, Commer. Micro Manuf., vol. 8, pp. 33-39, 2017.
[5] G. Sansoni, M. Trebeschi, and F. Docchio, State-of-the-art and applications of 3D imaging sensors in industry, cultural heritage, medicine, and criminal investigation, Sensors, vol. 9, no. 1, pp. 568-601, 2009.
[6] P. I. Stavroulakis and R. K. Leach, Invited Review Article: Review of post-process optical form metrology for industrial-grade metal additive manufactured parts, Rev. Sci. Instrum., vol. 87, no. 4, p. 041101, 2016.
[7] S. Zhang, High-Speed 3D Imaging with Digital Fringe Projection Techniques. CRC Press, 2016.
[8] M. Servin, A. Quiroga, and M. Padilla, Fringe Pattern Analysis for Optical Metrology. Wiley-VCH, 2014.
[9] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd ed. Cambridge University Press, 2004.
[10] D. Sims-Waterhouse, P. Bointon, S. Piano, and R. K. Leach, Experimental comparison of photogrammetry for additive manufactured parts with and without laser speckle projection, in Proc. SPIE 10329, Optical Measurement Systems for Industrial Inspection X, 2017, p. 103290W.
[11] V. Badrinarayanan, A. Kendall, and R. Cipolla, SegNet: A deep convolutional encoder-decoder architecture for image segmentation, IEEE Trans. Pattern Anal. Mach. Intell., vol. 39, no. 12, pp. 2481-2495, Dec. 2017.