A PHOTOGRAMMETRIC METHOD FOR ENHANCING THE DETECTION OF BONE FRAGMENTS AND OTHER HAZARD MATERIALS IN CHICKEN FILETS

S. Barnea a,*, V. Alchanatis b, H. Stern a

a Dept. of Industrial Engineering and Management, Ben Gurion University of the Negev, 84105 Beer-Sheva, Israel - (barneas, helman)@bgumail.bgu.ac.il
b Inst. of Agricultural Engineering, The Volcani Center - ARO, 50250 Bet Dagan, Israel - victor@volcani.agri.gov.il

Commission TS, WG V/I

KEY WORDS: Close Range, DEM, Automation, Color, Industry, Photogrammetry

ABSTRACT:

The research suggests a system that automatically generates a Digital Elevation Model (DEM) of a chicken filet. The DEM is generated by close range photogrammetry methods, using stereoscopic images of the chicken filet. A major problem of the photogrammetric process is the uniformity of the chicken filet's color and texture, which makes it hard to find matching points in the two stereo images. The research suggests a method to enhance the accuracy of the photogrammetric process, based on the projection of a continuous multi-color pattern on the chicken filet. The projected pattern is iteratively designed based on the object's optical and textural features and on the color definition abilities of the image acquisition and projection systems. System measurement accuracies were within the specified limit of 0.5 mm in height.

1. INTRODUCTION

Accurate detection of bone fragments and other hazards in deboned poultry meat is important to ensure food quality and safety for consumers. Automatic machine vision detection can potentially reduce the need for intensive manual inspection on processing lines. Current X-ray technology has the potential to succeed with low false-detection errors. However, the X-ray energy reaching the image detector varies with uneven meat thickness. Differences in X-ray absorption due to meat unevenness inevitably produce false patterns in X-ray images and make it hard to distinguish between hazardous inclusions and normal meat patterns.
Although methods of local processing of image intensity can be used, varying meat thickness remains a major limitation for detecting hazardous materials by processing X-ray images alone. An approach to overcome the aforementioned difficulties is to use an X-ray imager in conjunction with a chicken filet thickness algorithm, yielding a thickness-invariant recognition system.

2. PROBLEM DEFINITION AND OBJECTIVE

The present work addresses the problem of automatically generating a 3D model of a chicken filet as it passes under an imaging system on a conveyor. The 3D model of the chicken filet is expressed in terms of a digital elevation model (DEM). The DEM should satisfy the following specifications:
1. The DEM should represent an area of 300 by 300 millimeters. This is the area covered by the X-ray imaging system.
2. Each element of the DEM matrix will represent 1 mm² in world coordinates, which means that the DEM matrix dimensions will be 300 by 300. This resolution is satisfactory for detection of most hazardous materials.
3. The DEM error at each point should be less than 0.5 mm in height.
4. The speed of the conveyor is 1 foot (304.8 mm)/sec, which means that the time required to generate a single DEM must be less than a second.

3. METHODS FOR GENERATING A DEM

3.1 Moiré Pattern

A powerful way of describing a three-dimensional shape is to draw contour lines. A moiré pattern is an interference pattern resulting from two superimposed gratings. The geometry of the moiré fringes is determined by the spacing and orientation of the grids. If we know the grids and can image the moiré pattern formed on the surface, we can determine the topography of the surface. There are a number of ways to produce moiré patterns and to use them to determine object topography. In the shadow moiré technique, a reference grid is placed close to the object and illuminated so that it casts a shadow on the object. The shadow of the reference grid is distorted by the object's topography.
If the light source and viewing point are at the same distance from the reference grid, the moiré fringes represent contours of depth with respect to the reference grid.
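Under this equal-distance condition, each successive moiré fringe corresponds to a fixed depth increment. As an illustration only (the paper does not give this formula explicitly), the classic shadow-moiré depth relation can be sketched as follows; all parameter names are ours:

```python
import math

def shadow_moire_depth(fringe_order, grating_pitch_mm, illum_angle_deg, view_angle_deg):
    """Depth below the reference grating at the N-th moiré fringe.

    Classic shadow-moire relation (hypothetical parameter names):
        z_N = N * p / (tan(alpha) + tan(beta))
    with grating pitch p, illumination angle alpha and viewing angle beta,
    both measured from the grating normal.
    """
    return (fringe_order * grating_pitch_mm
            / (math.tan(math.radians(illum_angle_deg))
               + math.tan(math.radians(view_angle_deg))))

# e.g. a 1 mm pitch grating with 45-degree illumination and viewing angles
depth_per_fringe = shadow_moire_depth(1, 1.0, 45.0, 45.0)  # ~0.5 mm per fringe
```

With these numbers each fringe corresponds to about 0.5 mm of depth, which illustrates how the grating geometry alone fixes the contour interval.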

In projection moiré, a reference grid is projected onto the object. The projection is distorted by the object topography. This distorted version of the grid is imaged through the reference grid, and a moiré pattern is formed on the image plane. The main disadvantage of the methodology is that the depth of the surface under test is limited to not more than several tens of the wavelengths of the light used (Yu et al., 1997; Paakkari, 1998; Mikhail et al., 2001; Karara, 1989).

3.2 Light Striping

Light striping is a particular case of the structured light technique for range sensing. A light striping range sensor consists of a line projector and a camera that is displaced from the light source. The line projector casts a sheet of light onto the scene. The projection of the plane on the surface results in a line of light, which is sensed by the camera. The light plane itself has a known position in world coordinates, which can be determined by measuring the geometry of the projection device or by direct measurement of the resulting plane of light. Every point in the image plane determines a single line of sight in three-dimensional space upon which the world point that produced the image point must lie. This methodology requires a large number of images, since each image contributes a single profile of the object. Generating a DEM with the desired resolution and speed would require capturing and processing 300 frames per second (Mikhail et al., 2001; Karara, 1989).

3.3 Coded Light Projection Techniques

Among the ranging techniques, stereovision is based on imaging the scene from two or more known points of view and then finding pixel correspondences between the different images in order to triangulate the 3D position. Triangulation is possible if the cameras are calibrated (Salvi et al., 2002a). However, finding the correspondences poses difficulties even after taking into account epipolar constraints.
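The triangulation underlying the light-striping method of Section 3.2 — intersecting each pixel's line of sight with the calibrated light plane — can be sketched as follows. This is a minimal illustration with names of our own choosing; vectors are plain (x, y, z) tuples:

```python
def intersect_ray_with_light_plane(ray_origin, ray_dir, plane_point, plane_normal):
    """Intersect a camera ray with the known plane of projected light.

    Each image pixel defines a ray in world coordinates; the 3D point on
    the light stripe is where that ray meets the calibrated light plane.
    Returns None if the ray is (numerically) parallel to the plane.
    """
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    denom = dot(ray_dir, plane_normal)
    if abs(denom) < 1e-12:
        return None  # line of sight never crosses the light plane
    # Distance along the ray to the plane
    t = dot([p - o for p, o in zip(plane_point, ray_origin)], plane_normal) / denom
    return tuple(o + t * d for o, d in zip(ray_origin, ray_dir))

# Camera at the origin looking roughly along +Z; light plane at Z = 100 mm
point = intersect_ray_with_light_plane((0, 0, 0), (0.1, 0.0, 1.0),
                                       (0, 0, 100), (0, 0, 1))  # ~ (10.0, 0.0, 100.0)
```

Sweeping the plane (or moving the object on the conveyor) and repeating this intersection per pixel yields one height profile per frame, which is exactly why the method needs so many images.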
Coded structured light consists of substituting one of the cameras with a device that projects a light pattern onto the surface. Since each point in the pattern is encoded in a way that identifies its coordinates, the correspondence problem is solved without geometrical constraints. With an appropriate light pattern, correspondences for a large number of pixels can be found. Thus, coded structured light may be used as a solution to simplify the inherent problem of finding correspondences in classical stereovision systems (Pagès, 2004).

4. OBJECTIVE

Considering the advantages and disadvantages of the methodologies presented above, the specific objective of the present work was to develop a DEM generation system for chicken filets using a coded light projection technique.

5. DESCRIPTION OF THE SYSTEM

The imaging system contains several major hardware parts (Fig. 1): (a) a projection unit including the projector itself, (b) a computer which creates the patterns that are projected on the object and (c) two cameras that capture a stereoscopic view of the object. The images from the two cameras are sent to a frame grabber in the host computer, where the analysis is performed.

Figure 1. Major hardware parts

In order to calculate the object's height, the first task is to determine the point in space where each camera's center of projection is located, and its orientation. This process is essential in order to build the 3-D model of the object and to find the epipolar lines. The aim of this calibration process is to find the six extrinsic parameters of each of the cameras. These parameters comprise three location parameters (Xt, Yt, Zt) in millimeters and three orientation parameters (phi, epsilon, kappa) in radians. In order to calculate the parameters, an image of a calibration object with marked points of known world coordinates was acquired by each of the cameras. The SPR (Single Photo Resection) method was used to estimate the extrinsic parameters of each camera (Mikhail et al.
2001). After obtaining an estimate of the cameras' extrinsic parameters, the height of any point on the object is calculated by finding its corresponding pixels in the two images and applying the principle of triangulation. In order to overcome the correspondence problem, a continuous color pattern is projected on the object.

6. THE PROJECTED PATTERN

Several methods for overcoming the correspondence problem using projected color patterns can be found in the literature (Tajima et al., 1990; Geng, 1996; Sato, 1997; Wust and Capson, 1991). Those studies present global methods, i.e., projection patterns designed to fit any object; such objects may be multi-color and multi-texture bodies. Caspi et al. (1998) analyzed the illumination model of a structured light system. This model takes into account the light spectrum of the LCD projector, the spectral response of a 3-CCD camera and the surface reflectance. The main benefit of this model is that it assumes a constant reflectance for every scene point in the three RGB channels (Pagès, 2004). Preliminary tests showed that when transforming an image of a color projection captured in RGB to HSL (Hue, Saturation, Lightness), the hue channel contains most of the information, while the other two channels contain mostly noise. Caspi's work also defined a noise immunity factor α and presented the relationship between α and the number of possible strips. The number of possible strips in the projected pattern depends on the maximum accuracy under the constraints of the projection / sensing equipment. The structured light technique presented in the present work can cover different values of α across the different RGB intensity values. The sensitivity of the CCD sensors changes across the spectral range. The projected patterns can be
squeezed or extended in order to include the regions where the system can efficiently differentiate between color levels.

The colors and texture of chicken filets are relatively constant among filets, and are also uniform across the filet itself. The system presented in this work addresses the problem of matching points in stereoscopic images for objects with uniform color and texture. The main idea is to project a non-uniformly varying color pattern. In some wavelengths the spectrum of the reflected color will change rapidly, while in some ranges the color spectrum will change slowly. This change depends on the object, on the imaging system and on the projection system. Projection of a monotonically, linearly increasing hue value on the object does not result in the same linear increase in the captured image. This is due to the object's chromatic characteristics and the features of the capturing / projection systems. Analysis of the captured pattern showed that hue values which are absorbed by the object look similar in the captured pattern, whereas wavelengths which are reflected by the object body are well detected as changing in the captured pattern. The ability to detect small amounts of change depends on the projection / capturing systems, whose sensitivity is not constant and may change from one wavelength to another. Assuming that the best-fit projection pattern is the one that is captured as linearly changing across all the hue values, the specific objective of this work was narrowed down to developing a methodology for constructing a projection pattern that is captured as linearly changing across all the hue values. For a given projection system, image capturing system and chromatic / textural object characteristics, a projected pattern that leads to a linear captured pattern can be found. To find the best projection pattern an iterative method was implemented. Let i be the iteration number. Let L be a pattern with linearly varying hue values. Let Pi be the projected hue pattern at iteration i. Set i = 1 and P1 = L.
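This iterative search can be sketched in code. `capture_hue` is a hypothetical callback standing in for the projector, camera and hue-extraction pipeline; the update rule P(i+1) = P(i) - α·D(i) and the default α = 0.7 follow the description in the paper:

```python
def correct_pattern(capture_hue, linear_target, alpha=0.7, eps=0.01, max_iter=50):
    """Iteratively adjust the projected hue pattern until the captured
    pattern is (almost) the desired linear ramp.

    capture_hue(pattern) -> captured hue values: a hypothetical callback
    standing in for projection, acquisition and hue extraction.
    Update rule: P[i+1] = P[i] - alpha * (C[i] - L).
    """
    pattern = list(linear_target)          # P1 = L
    for _ in range(max_iter):
        captured = capture_hue(pattern)    # C_i, as reflected by the object
        delta = [c - l for c, l in zip(captured, linear_target)]  # D_i = C_i - L
        if max(abs(d) for d in delta) < eps:
            break                          # captured pattern is linear enough
        pattern = [p - alpha * d for p, d in zip(pattern, delta)]
    return pattern
```

For example, with a simulated system that attenuates and offsets the hue (`capture = lambda p: [0.5 * x + 0.1 for x in p]`), the returned pattern is the pre-distorted projection whose capture matches the linear ramp to within eps.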
The stages of the iterative method are:
1. Project the pattern Pi.
2. Capture the pattern of hue values as reflected by the object (Ci).
3. Compute the difference between the captured pattern Ci and the linear pattern L. Denote the difference Di, where Di = Ci - L.
4. If all elements of Di are smaller than a small value ε, then stop; else set Pi+1 = Pi - αDi, where α is the learning rate, and go to step 1.

Figure 2. Iteration 1 of the iterative process: projected pattern (P1) and L, captured pattern (C1), corrected pattern (P2)

Figure 3. Iteration 11 of the iterative process: captured pattern (C11)

7. EXPERIMENTAL RESULTS

The method was initially tested on a white board plane. A linearly varying hue pattern was projected on the surface of the board and two cameras captured the scene. Figure 2 shows the reflected color after the first iteration. The X-axis is the pixel number across one cycle of the varying hue (from red to violet) and the Y-axis is the hue value. In the first iteration L = P1, i.e., the projected pattern is the desired linearly varying pattern. C1 is the resulting pattern captured by the imaging system. P2 is calculated according to P2 = P1 - αD1. The learning rate was set to 0.7. In the next iteration the projection was P2. Figure 3 shows the results at iteration 11, after the process converged. One can see that the captured pattern almost coincides with L, which means that the hue of the captured pattern varies almost linearly.

Figure 4. Error rate at iterations 1, 5 and 11 of the iterative process

Figures 4 A, B and C show the DEM profiles generated at iterations 1, 5 and 11 respectively. The X-axis is the DEM profile and the Y-axis is the pixel height in millimeters. The improvement in the DEM accuracy of the flat white board along the iterations is clearly seen: as the iterative process
progresses, the height of more pixels is closer to zero (flat board). The overall accuracy of the algorithm was evaluated at each iteration as the percentage of pixels whose height was within the required accuracy of 0.5 mm. At iterations 1, 5 and 11 the accuracy was 64%, 81% and 98% respectively. The same process was applied to a real chicken filet. The number of iterations until convergence, using the same parameters (ε, α), was similar. The resulting projected pattern after the 11th iteration was different from the one obtained in the white-board experiments. As expected, because of the chromatic characteristics of the chicken filet, the span of the red band across the pattern is short compared with its span in the white-board experiment (Fig. 5). In the white-board experiment the curve lies below the linear line, which means a slow change and good differentiation ability of the sensor, whereas in the chicken filet experiment most of it lies above the line (it increases sharply at the beginning of the pattern), which means poor differentiation ability.

Figure 5. Color projection pattern after 11 iterations for the white board (blue line) and the chicken filet (green line)

Figure 6. Chicken filet with the color pattern projected on it during the iterative process

Figure 6 shows the chicken filet with the color pattern projected on it. The cyan line is the sampling line; across this line the hue values are measured during the iterative process. Figure 7 shows the error rate of a complex-topography body, similar to a chicken filet, that was measured both by the proposed system and manually by a 3-D digitizer. The X and Y axes represent world coordinates in millimeters. The bar on the right-hand side of the figure represents the error rate. 10,000 sample points were measured across the body; 97.5% were within the error rate of ±0.5 mm.

Figure 7. Error rate of a measured body

8. SUMMARY

This research applies photogrammetric / machine vision methodologies in the field of agricultural engineering.
Customization of the methods was done as required by the special demands of the field. A method for generating color patterns of structured light was presented. This method can be used for industrial applications where the texture and color of the measured object are uniform. A prototype of the suggested system was built and validated. The tests showed that measurement of chicken filet thickness is feasible at the accuracy level required for detecting bone fragments and other hazardous materials.
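As a complement to Section 5 — where corresponding pixels in the two calibrated images are triangulated to obtain the object height — the triangulation step can be sketched as the midpoint of the shortest segment between the two lines of sight. This is a standard closed-form construction, not code from the paper, and all names are illustrative:

```python
def triangulate_midpoint(c1, d1, c2, d2):
    """Triangulate a 3D point from two camera rays.

    c1, c2: camera centers of projection; d1, d2: ray directions through
    the corresponding pixels (need not be unit vectors). Returns the
    midpoint of the shortest segment between the two rays, or None if
    the rays are parallel.
    """
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    w0 = [p - q for p, q in zip(c1, c2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b
    if abs(denom) < 1e-12:
        return None  # parallel lines of sight: no unique triangulation
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1 = [p + t1 * v for p, v in zip(c1, d1)]  # closest point on ray 1
    p2 = [p + t2 * v for p, v in zip(c2, d2)]  # closest point on ray 2
    return tuple((x + y) / 2 for x, y in zip(p1, p2))

# Two cameras 200 mm apart, both aimed at a point 200 mm above the baseline
pt = triangulate_midpoint((-100, 0, 0), (100, 0, 200),
                          (100, 0, 0), (-100, 0, 200))  # -> (0.0, 0.0, 200.0)
```

With noisy correspondences the two rays rarely intersect exactly; the midpoint of their closest approach is a common, numerically simple estimate of the world point.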

9. REFERENCES

Caspi, D., Kiryati, N. and Shamir, J., 1998. Range imaging with adaptive color structured light. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(5), pp. 470-480.

Geng, Z.J., 1996. Rainbow 3-dimensional camera: new concept of high-speed 3-dimensional vision systems. Optical Engineering, 35(2), pp. 376-383.

Karara, H.M. (ed.), 1989. Non-Topographic Photogrammetry. American Society for Photogrammetry and Remote Sensing, Falls Church, VA.

Mikhail, E.M., Bethel, J.S. and McGlone, J.C., 2001. Introduction to Modern Photogrammetry. John Wiley & Sons, New York.

Paakkari, J., 1998. On-line Flatness Measurement of Large Steel Plates Using Moiré Topography. University of Oulu, Finland.

Salvi, J., Armangué, X. and Batlle, J., 2002a. A comparative review of camera calibrating methods with accuracy evaluation. Pattern Recognition, 35(7), pp. 1617-1635.

Salvi, J., Pagès, J. and Batlle, J., 2004. Pattern codification strategies in structured light systems. Pattern Recognition, 37(4), pp. 827-849.

Sato, T., 1999. Multispectral pattern projection range finder. In: Proceedings of the Conference on Three-Dimensional Image Capture and Applications II, Vol. 3640, SPIE, San Jose, California, pp. 28-37.

Tajima, J. and Iwakawa, M., 1990. 3-D data acquisition by rainbow range finder. In: International Conference on Pattern Recognition, pp. 309-313.

Wust, C. and Capson, D.W., 1991. Surface profile measurement using color fringe projection. Machine Vision and Applications, 4, pp. 193-203.

Yu, W.M., Harlock, S.C., Leaf, G.A.V. and Yeung, K.W., 1997. Instrumental design for capturing three-dimensional moiré images. International Journal of Clothing Science and Technology, 9, pp. 301-310.