Using Edge Detection in Machine Vision Gauging Applications


Application Note 125
John Hanks

Introduction

This application note introduces common edge-detection software strategies for applications such as inspection for missing parts, measurement of critical part dimensions using gauging, and identification and verification of electronic user interface displays.

What You Will Learn

- Nine steps for machine vision success
- About common inspection applications
- How to use gauging techniques to measure tolerances
- How to inspect for missing components
- How to create an alignment application
- About identification systems for LCDs, meters, and bar codes

Nine Steps for Machine Vision Success

Clearly understanding what is and what is not a defect in your application, as well as integrating a calibration phase into your machine vision system, are critical steps for the success of your system. To help you in your machine vision system design, the step-by-step guide below reviews the important considerations when creating a machine vision system. The steps are designed to give you an overview of the issues to consider when developing a system.

1. Identify all defects. Clearly understand what is a good and a bad part. Rank defects based on the frequency of their occurrence to quantify what makes a part bad.

2. Calculate the field of view (FOV). Select the camera and lens to inspect the smallest defect that may occur. Can a human inspector see the defect? If so, 8-bit analog cameras will typically work; if not, digital cameras may be necessary. For applications where the part is moving, select an appropriate camera so that the image is not blurred.

3. Select the lighting. Choose a lighting technique that gives the maximum contrast for the defects and features of interest. Experiment with directional light, backlighting, ring lights, and polarizing lenses. Which lighting technique highlights the defect the most?

4. Calibrate. Calibrate the lighting and camera system.
Quantify the state of the lighting and camera as a system before testing the inspection system. Determine whether the lighting in the field of view is homogeneous, and ensure that the background does not change with time. Calculate the average pixel gray-scale value and standard deviation for the image, and inspect the calibration image with line profiles to determine whether there are lighting gradients. Guarantee lighting, background, and camera consistency.

Product and company names are trademarks or trade names of their respective companies. 341547A-01, Copyright 1998 National Instruments Corporation. All rights reserved. August 1998.

5. Compensate and correct. If needed, correct for poor lighting with software. Condition the image scene so that it is easier to process. If you cannot create a consistent, homogeneously illuminated scene, use software to correct for the poor lighting.

6. Identify a fiducial. Select a unique feature that is not a defect but is always present in the image. This unique feature is used as a point of reference, or fiducial; inspections are performed at offsets from it. If the fiducial is not present, the part being inspected is bad.

7. Locate features. Select a feature-locating technique based on the features and speed requirements of your application. If the feature is of a known size and orientation, use grayscale pattern matching. In general, if the feature is of a known shape but unknown size, use binary shape matching. If the feature is of a known area and perimeter but varying orientation, use blob analysis.

8. Test the inspection. Test the inspection strategy with ideal images and defects. Then test it with images that show atypical defects.

9. Automate. Include lighting and camera calibration in the automated inspection system.

Common Inspection Applications

Driven by advancements in converging technologies such as faster CPUs, robust operating systems, the PCI local bus, and user-friendly image processing and image acquisition hardware and software, PC-based vision systems are becoming commonplace in industry. In industrial applications, which make up the largest segment of the vision market, vision is used to test incoming parts for quality and to detect missing components. Computer vision systems are more reliable and cost-effective than humans in the high-speed, detailed, repetitive manufacturing processes required in making semiconductor, electronic, medical, pharmaceutical, and computer products.
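The lighting checks from calibration step 4 above (average gray value, standard deviation, and a line profile to reveal gradients) can be sketched in a few lines of Python. The helper name and report format here are hypothetical, not part of IMAQ Vision:

```python
from statistics import mean, pstdev

def lighting_report(image, row):
    """Summarize lighting consistency for a calibration image.

    `image` is a 2-D list of 8-bit gray values.  Returns the overall
    average gray value and standard deviation, plus the gray-level span
    along one horizontal line profile; a wide span suggests a lighting
    gradient across the field of view.  (Hypothetical helper, not an
    IMAQ Vision function.)
    """
    pixels = [p for line in image for p in line]
    profile = image[row]
    return {
        "mean": mean(pixels),
        "stdev": pstdev(pixels),
        "profile_span": max(profile) - min(profile),
    }

# A homogeneously lit scene vs. one with a left-to-right gradient:
flat = [[128] * 8 for _ in range(4)]
graded = [[40 + 20 * x for x in range(8)] for _ in range(4)]
print(lighting_report(flat, 2)["profile_span"])    # 0
print(lighting_report(graded, 2)["profile_span"])  # 140
```

A real calibration phase would also log these numbers over time to catch background drift between inspection runs.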
Industrial inspection of electronic components, such as connectors, switches, LCD/LED displays, relays, and watch parts, is one of the key applications for PC-based vision systems. Because these components are manufactured in high quantity and are small, human inspection is tedious and time-consuming. Vision systems, on the other hand, can perform such tasks robustly.

Another growing application area for vision systems in industry is telecommunications and computer peripherals. Consumer products, such as pagers, printers, monitors, cellular phones, and disk drives and components, must pass the quality standards required by ISO 9000. Vision systems can assist in the ISO 9000 certification process by recording fault incidents. They can also be used to verify the quality of a vendor's product.

Overall, many machine vision applications can be classified into four areas:

1. Measuring: gauging a component and taking tolerance measurements
2. Part present/not present
3. Alignment: determining the orientation and position of a part
4. Identification: reading a bar code, seven-segment display, meter, or written words

Parameters That Describe an Edge

Figure 1. Diagram Describing Edge-Detection Parameters (edge location, contrast, filter width, and steepness, in pixels)

The IMAQ Vision Edge Tool VI uses three parameters (contrast, width, and steepness) to calculate the location of edges along a path within the image defined by pixel coordinates. Edges can occur on lines or arbitrary regions of interest. The contrast parameter specifies the threshold for the contrast of the edge; only edges with a contrast greater than the specified value are used in the detection process. Contrast is defined as the difference between the average pixel intensity before the edge and the average pixel intensity after the edge. The filter width specifies the number of pixels that are averaged to find the contrast on each side of the edge. The steepness specifies the slope of the edge; this value represents the number of pixels that correspond to the transition area of the edge. For an edge to be located in the line profile using the filter width and steepness settings, the edge contrast between foreground and background must be greater than the contrast setting. Edge locations can be calculated to subpixel accuracy using quadratic or cubic spline interpolation. The subpixel accuracy specifies the number of samples that are obtained from a pixel; for example, a subpixel accuracy of one fourth specifies that each pixel is split into four subpixels.

How to Use Gauging Techniques to Measure Tolerances

In-Line Gauging Applications

Gauging refers to making critical measurements such as lengths, diameters, angles, and counts to determine if the product is manufactured correctly. If the gauged parameter does not fall within tolerance limits, the component or part is rejected. Gauging is used both in line and off line in production. In in-line processes, each component is inspected as it is manufactured. In-line gauging inspection is often used in mechanical assembly verification, electronic packaging inspection, container inspection, glass vial inspection, and electronic connector inspection.
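The contrast/width/steepness behavior described above can be sketched in Python. This is an illustration of the idea, not National Instruments' implementation; the function name and the edge-location convention are assumptions:

```python
def find_edges(profile, contrast=30, width=3, steepness=2):
    """Locate edges along a 1-D line profile.

    Mirrors the three IMAQ Edge Tool parameters: `contrast` is the
    minimum difference between the averages of `width` pixels on either
    side of the transition, and `steepness` is the number of pixels the
    transition itself may span.  A sketch of the idea, not NI's
    implementation.
    """
    edges = []
    i = width
    while i + steepness + width <= len(profile):
        before = sum(profile[i - width:i]) / width
        after = sum(profile[i + steepness:i + steepness + width]) / width
        if abs(after - before) >= contrast:
            edges.append(i + steepness)   # first pixel after the transition
            i += steepness + width        # skip past this edge
        else:
            i += 1
    return edges

# A dark-to-bright step is reported once, at the start of the bright region:
print(find_edges([10, 10, 10, 10, 10, 200, 200, 200, 200, 200, 200, 200]))  # [5]
```

Subpixel refinement (the quadratic or spline interpolation mentioned above) would fit a curve through the transition pixels rather than returning an integer index.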
Off-Line Gauging Applications

Gauging applications often measure the quality of products off line. A sample of products is extracted from the production line, and the measured distances between features on the object are used to determine if the sample falls within a tolerance range. Using gauging techniques, you can measure the distance between blobs and edges in binary images and easily quantify image measurements.

How to Inspect for Missing Components

Part present/not present applications are typical in electronic connector assembly and mechanical assembly applications. The objective is to use line profiles and edge detection to determine if a part is present or not. An edge along the line profile is defined by the level of contrast between background and foreground and the slope of the transition. Using this technique, you can count the number of edges along the line profile and compare the result to an expected number of edges. This method of limiting processing to lines, known as line profiling, offers a less numerically intensive alternative to other image processing methods such as image correlation and pattern matching.

How to Create an Alignment Application

In many inspection applications, the object or part being inspected can occupy different parts of the image and can be in different orientations.
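The edge-counting strategy for part present/not present can be sketched as follows. `part_present` is a hypothetical helper, and a simple adjacent-pixel contrast test stands in for the full set of edge-detection parameters:

```python
def part_present(profile, expected_edges, contrast=30):
    """Decide part present/not present from an edge count.

    Counts transitions where adjacent pixels along the line profile
    differ by at least `contrast` gray levels, then compares the count
    against the number of edges a correctly assembled part produces.
    Hypothetical helper illustrating the line-profile strategy above.
    """
    edges = sum(
        1 for a, b in zip(profile, profile[1:]) if abs(b - a) >= contrast
    )
    return edges == expected_edges

# Two bright wires crossing the line give four edges (two rising, two falling):
two_wires = [20] * 4 + [220] * 3 + [20] * 4 + [220] * 3 + [20] * 4
print(part_present(two_wires, expected_edges=4))  # True
print(part_present(two_wires, expected_edges=8))  # False: a wire is missing
```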

Figure 2. Line profiles across the horizontal and vertical sides of the part can be used to determine its orientation. Three points are needed to describe the orientation of a rectangular part and fix the origin of the local coordinate system; two points are ambiguous.

Consider, for example, a floppy disk inspection application. The objective of this application is to determine if the label that specifies the density of the disk (HD) is printed correctly. Because these disks usually come down a conveyor belt in production, the disk can be translated and rotated in each acquired image. To track the correct location of the HD symbol on the disk, a coordinate system defined with respect to the disk boundaries must be used. In IMAQ Vision, this is done with the Coordinate Reference.VI, which requires as input two points along the top boundary of the disk (the x-axis of the disk) and one point along the left boundary (the y-axis of the disk). Using these three points, the function computes a local coordinate system for the disk. Using a local coordinate system resolves the orientation issue: the floppy disk can be at a wide range of orientations for inspection. The three points are obtained by finding the location of the boundary (or edge) using the Edge Tool.VI at the top and left boundaries of the disk. The location of the HD label on the disk can then be determined as an offset within this coordinate system.

For each acquired image of the disk, three points are required to establish the disk's current coordinate system. The translation and rotation of the disk are then computed, and these values are used to calculate the position of the HD label in the most recent image. The current HD label is then matched to a template image using the Shape Matching.VI to determine its quality.
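The three-point coordinate-system construction can be sketched in Python. This is a geometric illustration of the Coordinate Reference idea, not the IMAQ Vision VI; the function name and conventions are assumptions:

```python
import math

def locate_label(p1, p2, p3, offset):
    """Find a label's image position via a part-local coordinate system.

    `p1` and `p2` lie on the part's top boundary (its x-axis) and `p3`
    on its left boundary (its y-axis); `offset` is the label position in
    the part's own frame, measured once at calibration time.  A
    geometric sketch of the Coordinate Reference idea, not NI's VI.
    """
    # Unit vector along the part's x-axis.
    ax, ay = p2[0] - p1[0], p2[1] - p1[1]
    n = math.hypot(ax, ay)
    ux, uy = ax / n, ay / n
    # Origin: projection of the left-boundary point onto the top boundary.
    t = (p3[0] - p1[0]) * ux + (p3[1] - p1[1]) * uy
    ox, oy = p1[0] + t * ux, p1[1] + t * uy
    # Unit vector along the part's y-axis, through p3.
    vx, vy = p3[0] - ox, p3[1] - oy
    m = math.hypot(vx, vy)
    vx, vy = vx / m, vy / m
    dx, dy = offset
    return (ox + dx * ux + dy * vx, oy + dx * uy + dy * vy)

# The same (30, 15) offset tracks the label through a 90-degree rotation:
print(locate_label((10, 20), (50, 20), (10, 60), (30, 15)))  # (40.0, 35.0)
print(locate_label((20, 10), (20, 50), (60, 10), (30, 15)))  # (35.0, 40.0)
```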
This process of establishing a local coordinate system for the object, in order to make measurements insensitive to the orientation of the object in the image, is called alignment.

Identification

In many applications, you simply want to identify or make a reading. For example, you might want to read a bar code, inspect a speedometer to make sure it is calibrated, or read the seven-segment display of a microwave oven during production to ensure the readings work correctly. IMAQ Vision has many built-in functions for reading LCDs, bar codes, and gauges.

Figure 3. Inspection of a Seven-Segment Display to Verify the Correct Reading

A straightforward way to inspect an LCD is first to calibrate the size of the digits when all segments are active. This calibration determines the location of the boundaries of each seven-segment group. Knowing the location of the boundaries, the inspection algorithm can quickly analyze subsequent images to determine which of the segments are on. Once you know which segments are on, you can map the inspected value to a corresponding display value. The final step is to compare the inspected value with the known value to determine whether the value is displayed correctly or the part is flawed.

Moving to the details of the algorithm, we first calibrate the size of the seven-segment number by inspecting the display when all segments are on (giving a display of 8). For this calibration, an operator captures an image of the display and draws a calibration box around all of the displays shown in the acquired image. The operator need not draw the region of interest (ROI) exactly, but it must at least surround all of the displays. After drawing the ROI, the operator activates an automatic calibration algorithm. The algorithm first finds all of the vertical bars of the displays by drawing horizontal lines at levels that are 1/3 and 2/3 of the height of the ROI. Each line slices through the activated segments and returns the position of each vertical segment, using an image processing function called edge detection. An edge along the line profile is defined by the level of contrast between background and foreground and the slope of the transition. Using this technique, you can determine if any of the seven segments is defective (active at the wrong time or inactive at the wrong time). This method of limiting processing to lines, known as line profiling, offers a less numerically intensive alternative to other image processing methods such as image correlation and pattern matching.
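The 1/3 and 2/3 line-slicing step of the calibration can be sketched as follows, using a simple rising-edge test in place of the full edge detector; the helper name is hypothetical:

```python
def vertical_segment_positions(roi, contrast=30):
    """Find the x-positions of a display's vertical segments.

    Slices the ROI (a 2-D list of gray values) with horizontal line
    profiles at 1/3 and 2/3 of its height and records where each
    profile rises from background to foreground by at least `contrast`
    gray levels.  Hypothetical sketch of the calibration step above,
    not NI's code.
    """
    h = len(roi)
    positions = set()
    for row in (roi[h // 3], roi[2 * h // 3]):
        for x, (a, b) in enumerate(zip(row, row[1:]), start=1):
            if b - a >= contrast:   # rising edge: a segment starts here
                positions.add(x)
    return sorted(positions)

# Two bright vertical bars (columns 2 and 6) on a dark background:
row = [20, 20, 200, 20, 20, 20, 200, 20, 20]
roi = [list(row) for _ in range(9)]
print(vertical_segment_positions(roi))  # [2, 6]
```

Locating the horizontal segments would then follow the same pattern with vertical line profiles drawn between these x-positions.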
Once the positions of the vertical segments are known, the algorithm locates the horizontal segments by inspecting the pixels along a vertical line profile drawn between the locations of the vertical segments. The calibration algorithm, which is designed for LCD and electroluminescent indicators, is insensitive to light drift because it uses contrast values along a line profile. In other words, as long as there are 30 gray-scale levels between the foreground and the background, an edge (or segment) will be detected. (We quantify light drift as the difference between the average pixel values at the top left and the bottom right of the background of the LCD screen.) After locating the vertical and horizontal segments, the function returns an array describing the area of interest that contains the digits. Overall, a local coordinate system defining the digits is returned. A local coordinate system based on this calibration image simplifies and improves the performance of inspection on subsequent images.

Upon completion of calibration, we assume that the size and boundaries of the segment groups will not shift with each new device under test (DUT), so we can run the inspection algorithm many times without recalibration. The inspection function draws a line profile through each of the segments and uses edge detection to determine the on or off state of each segment. The corresponding numeric value (0-9) is then assigned to the inspected display. In the final step, we compare this number with an expected value that is fed to the DUT through a serial line. A mismatch indicates a defective part.

Our algorithm is not perfect, however. Four factors can cause a bad detection:

1. horizontal or vertical light drift (greater than 90 gray levels in an 8-bit image);
2. insufficient contrast between the background and the segments;
3. noise; and
4. insufficient resolution.
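The segment-states-to-digit step of the inspection can be sketched with a lookup table. The segment ordering (top, top-right, bottom-right, bottom, bottom-left, top-left, middle) and the helper name are assumptions, not NI's code:

```python
# Segment order: top, top-right, bottom-right, bottom, bottom-left,
# top-left, middle (the usual a-g labeling of a seven-segment display).
DIGITS = {
    (1, 1, 1, 1, 1, 1, 0): 0, (0, 1, 1, 0, 0, 0, 0): 1,
    (1, 1, 0, 1, 1, 0, 1): 2, (1, 1, 1, 1, 0, 0, 1): 3,
    (0, 1, 1, 0, 0, 1, 1): 4, (1, 0, 1, 1, 0, 1, 1): 5,
    (1, 0, 1, 1, 1, 1, 1): 6, (1, 1, 1, 0, 0, 0, 0): 7,
    (1, 1, 1, 1, 1, 1, 1): 8, (1, 1, 1, 1, 0, 1, 1): 9,
}

def read_display(segment_states, expected):
    """Map measured on/off segment states to a digit and verify it.

    `segment_states` would come from edge detection on the per-segment
    line profiles; a pattern that matches no digit marks the DUT as
    defective.  Sketch of the inspection step, not NI's implementation.
    """
    digit = DIGITS.get(tuple(segment_states))
    return digit is not None and digit == expected

print(read_display([1, 1, 1, 1, 1, 1, 1], 8))  # True: all segments on reads as 8
print(read_display([1, 0, 0, 0, 0, 0, 1], 5))  # False: matches no valid digit
```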
To quantify several of these factors and determine representative minima and maxima that ensure accuracy for 8-bit (256 gray level) images, we make several definitions. For one, light drift greater than 90, as defined above, will cause problems. Contrast must exceed 30, defined as the difference between the average pixel values in rectangular regions in the background and foreground. Noise must not exceed 15, defined as the standard deviation of the pixel values contained in a rectangular region of the background. Finally, in terms of resolution, the digit must be larger than 12 to 18 pixels to obtain accurate results.

Despite such imperfections, the algorithm is useful for other inspection tasks. For instance, with some modifications, it can test analog gauges and speedometers. The strategy for inspecting gauges is straightforward: you calibrate the full range of the gauge by drawing a line profile along the needle at the minimum reading and at the maximum reading. In so doing, you determine the center point about which the needle swings and the perimeter of the area of swing. After the calibration, the needle position can be detected with line profiles. If the needle is black on a white background, the line profile with the lowest value is the location of the needle.

Overall, edge detection and line profiling are very common and simple to understand. These imaging techniques can be used in a wide range of inspection applications, from mechanical assembly verification, electronic packaging inspection, and quality of markings to electronic connector inspection.

Example: Using Gauging to Inspect an Aerosol Can

Objective: To understand the concept of gauging in inspection.

The objective of this gauging application is to determine if the spray can tip has been correctly assembled. The spray tip must be in the vertical (90 degree) position, within a tolerance of ±5 degrees. If the spray tip, as installed, is out of tolerance, the part is rejected. Because the aerosol cans come down a conveyor belt, the position of the can in the image is not fixed; the cans may shift in both the horizontal and vertical directions. To correctly determine the position of the nozzle, a local coordinate system must be associated with the can. We do this by finding three points, two along the x-axis and one along the y-axis. Using these three points, you can calculate the shift and rotation of the can and move the ROI in the image accordingly.

Description: This aerosol can inspection system calculates the spray tip angle to determine if it has been properly placed.
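The tip-angle check can be sketched as follows, assuming two points on the nozzle axis have already been located (for example, by edge detection in the can's local coordinate system); the function name and the y-up angle convention are assumptions:

```python
import math

def tip_angle_ok(base, tip, nominal=90.0, tol=5.0):
    """Check that the spray tip axis is within +/- tol degrees of nominal.

    `base` and `tip` are two points located on the nozzle axis.  Angles
    use a conventional y-up frame; with image coordinates (y pointing
    down), negate dy first.  Hypothetical sketch of the angle gauge.
    """
    dx, dy = tip[0] - base[0], tip[1] - base[1]
    angle = math.degrees(math.atan2(dy, dx))
    return abs(angle - nominal) <= tol

print(tip_angle_ok((0, 0), (0.5, 10)))  # True: about 87 degrees, within tolerance
print(tip_angle_ok((0, 0), (5, 10)))    # False: about 63 degrees, rejected
```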

Example: Inspecting an Assembly

Objective: To inspect for missing parts and determine if the item has been properly assembled.

Figure 4. A LabVIEW diagram for acquiring an image and using a line profile and the IMAQ Edge Tool.VI to count the number of edges under the line.

Figure 5. A Connector with Only Three Wires Assembled

The number of edges under the line profile is used to determine if this connector has been properly assembled. Detection of eight edges means that there are four wires. Any other edge count means that the part has not been assembled correctly.

IMAQ Vision Functions

(IMAQ Vision»Inspection Tools»Caliper Tools»IMAQ ROIProfile.vi) Calculates the profile of the pixels along the boundary of an ROI descriptor. This VI returns a data type (cluster) that is compatible with a LabVIEW or BridgeVIEW graph. It also returns other information, such as pixel statistics and the true coordinates of the ROI boundary.

(IMAQ Vision»Inspection Tools»Caliper Tools»IMAQ Edge Tool.vi) Finds edges along a path defined in the image. Edges are determined based on their contrast, width, and steepness.

(Get Line.vi) A subVI that is not shipped with IMAQ Vision. Its function is to wait for a line to be drawn on the image and then pass out the ROI descriptor.

Summary

Driven by converging technologies, advanced PC-based vision and image processing for test, measurement, and industrial automation are a reality. For machine vision developers who need to quickly develop gauging applications, National Instruments LabVIEW and IMAQ Vision software contain high-level gauging and caliper tools that speed up application development. These functions, which deliver a high level of accuracy, are reliable tools for missing-part inspection, guidance, and gauging applications. Thanks to the graphical language of LabVIEW and IMAQ Vision, you can develop sophisticated machine vision applications. Moreover, because of the diverse tools within LabVIEW, other types of I/O, such as motion control, instrument control, and data acquisition, are easily integrated into your application.