Image Processing Fundamentals. Nicolas Vazquez Principal Software Engineer National Instruments


Agenda
Objectives and Motivations
Enhancing Images
Checking for Presence
Locating Parts
Measuring Features
Identifying and Verifying Components

Class Goals
Teach the fundamental image processing tools available in machine vision software
Provide some background into how the algorithms work
Accelerate your machine vision learning curve
What not to expect:
Learn how to develop a complete vision system
Learn specific, proprietary functions
3D vision, color, or other advanced topics
Discussion of the benefits of different application development environments

Image Processing for Machine Vision
Objective: extract the useful information present in an image, in a limited time
Secondary: display an image for users
Not: improve the appearance of the image in general
Used for:
Image pre-processing: minimize variations of information in the image; prepare the image for processing and measurement
Application-specific processing: use the image to count, locate, and measure attributes

Image Types
Grayscale
8-bit: pixel values range from 0 to 255
16-bit: pixel values range from 0 to 65535
Color
Composed of 3 grayscale images (RGB)
Other types
Binary: pixel values are 0 and 1; commonly used to identify objects of interest in an image; usually the result of an image processing step
Floating point: usually the result of a computation
Complex

What is an Ideal Image?
Range of grayscale values spread out between 0 and 255
No pixels saturated at 255 (for most applications): saturation means loss of information, because saturated pixels cannot be distinguished from one another
Good contrast between the parts of the image of interest
Repeatable
In short, an ideal image requires the fewest image processing steps to obtain the result.

Class Organization
Enhance: Filter noise or unwanted features, Remove distortion, Calibrate images
Check: Measure intensity, Create particles, Analyze particles
Locate: Match patterns, Match geometry, Set up coordinate systems
Measure: Detect edges, Measure distance, Calculate geometry
Identify: Read text (OCR), Read 1D barcodes, Read 2D codes

Motivation
Read characters on a textured surface

Motivation
Not possible to cleanly segment characters

Motivation
Results without preprocessing

Motivation
Read characters on a textured surface

Motivation
Image after the periodic pattern is removed

Motivation
Results without preprocessing vs. results with preprocessing

Objective of Image Preprocessing
Process an image so that the resulting image is more suitable than the original for a specific application
A preprocessing method that works well for one application may not be the best method for another application

Image Preprocessing
Pre-processing occurs before the application-specific processing
Pipeline: Acquisition -> Preprocessing -> Application-Specific Processing
Preprocessing examples: shading correction, de-blurring, de-noising, contrast enhancement, feature enhancement
Application-specific processing examples: intensity measurements, OCR, pattern matching, gauging, barcode reading, particle analysis

Enhancement Techniques
Spatial domain (pixel-wise operations):
Brightness, contrast, and gamma
Lookup tables
Gray morphology
Filtering (smoothing, median, general convolution)
Frequency domain:
Deblurring
Filtering

Brightness
Adds a constant grayscale value to every pixel in the image
Can improve appearance but is not useful for most image processing steps

Contrast
Used to increase or decrease contrast
Transfer function slope: normal contrast corresponds to the 45-degree line; high contrast uses a slope steeper than 45 degrees, low contrast a slope shallower than 45 degrees
Typical use is to improve edge detection
Sacrifices one range of values to improve another

Gamma
Nonlinear contrast enhancement
Higher gamma improves contrast for larger grayscale values
Does not cause saturation
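The brightness, contrast, and gamma operations above can be sketched as pixel-wise NumPy operations. This is an illustration only, not NI Vision code; the parameter conventions (contrast expressed as an angle relative to the 45-degree line, gamma applied as a power) are assumptions for the example.

import numpy as np

def adjust_bcg(image, brightness=0.0, contrast_deg=45.0, gamma=1.0):
    """Brightness/contrast/gamma sketch for an 8-bit grayscale image."""
    img = image.astype(np.float64) / 255.0           # work in [0, 1]
    img = np.power(img, gamma)                        # gamma > 1 expands contrast among brighter values (one common convention)
    slope = np.tan(np.radians(contrast_deg))          # 45 degrees -> slope 1 (contrast unchanged)
    img = (img - 0.5) * slope + 0.5                   # contrast change around mid-gray
    img = img + brightness / 255.0                    # constant brightness offset
    return np.clip(img * 255.0, 0, 255).astype(np.uint8)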

Lookup Tables
A lookup table is a function that maps the grayscale values in an image to new grayscale values, creating a new result image
Examples: reverse, square, power (e.g., x^1.5)

Histogram
Indicates the number of pixels at each gray level
Provides a good description of the composition of an image
Helps to identify various populations
Plotted as number of pixels versus pixel value; a spike at the top of the range indicates saturation

Histogram Equalization
Alters the grayscale values of pixels so that they become evenly distributed across the full grayscale range
The function associates an equal number of pixels per constant grayscale interval
Takes full advantage of the available shades of gray
Enhances the contrast of the image without modifying its structure

Histogram Equalization
Cumulative histograms illustrate the effect: bright, dark, low-contrast, and high-contrast images each produce a characteristically different cumulative histogram
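A minimal NumPy sketch of histogram equalization using the cumulative histogram, assuming an 8-bit grayscale image; this is not the NI Vision implementation.

import numpy as np

def equalize_histogram(image):
    """Histogram equalization for an 8-bit grayscale image (sketch)."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    cdf = hist.cumsum()                                # cumulative histogram
    cdf_min = cdf[cdf > 0].min()                       # ignore empty bins at the low end
    # Map each gray level through the normalized cumulative histogram
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[image]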

Spatial Filtering
Filter Type          Filters
Linear highpass      Gradient, Laplacian
Linear lowpass       Smoothing, Gaussian
Nonlinear highpass   Gradient, Roberts, Sobel, Prewitt, Differentiation, Sigma
Nonlinear lowpass    Median, Nth Order
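For illustration, rough SciPy equivalents of a few of the filter types in the table, assuming `image` is a 2-D grayscale NumPy array (these are not the NI Vision filter functions):

import numpy as np
from scipy import ndimage

smoothed = ndimage.gaussian_filter(image, sigma=2)        # linear lowpass (Gaussian smoothing)
denoised = ndimage.median_filter(image, size=5)           # nonlinear lowpass (median)
grad_x = ndimage.sobel(image.astype(float), axis=1)       # Sobel derivative along x
grad_y = ndimage.sobel(image.astype(float), axis=0)       # Sobel derivative along y
edges = np.hypot(grad_x, grad_y)                          # gradient magnitude (highpass response)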

Gray Morphology
Morphology   Function
Erosion      Min(Neighbors)
Dilation     Max(Neighbors)
Open         Dilation(Erosion(I))
Close        Erosion(Dilation(I))
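The min/max definitions in the table map directly onto SciPy's grayscale morphology, shown here as a sketch over a 3x3 neighborhood (`image` is an assumed 2-D array):

from scipy import ndimage

eroded  = ndimage.grey_erosion(image, size=(3, 3))    # min of the neighborhood
dilated = ndimage.grey_dilation(image, size=(3, 3))   # max of the neighborhood
opened  = ndimage.grey_opening(image, size=(3, 3))    # erosion followed by dilation
closed  = ndimage.grey_closing(image, size=(3, 3))    # dilation followed by erosion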

Frequency Domain Filtering
Standard filtering can be done in the frequency domain: lowpass, highpass, bandpass, bandstop, etc.
Compute the Fourier transform of the image
Multiply it by the transfer function of the filter
Take the inverse Fourier transform to get the filtered image
Pipeline: I(x,y) -> FFT -> F(u,v) -> multiply by H(u,v) -> F(u,v)H(u,v) -> IFFT -> R(x,y)
Example: periodic noise removed with a bandstop filter
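A NumPy sketch of the FFT, multiply, IFFT pipeline, using an ideal circular lowpass transfer function; the `cutoff` radius and the ideal-filter shape are assumptions for the example.

import numpy as np

def lowpass_fft(image, cutoff=30):
    """Ideal circular lowpass filter applied in the frequency domain (sketch)."""
    f = np.fft.fftshift(np.fft.fft2(image))               # F(u,v), centered
    rows, cols = image.shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    radius = np.hypot(u[:, None], v[None, :])
    h = (radius <= cutoff).astype(float)                   # transfer function H(u,v)
    filtered = np.fft.ifft2(np.fft.ifftshift(f * h))       # R(x,y) = IFFT(F(u,v)H(u,v))
    return np.real(filtered)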

Low Pass Filter Examples
Low pass with a frequency-domain filter
Low pass with a Gaussian filter

High Pass Filtering Examples
Detect edges
Sharpen image

ENHANCE IMAGES: IMAGE CALIBRATION

Types of Calibration
2D spatial calibration
Applied only to a plane
Corrects for lens and perspective distortion
Does not improve the resolution of a measurement
Cannot compensate for poor lighting or unstable conditions
3D spatial calibration: x, y, z
Intensity calibration
Color calibration

Spatial Calibration
Corrects for lens and perspective distortion
Allows the user to take real-world measurements from the image based on pixel locations
Accounts for lens distortion, perspective (known orientation), and offset

Calibrating Your Image Setup
Acquire an image of a calibration grid with known real-world distances between the dots
Learn the calibration (mapping information) from its perspective and distortion
Apply this mapping information to subsequent images and measurements
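As an illustration of the learn-then-apply idea (not the NI Vision calibration API), the sketch below uses OpenCV to find a dot grid, learn a camera and distortion model from a single view, and undistort a later image. The grid size, dot pitch, and the variable names `grid_img` and `new_img` are assumptions; a real calibration would typically use more views or NI's grid-based tools.

import numpy as np
import cv2

rows, cols, pitch = 7, 9, 5.0                       # assumed dot grid: 9 x 7 dots, 5 mm apart
found, centers = cv2.findCirclesGrid(grid_img, (cols, rows))

if found:
    # Real-world coordinates of the dots on the calibration plane (z = 0)
    world = np.zeros((rows * cols, 3), np.float32)
    world[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2) * pitch

    # Learn the mapping (camera matrix + lens distortion) from the grid image
    _, mtx, dist, _, _ = cv2.calibrateCamera(
        [world], [centers], grid_img.shape[::-1], None, None)

    # Apply the mapping to correct a subsequently acquired image
    corrected = cv2.undistort(new_img, mtx, dist)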

Image Spatial Calibration Example
Acquired image, calibration grid, and calibrated measurements

Image Correction
Use the calibration to adjust the image geometry so that features are represented properly
Acquired image and calibration grid -> corrected images

Class Organization
Enhance: Filter noise or unwanted features, Remove distortion, Calibrate images
Check: Measure intensity, Create particles, Analyze particles
Locate: Match patterns, Match geometry, Set up coordinate systems
Measure: Detect edges, Measure distance, Calculate geometry
Identify: Read text (OCR), Read 1D barcodes, Read 2D codes

Region of Interest (ROI)
A portion of the image upon which an image processing step may be performed
Can be defined statically (fixed) or dynamically (based on features located in the image)
Used to process only the pixels of interest
Increases reliability
Reduces processing time

Measure Intensity
Intensity statistics of a region of interest (search area) can be used as a very simple check for presence/absence

Thresholding
Converts each pixel value in an image to 0 or 1 according to the value of the original pixel
Helps extract significant structures in the image

Histogram and Thresholding
Original image, its histogram with the threshold range marked, and the resulting binary image

Finding Gray Objects
You can also set upper and lower limits for pixel values
Pixels inside the bounds of the limits (the red region of the histogram) are set to 1, and those outside the limits are set to 0
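A small NumPy sketch of thresholding with lower and upper limits; the limit values below are arbitrary examples.

import numpy as np

def threshold_range(image, lower, upper):
    """Binary image: 1 where lower <= pixel <= upper, 0 elsewhere (sketch)."""
    return ((image >= lower) & (image <= upper)).astype(np.uint8)

binary = threshold_range(image, 100, 200)   # example limits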

Automatic Thresholding
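The slide does not name a specific algorithm; Otsu's method, which picks the threshold that best separates the two classes of the histogram, is one common automatic-thresholding technique. A sketch using OpenCV, assuming an 8-bit grayscale image:

import cv2

# Otsu's method: one common automatic-threshold technique (not necessarily the
# method used by any particular machine vision package)
otsu_value, binary = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)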

Global vs. Local Threshold
A global threshold applies one value to the entire image; a local threshold adapts the value to each neighborhood, which handles non-uniform lighting better
Example images: global threshold vs. local threshold

Particles and Connectivity
Thresholding creates a binary image: pixels are either 0 or 1
A particle in a binary image is a group of connected pixels with value 1
Connectivity defines which neighboring pixels are considered connected

How Connectivity Affects a Particle
Connectivity-4: two pixels are considered part of the same particle if they are horizontally or vertically adjacent
Connectivity-8: two pixels are considered part of the same particle if they are horizontally, vertically, or diagonally adjacent
In the slide's example grid, the same image contains three particles under connectivity-4 but only two under connectivity-8
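A SciPy sketch showing how the connectivity rule changes the particle count on the same binary image; the threshold used to build `binary` is an assumption.

import numpy as np
from scipy import ndimage

binary = (image > 128).astype(np.uint8)               # assumed binary image from a threshold

# Connectivity-4: only horizontal/vertical neighbors connect pixels
four_conn = ndimage.generate_binary_structure(2, 1)
labels4, n4 = ndimage.label(binary, structure=four_conn)

# Connectivity-8: diagonal neighbors connect as well, so particles can merge
eight_conn = ndimage.generate_binary_structure(2, 2)
labels8, n8 = ndimage.label(binary, structure=eight_conn)

print(n4, n8)   # number of particles under each connectivity rule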

Particle Measurements
Measures particle features, including location, orientation, area, perimeter, holes, shape equivalences, and moments

Binary Morphology
Binary morphological functions extract and alter the structure of particles in a binary image
Morphological functions remove unwanted information caused by the thresholding process:
Noise particles
Holes within particles
Particles touching the border of the image
Particles touching each other
Particles with uneven borders

Erosion
Decreases the size of objects in an image
Removes a layer of pixels along the boundary of each particle
Eliminates small isolated particles in the background and removes narrow peninsulas
Use Erode to separate particles for counting

Dilation
Increases the size of objects in an image
Adds a layer of pixels around the boundary of each object (including the inside boundary for objects with holes)
Eliminates tiny holes in objects
Removes gaps or bays of insufficient width
Use Dilate to connect particles

Erosion vs. Dilation
Example: the same image after erosion and after dilation

Open
An erosion followed by a dilation
Removes small particles and smooths boundaries
Does not significantly alter the size or shape of particles
Borders removed by the erosion step are restored by the dilation step
Use Open to eliminate small particles that constitute noise

Close
A dilation followed by an erosion
Fills holes and creates smooth boundaries
Does not significantly alter the size or shape of particles
Particles that do not connect after the dilation are not changed
Use Close to eliminate small holes that constitute noise
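The four operations above correspond to SciPy's binary morphology functions; a sketch, assuming `binary` is a 0/1 NumPy array produced by a threshold.

from scipy import ndimage

eroded  = ndimage.binary_erosion(binary)    # peel one layer off each particle
dilated = ndimage.binary_dilation(binary)   # add one layer around each particle
opened  = ndimage.binary_opening(binary)    # erosion then dilation: removes small noise particles
closed  = ndimage.binary_closing(binary)    # dilation then erosion: fills small holes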

Advanced Morphology
Advanced morphological functions are combinations of the basic operations, each targeted at a single task
These functions execute the following tasks:
Remove small or large particles
Remove particles that touch the image border
Fill holes
Separate particles
Keep or remove particles identified by morphological parameters
Segment the image

Particle Filtering
Keeps or removes particles based on geometric features
Area, width, height, aspect ratio, and other features are commonly used to filter
Typically used on binary images
Cleans up noisy images
Example: threshold, then a particle filter keeping particles with Heywood circularity factor < 1.1
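A sketch of particle filtering by area using connected-component labeling; the area limits are arbitrary examples, and real tools filter on many more features (width, height, circularity, and so on).

import numpy as np
from scipy import ndimage

def keep_particles_by_area(binary, min_area, max_area):
    """Keep only particles whose pixel count falls inside [min_area, max_area]."""
    labels, count = ndimage.label(binary)
    areas = ndimage.sum(binary, labels, index=np.arange(1, count + 1))   # pixels per particle
    keep = np.flatnonzero((areas >= min_area) & (areas <= max_area)) + 1 # label ids to keep
    return np.isin(labels, keep).astype(np.uint8)

filtered = keep_particles_by_area(binary, min_area=50, max_area=5000)    # example limits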

Application: Particle Counting
Processing sequence: original image, threshold, particle filter

Class Organization
Enhance: Filter noise or unwanted features, Remove distortion, Calibrate images
Check: Measure intensity, Create particles, Analyze particles
Locate: Match patterns, Match geometry, Set up coordinate systems
Measure: Detect edges, Measure distance, Calculate geometry
Identify: Read text (OCR), Read 1D barcodes, Read 2D codes

LOCATING PARTS: PATTERN MATCHING

Introduction to Matching
Locates regions of a grayscale image that match a predefined template
Calculates a score for each matching region; the score indicates the quality of the match
Returns the XY coordinates, rotation angle, and scale for each match

Applications
Presence detection
Counting
Locating
Inspection

How It Works
A two-step process:
Step 1: Learn the template
Extract information useful for uniquely characterizing the pattern
Organize the information to facilitate faster searching for the pattern in an image
Step 2: Match
Use the information in the template to locate regions in the target image
The emphasis is on search methods that quickly and accurately locate matched regions

Pattern Matching Methods
There are different ways to perform pattern matching depending on the information extracted from the template
Two common methods:
Correlation pattern matching: relies on the grayscale information in the image
Geometric pattern matching: relies on edges and geometric features in the image

Correlation Pattern Matching
Assumes the grayscale information of the pattern is present in the image
Directly uses the underlying grayscale information in the image for matching
Grayscale values in the template (pattern) are matched to regions in the image using normalized cross-correlation
The score ranges from 0 to 1000, which allows imperfect matches to be accepted
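Normalized cross-correlation is available in OpenCV, shown here purely as an illustration (this is not the NI Vision pattern matching API); `image` and `template` are assumed 8-bit grayscale arrays, and the 0-1000 score scaling is only mimicked for the example.

import cv2

scores = cv2.matchTemplate(image, template, cv2.TM_CCORR_NORMED)   # normalized cross-correlation map
_, best_score, _, best_xy = cv2.minMaxLoc(scores)                  # best_xy is the match's top-left corner

print(int(best_score * 1000), best_xy)                             # NI-style 0-1000 score (assumed scaling)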

Correlation Pattern Matching
When to use:
The template is primarily characterized by grayscale information
Matching under uniform lighting changes
Little occlusion and few scale changes in the image
Good for the general case
Examples: a good template and a bad template

Correlation Pattern Matching
When NOT to use correlation-based pattern matching:
Non-uniform lighting
Occlusion of more than about 10%
Scale changes

Geometric Pattern Matching
A matching tool you can use to locate parts that contain distinct edge information
Not useful when the template is predominantly defined by texture

GPM Is Tolerant to:
Occlusion
Scale changes
Non-uniform lighting
Background changes

GPM Is Feature-Based
Pipeline: template image and target image -> extract curves -> extract features (e.g., circles, parallel lines) -> match features

Feature Comparison
Feature                                      CPM    GPM
Template contains texture-like information   Yes
Template contains geometric information      Yes    Yes
Find multiple match locations                Yes    Yes
Rotation                                     Yes    Yes
Scale                                               Yes
Occlusion                                           Yes
Matching under non-uniform lighting                 Yes
Sub-pixel match locations                    Yes    Yes

LOCATING PARTS: COORDINATE SYSTEMS

Region of Interest (ROI)
Also known as the search area
A portion of the image upon which an image processing step may be performed
The object under inspection must always appear inside the defined ROI in order to extract measurements from that ROI
ROIs need to be repositioned when the location of the part varies

Coordinate Systems
Defined by a reference point (origin) and an angle within the image, or by the lines that make up its axes
Allow you to define search areas that move around the image with the object you are inspecting
Usually based on a characteristic feature of the object under inspection
Use pattern matching, edge detection, or geometry tools to locate features
Use the features to establish the coordinate system

Coordinate System Setup
1) Define an origin
Locate an easy-to-find feature in your reference image; the feature must be stable from image to image
Set a coordinate system based on its location and orientation
2) Set up measurement ROIs relative to the new origin
Acquire a new image
Locate the reference point
Reposition the measurement ROIs

Class Organization
Enhance: Filter noise or unwanted features, Remove distortion, Calibrate images
Check: Measure intensity, Create particles, Analyze particles
Locate: Match patterns, Match geometry, Set up coordinate systems
Measure: Detect edges, Measure distance, Calculate geometry
Identify: Read text (OCR), Read 1D barcodes, Read 2D codes

Edge Detection Overview
The process of detecting transitions in an image
One of the most commonly used machine vision tools
Attractive because it is:
Simple to understand and use
Localized processing, and therefore fast
Applicable to many applications
Tolerant to illumination changes
Examples of different edge profiles

1D Edge Detection
Detects edge points along a line. Basic operation:
1) Get the pixel values along the line
2) Compute gradient information
3) Find peaks and valleys (edge locations) based on contrast, width, and steepness
4) Select edge(s) based on:
Order: first, last, first and last
Polarity: rising, falling
Score: best edge
Plots: pixel value and gradient value versus position along the line (pixels)
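A sketch of steps 1 through 3 on a sampled line profile; `min_contrast` is an assumed gradient threshold, and the selection of edges by order, polarity, or score is left out.

import numpy as np

def find_edges_along_line(profile, min_contrast=20):
    """1-D edge detection sketch: gradient of a pixel profile, then peak picking.

    `profile` is a 1-D array of pixel values sampled along the search line.
    """
    gradient = np.gradient(profile.astype(float))              # step 2: gradient information
    strong = np.abs(gradient) >= min_contrast                   # candidate edge points
    # step 3: keep local extrema of the gradient magnitude (peaks and valleys)
    peaks = [i for i in range(1, len(gradient) - 1)
             if strong[i]
             and abs(gradient[i]) >= abs(gradient[i - 1])
             and abs(gradient[i]) >= abs(gradient[i + 1])]
    return gradient, peaks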

Subpixel Accuracy
The subpixel location of an edge can be computed using parabolic interpolation
A parabola is fit through the gradient peak and its two neighbors; the vertex of the parabola gives the subpixel edge location
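A small sketch of the three-point parabolic refinement around an integer gradient peak; the function name and variable names are just for the example.

def subpixel_peak(gradient, i):
    """Refine integer peak index `i` with a parabola through points i-1, i, i+1."""
    g0, g1, g2 = gradient[i - 1], gradient[i], gradient[i + 1]
    denom = g0 - 2.0 * g1 + g2
    if denom == 0.0:
        return float(i)                       # flat neighborhood: keep the integer location
    return i + 0.5 * (g0 - g2) / denom        # vertex of the fitted parabola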

Edge Detector Tools
Several high-level edge tools are built on the single edge detector
Rake: used to find multiple edges and fit a shape through them
Configurable search directions, filtering options, and sub-sampling ratios

Straight Edge (Line) Detection
Detects straight lines in an image
An extension of 1D edge detection
Straight edge detection options:
Rake-based
Projection-based
Hough transform-based
Examples: rake-based and projection-based straight edge finding; locating multiple straight edges

Edge Detection Applications
Feature detection
Alignment
Gauging
Inspection

Application: Inspecting Parts
Locate the part using a straight edge find
Check for remnant plastic using an intensity measurement
Check the liquid level using a straight edge find
Check the tips using pattern matching

Application: Dimension Verification
Dimensional measurements, such as lengths, distances, and diameters
Inline gauging inspections are used to verify assembly and packaging routines
Offline gauging is used to judge product quality against a sample

Class Organization
Enhance: Filter noise or unwanted features, Remove distortion, Calibrate images
Check: Measure intensity, Create particles, Analyze particles
Locate: Match patterns, Match geometry, Set up coordinate systems
Measure: Detect edges, Measure distance, Calculate geometry
Identify: Read text (OCR), Read 1D barcodes, Read 2D codes

Identify
1D and 2D codes
Marking methods
Reading examples

1D Codes
Applications using 1D barcodes have been around for over 35 years
The barcode data is an index into a large central data store
The code is easily read by laser scanners
Low data capacity in a large footprint
Examples: Code 3 of 9, Code 128, EAN 13

2D Codes
Usually not an index into a database; the data is self-contained
Camera-based vision systems are the preferred reading method
High data capacity in a small footprint
Examples: PDF417, Data Matrix, QR Code

1D vs. 2D
1D codes: low data capacity; index into a large database; large footprint; redundancy in the Y dimension; readable by laser scanners; requires as much as 80% contrast
2D codes: high data capacity; self-contained data; small footprint; error correction capability; requires a camera-based reader; can be read in low contrast

Optical Character Recognition (OCR)

OCR/OCV
Optical Character Recognition/Verification: reads or verifies (printed) characters
Typical steps:
Define a region of interest around the lines of text
Threshold
Character segmentation
Compare to a library (classification); the character is learned or recognized
Optical Character Verification (OCV): compares the recognized character against a golden reference
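A rough sketch of the threshold and character-segmentation steps using connected components; the polarity assumption (dark characters on a light background) and the fixed threshold are examples, and the comparison against a character library is not shown.

import numpy as np
from scipy import ndimage

def segment_characters(roi, threshold=128):
    """Threshold a text ROI, then split it into per-character particles, left to right."""
    binary = roi < threshold                          # assumed: dark characters on a light background
    labels, count = ndimage.label(binary)             # each connected group is a character candidate
    boxes = ndimage.find_objects(labels)              # one bounding slice per candidate
    boxes = sorted(boxes, key=lambda s: s[1].start)   # order by horizontal position
    return [binary[b] for b in boxes]                 # list of per-character binary patches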

Class Organization
Enhance: Filter noise or unwanted features, Remove distortion, Calibrate images
Check: Measure intensity, Create particles, Analyze particles
Locate: Match patterns, Match geometry, Set up coordinate systems
Measure: Detect edges, Measure distance, Calculate geometry
Identify: Read text (OCR), Read 1D barcodes, Read 2D codes

Contact Information
Nicolas Vazquez
Principal Software Engineer
National Instruments
11500 N. Mopac Expwy, Austin, Texas 78759 USA
Phone: +1 512-683-8494
Email: nicolas.vazquez@ni.com
www.ni.com