Lecture 6: Segmentation by Point Processing

Lecture 6: Segmentation by Point Processing
Harvey Rhody, Chester F. Carlson Center for Imaging Science, Rochester Institute of Technology
rhody@cis.rit.edu
September 27, 2005

Abstract: Applications of point processing to global and regional image segmentation are constructed and demonstrated. An adaptive threshold algorithm is presented. Illumination compensation is shown to improve global segmentation. Finally, morphological watershed region segmentation is demonstrated.

Image Segmentation Image segmentation can be used to separate pixels associated with objects of interest from the image background. This is an important step in many automated image analysis and robotics applications. The opposing requirements of accuracy and speed are always present. Segmentation on a simple pixel-by-pixel basis using threshold decisions is very fast, but it is restricted to applications where there is sufficient contrast between objects and background. We will examine techniques for threshold segmentation using global and adaptive methods.

Global Segmentation The object in this figure is light on a dark background. A histogram shows that the object and background are well-separated. A binary image that is produced by thresholding at the level indicated by the dashed line is shown at the right.

Global Segmentation Example 1 The effect of thresholding the fingerprint image at the threshold indicated is shown as the binary image to the right.

Global Segmentation Example 2 A greyscale image and its histogram are shown below. Selection of a threshold for segmentation is facilitated by examining a surface plot or a contour plot. (Figure panels: image, histogram, inverted surface plot, contour plot.)

Global Segmentation Example 2 The set of pixels S = {p : A[p] ≥ T} with T = 80 is selected in the middle figure. The objects can be counted by applying LABEL_REGION, as shown on the right. (Figure panels: original, segmented, labeled.)
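A minimal IDL sketch of this step, assuming A holds the greyscale image and using T = 80 as above:

;Threshold the image and count the resulting objects
T = 80
S = A GE T                      ;binary mask: 1 where A[p] >= T
lab = LABEL_REGION(S)           ;give each connected object a distinct label
n = MAX(lab)                    ;labels run from 1 to n; 0 is the background
PRINT, 'Number of objects: ', n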

Global Segmentation Example 3 The upper figure is an electrophoresis image in which the dark areas are of interest. Global thresholding with T = 70 produces the bottom image. Note that the objects in the lower right corner are missed. Other thresholds have similar difficulty, either being too high for the upper left corner or too low for the lower right corner.

Global Segmentation Example 3 The image was divided into four regions as shown in the upper image. A different threshold was used for each region, as indicated on the figure. The result is shown in the lower image. There was an improvement in the results because the threshold was adapted to the brightness of each region.
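A minimal IDL sketch of this kind of region-wise thresholding, assuming the image A is split into quadrants; the four threshold values below are placeholders rather than the values used in the figure:

;Region-adaptive thresholding: a different threshold in each quadrant.
;Dark objects are of interest, so the test is LT.
sa = SIZE(A, /DIM)
hx = sa[0]/2  &  hy = sa[1]/2
seg = BYTARR(sa[0], sa[1])
seg[0:hx-1, 0:hy-1] = A[0:hx-1, 0:hy-1] LT 70     ;quadrant 1 (placeholder threshold)
seg[hx:*,   0:hy-1] = A[hx:*,   0:hy-1] LT 90     ;quadrant 2
seg[0:hx-1, hy:*]   = A[0:hx-1, hy:*]   LT 100    ;quadrant 3
seg[hx:*,   hy:*]   = A[hx:*,   hy:*]   LT 120    ;quadrant 4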

Global Segmentation Example 3 It appears that the average brightness is a function of position. If we assume that the image is of the form A[x, y] = F[x, y] + G[x, y], where F[x, y] is the image of interest and G[x, y] is a varying background, then perhaps we can remove the baseline variation. Let G[x, y] be approximated by a surface of the form G[x, y] = a_0 + a_1 x + a_2 y + a_3 xy. The coefficients can be estimated so that G is a least-mean-square approximation to A. For this image we find a_0 = 142, a_1 = 0.06, a_2 = -0.16, a_3 = 1e-5.
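As a sketch of how the fitted surface might be used (the coefficient values are the ones estimated above, and A is assumed to hold the electrophoresis image), the background can be evaluated on a coordinate grid and subtracted:

;Evaluate the fitted background G and form the compensated image F = A - G
sa = SIZE(A, /DIM)
x = FINDGEN(sa[0]) #  REPLICATE(1, sa[1])     ;x coordinate of each pixel
y = FINDGEN(sa[1]) ## REPLICATE(1, sa[0])     ;y coordinate of each pixel
G = 142.0 + 0.06*x - 0.16*y + 1e-5*x*y
F = BYTSCL(FLOAT(A) - G)                      ;compensated image, rescaled for display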

Global Segmentation Example 3 (Figure panels: original image A, compensated image F, region thresholding of A, global thresholding of F.) Region thresholding and global thresholding after compensation achieve similar results.

Global Segmentation with Multiple Thresholds An image that is segmented by using two thresholds as indicated by the dashed lines is shown at the right. The brighter segment corresponds to the region between the thresholds.
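A minimal sketch of this two-threshold (band) selection in IDL; the values T1 and T2 are placeholders for the levels marked by the dashed lines:

;Select the band of grey values between two thresholds
T1 = 80
T2 = 160
band = (A GE T1) AND (A LT T2)     ;1 where T1 <= A[p] < T2, 0 elsewhere
TVSCL, band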

Automated Threshold Setting An algorithm can be made more flexible and more easily used if the threshold can be selected automatically. When global thresholding is appropriate, this can be done by a simple iterative algorithm.
1. Select an estimate for T. This could be the mean grey value of the image.
2. Segment the image using T to produce two sets of pixels: G_1 containing pixels with grey values > T and G_2 containing pixels with grey values ≤ T.
3. Compute the average grey values µ_1 and µ_2 of the sets G_1 and G_2.
4. Compute a new threshold value T = (µ_1 + µ_2)/2.
5. Repeat steps 2 through 4 until the change in T is small.
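A minimal IDL sketch of this procedure; it assumes the image is not constant (so both pixel sets are non-empty on every pass), and the stopping tolerance is an arbitrary choice:

function iterative_threshold, A
  T = MEAN(A)                          ;step 1: initial estimate (mean grey value)
  repeat begin
    Told = T
    g1 = A[WHERE(A GT T)]              ;step 2: pixels with grey values > T
    g2 = A[WHERE(A LE T)]              ;        pixels with grey values <= T
    T  = (MEAN(g1) + MEAN(g2)) / 2.0   ;steps 3-4: new threshold
  endrep until ABS(T - Told) LT 0.5    ;step 5: stop when the change in T is small
  return, T
end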

Difficulty with Global Segmentation A global threshold may fail if the conditions change across an image. Shown below is an image with nonuniform illumination that increases linearly from left to right. The image histogram shows that there is no suitable threshold for global segmentation.

Difficulty with Global Segmentation Shown below is the effect of segmentation with two similar threshold settings. Note that neither is satisfactory.

Difficulty with Global Segmentation We can attempt to clean up the segmentation with other techniques. Shown below is the effect of a morphological opening operation. (Figure panels: T = 70 image, T = 80 image.)
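The opening step might look like the following sketch, which reuses the disc-shaped structuring element construction from the watershed demo code later in this lecture; the radius and the variable name seg (the thresholded image) are placeholders:

;Binary opening to remove small speckle from a thresholded image seg
r = 3                                   ;placeholder radius
disc = SHIFT(DIST(2*r+1), r, r) LE r    ;disc-shaped structuring element
clean = MORPH_OPEN(seg, disc)           ;erosion followed by dilation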

Effect of Varying Illumination An image can be considered to be the result of illumination and reflection: f(x, y) = r(x, y) i(x, y). The illumination is usually a slowly varying function compared to the scene reflectance. If we know the illumination function then we can solve for the reflectance directly: r(x, y) = f(x, y) / i(x, y). The reflectance function can then be processed directly. If we do not know i(x, y) then we can try to determine it by analysis.
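If the illumination pattern were available as an array, the division is a one-line operation; this sketch assumes a hypothetical array Illum of the same size as F:

;Recover the reflectance by pointwise division (Illum is assumed known)
R = FLOAT(F) / (FLOAT(Illum) > 1e-6)    ;guard against division by zero
R = BYTSCL(R)                           ;rescale for display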

Solving for i(x, y) Consider the function z(x, y) = ln f(x, y) = ln r(x, y) + ln i(x, y) = r_l(x, y) + i_l(x, y). If r(x, y) and i(x, y) are independent random variables, which would be the usual case, then r_l(x, y) and i_l(x, y) are also statistically independent. The probability density function of the sum of independent random variables is the convolution of their individual probability density functions. The probability density function of i_l(x, y) is likely to be approximately p_i(ξ) = δ(ξ - ξ_0). Convolution with the probability density function of the reflectance, p_r(ξ), will leave its shape essentially unchanged. Therefore, if r(x, y) can be segmented by global thresholding under constant illumination, we expect that r_l(x, y) can be segmented by global thresholding.

Solving for i(x, y) We will approximate i_l(x, y) with a slowly varying function: î_l(x, y) = a_0 + a_1 x + a_2 y + a_3 xy. We will choose the coefficients to minimize the mean-squared difference between z(x, y) and î_l(x, y). Let e(x, y) = z(x, y) - î_l(x, y) = z(x, y) - (a_0 + a_1 x + a_2 y + a_3 xy), and let the mean-squared error be denoted by
e² = (1/MN) Σ_{x=0}^{N-1} Σ_{y=0}^{M-1} e²(x, y) = (1/MN) Σ_{x=0}^{N-1} Σ_{y=0}^{M-1} (z(x, y) - (a_0 + a_1 x + a_2 y + a_3 xy))²

Solving for i(x, y) Differentiate with respect to the coefficients a_k, k = 0, 1, 2, 3, and set each derivative equal to zero. We will use the bracket notation ⟨·⟩ for the averages over the image (the double sums normalized by MN) to simplify the appearance of the equations.
∂e²/∂a_0 = -2 ⟨z(x, y) - (a_0 + a_1 x + a_2 y + a_3 xy)⟩ = 0
∂e²/∂a_1 = -2 ⟨(z(x, y) - (a_0 + a_1 x + a_2 y + a_3 xy)) x⟩ = 0
∂e²/∂a_2 = -2 ⟨(z(x, y) - (a_0 + a_1 x + a_2 y + a_3 xy)) y⟩ = 0
∂e²/∂a_3 = -2 ⟨(z(x, y) - (a_0 + a_1 x + a_2 y + a_3 xy)) xy⟩ = 0

Solving for i(x, y) The equations can be simplified to
a_0 + a_1⟨x⟩ + a_2⟨y⟩ + a_3⟨xy⟩ = ⟨z(x, y)⟩
a_0⟨x⟩ + a_1⟨x²⟩ + a_2⟨xy⟩ + a_3⟨x²y⟩ = ⟨x z(x, y)⟩
a_0⟨y⟩ + a_1⟨xy⟩ + a_2⟨y²⟩ + a_3⟨xy²⟩ = ⟨y z(x, y)⟩
a_0⟨xy⟩ + a_1⟨x²y⟩ + a_2⟨xy²⟩ + a_3⟨x²y²⟩ = ⟨xy z(x, y)⟩
All of the bracketed terms can be computed from the image information. We can then solve for the coefficients in this set of linear equations. The matrix form is
[ 1     ⟨x⟩     ⟨y⟩     ⟨xy⟩   ] [a_0]   [⟨z⟩  ]
[ ⟨x⟩   ⟨x²⟩    ⟨xy⟩    ⟨x²y⟩  ] [a_1] = [⟨xz⟩ ]
[ ⟨y⟩   ⟨xy⟩    ⟨y²⟩    ⟨xy²⟩  ] [a_2]   [⟨yz⟩ ]
[ ⟨xy⟩  ⟨x²y⟩   ⟨xy²⟩   ⟨x²y²⟩ ] [a_3]   [⟨xyz⟩]
or Ca = v, so that a = C⁻¹ v.

Solving for i(x, y) The terms in the matrix are based only on the image coordinates, while those on the right depend upon both the image and the coordinates.

;Open the image
F = read_image('D:\Harvey\Classes\SIMG782\2004\lectures\lecture_06\doc\fig7.png')
sf = size(F, /dim)

;Construct coordinate arrays for the image
x = findgen(sf[0]) #  replicate(1, sf[1])
y = findgen(sf[1]) ## replicate(1, sf[0])

;Compute the log image
FL = BYTSCL(ALOG(F))

;Construct the moments used to solve for the a coefficients
mx = mean(x)      & my = mean(y)      & mxy = mean(x*y)
mx2 = mean(x^2)   & my2 = mean(y^2)   & mx2y = mean(x^2*y)
mxy2 = mean(x*y^2) & mx2y2 = mean(x^2*y^2)
C = [[1,mx,my,mxy], [mx,mx2,mxy,mx2y], [my,mxy,my2,mxy2], [mxy,mx2y,mxy2,mx2y2]]
CI = invert(C)

Solving for i(x, y)

;Construct an illumination predictor for FL
ml = mean(FL) & mlx = mean(FL*x) & mly = mean(FL*y) & mlxy = mean(FL*x*y)
v = transpose([ml, mlx, mly, mlxy])
aL = CI ## v
GL = aL[0] + aL[1]*x + aL[2]*y + aL[3]*x*y

For this image, which is of size 500 × 400, we find

C = [      1.0        249.5        199.5       49775.3
         249.5      83083.5      49775.3    16575158.0
         199.5      49775.3      53133.5    13256808.0
       49775.3   16575158.0   13256808.0  4414517248.0 ]

v = [147.7, 41253.9, 29511.4, 8245948.0]

a = C⁻¹ v = [95.227890, 0.207723, 0.001120, 0.000018]

Solving for i(x, y) Having computed the coefficients, the rest is now easy.

;Illumination approximation
GL = aL[0] + aL[1]*x + aL[2]*y + aL[3]*x*y

;Compute the log reflectance
RL = bytscl(FL - GL)

;Compute the log reflectance histogram
hr = histogram(RL)

;Threshold the log reflectance image
TL = 175
RLT = RL GE TL

The results are illustrated on the following pages.

Segmentation Example The figure below has an illumination gradient from left to right across the image. As shown by its histogram, it is difficult to segment. The log(reflectance) has a separable histogram. (Figure panels: original, histogram of original, segmentation, log(illumination), log(reflectance), histogram.)

Segmentation Example The segmentations obtained from the original image and from the log(reflectance) image are compared below. (Figure panels: original, histogram, segmentation; log(reflectance), histogram, segmentation.)

Example 3 Continued Removal of the illumination trend by compensation of the log image gives slightly better results for the electrophoresis image. (Figure panels: original image, illumination subtraction, global threshold; log image, illumination division, global threshold.)

Morphological Watershed Segmentation Watershed segmentation is a metaphor for a computational method that iteratively fills the local minima of a greyscale image to find segment boundaries. As regions are filled from the bottom, they reach a level where they would merge. At this point dams are constructed to maintain the separation. The dams become the segment boundaries. The iterative filling process is illustrated in the figures below (Gonzalez & Woods, Section 10.5).

Watershed Segmentation Example 1 The small greyscale image of smooth black blobs is a natural test candidate for watershed segmentation. This image was segmented using the IDL watershed.pro function; the code is shown on the next page. (Figure panels: original image; watershed regions after morphological greyscale closing with a disc of radius 9; boundaries overlaid on the original image.)

Watershed Demo Code

;Watershed Demo
;Radius of disc for greyscale closing
r = 9
;Create a disc of radius r
disc = SHIFT(DIST(2*r+1), r, r) LE r

b = READ_IMAGE('doc/fig11a.png')
sa = size(b, /dim)
window, /free, xs=2*sa[0], ys=2*sa[1]
;window,/free,xs=sb[0],ys=sb[1],xpos=0,ypos=0
TVSCL, b, 0

;Remove holes of radii less than r
c = MORPH_CLOSE(b, disc, /GRAY)
TVSCL, c, 1

;Create watershed image
d = WATERSHED(c, connectivity=8)

;Display it, showing the watershed regions
tek_color
TV, d, 2
loadct, 0, /silent

;Merge original image with boundaries of watershed regions
e = b > (MAX(b) * (d EQ 0b))
TVSCL, e, 3
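As a possible follow-up to the demo (not part of the original code), the label image d returned by WATERSHED can be used to count the regions and measure their sizes:

;Count watershed regions and report their pixel counts
n_regions = MAX(d)             ;region labels run from 1 to n_regions; 0 marks boundaries
sizes = HISTOGRAM(d, MIN=1)    ;pixel count for each labeled region
PRINT, 'Number of regions: ', n_regions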

Watershed Segmentation Example 2 Revisiting the segmentation of the electrophoresis image.

Watershed Segmentation Example 3 Segmentation of the pollen image from the IDL demonstration package.