Lecture 6: Edge Detection. Saad J Bedros, sbedros@umn.edu
Review From Last Lecture: Options for Image Representation. Introduced the concept of different representations or transformations. The Fourier Transform gives the opportunity to manipulate, process, and analyze the image in the frequency domain. Frequency-domain filtering can be more efficient in certain cases.
Frequency Domain Methods Spatial Domain Frequency Domain
Major filter categories Typically, filters are classified by examining their properties in the frequency domain: (1) Low-pass (2) High-pass (3) Band-pass (4) Band-stop
Example Original signal Low-pass filtered High-pass filtered Band-pass filtered Band-stop filtered
Low-pass filters (i.e., smoothing filters): preserve low frequencies - useful for noise suppression. (Example shown in the frequency and time domains.)
High-pass filters (i.e., sharpening filters): preserve high frequencies - useful for edge detection. (Example shown in the frequency and time domains.)
Band-pass filters: preserve frequencies within a certain band. (Example shown in the frequency and time domains.)
Band-stop filters: what do they look like? (Comparison of band-pass and band-stop responses.)
Frequency Domain Methods Case 1: H(u,v) is specified in the frequency domain. Case 2: h(x,y) is specified in the spatial domain.
Frequency domain filtering: steps. F(u,v) = R(u,v) + jI(u,v)
Frequency domain filtering: steps (cont'd) (case 1): G(u,v) = F(u,v)H(u,v) = H(u,v)R(u,v) + jH(u,v)I(u,v)
Example: f(x,y) → f_p(x,y) → f_p(x,y)(-1)^(x+y) → F(u,v) → H(u,v) (centered) → G(u,v) = F(u,v)H(u,v) → g_p(x,y) → g(x,y)
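The sequence in this example can be sketched in NumPy; the identity filter H = 1 is an assumption chosen only so the round trip is easy to verify:

```python
import numpy as np

# Sketch of the frequency-domain filtering pipeline:
# pad, center with (-1)^(x+y), DFT, multiply by H, inverse DFT, un-center, crop.
f = np.random.rand(8, 8)                      # small test image
P, Q = 16, 16                                 # padded size
fp = np.zeros((P, Q)); fp[:8, :8] = f         # f_p(x,y): zero-padded image
x, y = np.indices((P, Q))
F = np.fft.fft2(fp * (-1.0) ** (x + y))       # centered spectrum F(u,v)
H = np.ones((P, Q))                           # identity filter (illustration only)
G = F * H                                     # G(u,v) = F(u,v) H(u,v)
gp = np.real(np.fft.ifft2(G)) * (-1.0) ** (x + y)   # g_p(x,y)
g = gp[:8, :8]                                # crop back to the original size
```

With H = 1 the output g recovers f exactly, which confirms the padding/centering bookkeeping.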
h(x,y) specified in spatial domain: how to generate H(u,v) from h(x,y)? If h(x,y) is given in the spatial domain (case 2), we can generate H(u,v) as follows: 1. Form h_p(x,y) by padding with zeros. 2. Multiply by (-1)^(x+y) to center its spectrum. 3. Compute its DFT to obtain H(u,v).
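These three steps can be sketched in NumPy (the 3x3 box filter and the 8x8 padded size are assumptions for illustration):

```python
import numpy as np

def spatial_to_freq(h, P, Q):
    """Generate H(u,v) from a spatial filter h(x,y), per the three steps above."""
    hp = np.zeros((P, Q))
    hp[:h.shape[0], :h.shape[1]] = h          # 1. pad with zeros
    x, y = np.indices((P, Q))
    hp = hp * (-1.0) ** (x + y)               # 2. center the spectrum
    return np.fft.fft2(hp)                    # 3. DFT -> H(u,v)

# Example: a 3x3 box (averaging) filter, padded to 8x8 (assumed sizes)
H = spatial_to_freq(np.ones((3, 3)) / 9.0, 8, 8)
```

Because the spectrum is centered, the DC term sits at (P/2, Q/2) and equals the sum of the filter coefficients (1.0 for the box filter).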
Edge Detection: Definition of an Edge; Edge Modeling and Edge Descriptors; Edge Detection Methods; First and Second Derivative Approximations
Image Segmentation: image segmentation methods look for objects that either have some measure of homogeneity within themselves, or have some measure of contrast with the objects on their border. The homogeneity and contrast measures can include features such as gray level, color, and texture.
Edge Detection: First Step to Image Segmentation. The goal of image segmentation is to find regions that represent objects or meaningful parts of objects Division of the image into regions corresponding to objects of interest is necessary for scene interpretation and understanding Identification of real objects, pseudo-objects, shadows, or actually finding anything of interest within the image, requires some form of segmentation
Image Edges: image features that are local, meaningful, detectable parts of the image.
Edge Detection Objectives: Edge detection operators are used as a first step in the line (or curve) detection process. Edge detection is also used to find complex object boundaries by marking potential edge points corresponding to places in an image where rapid changes in brightness occur.
Edge Detection Objectives: After the edge points have been marked, they can be merged to form lines/curves or object outlines. A curve is a continuous collection of edge points along a certain direction. Different methods can use the edge information to construct meaningful regions for image analysis.
Edges & Lines Relationship: edges and lines are perpendicular. The line shown here is vertical and the edge direction is horizontal. In this case the transition from black to white occurs along a row; this is the edge direction, but the line is vertical, along a column.
Definition of Edges Edges are significant local changes of intensity in an image.
Why is Edge Detection Useful? Important features can be extracted from the edges of an image (e.g., corners, lines, curves). These features are used by higher-level computer vision algorithms (e.g., recognition).
Goals of Edge Detection Goal of Edge Detection Produce a line drawing of a scene from an image of that scene. Important features can be extracted from the edges of an image (e.g., corners, lines, curves). These features are used by higher-level computer vision algorithms (e.g., segmentation, recognition).
Challenges: Effect of Illumination
Goals of an Edge Detector: the goal is to construct edge detection operators that extract the orientation information (information about the direction of the edge) and the strength of the edge. Some methods return only information about the existence of an edge at each point, for faster processing.
What Causes Intensity Changes? Geometric events: surface orientation (boundary) discontinuities, depth discontinuities, color and texture discontinuities. Non-geometric events: illumination changes, specularities, shadows, inter-reflections. (Figure labels: surface normal discontinuity, depth discontinuity, color discontinuity, illumination discontinuity.)
What Causes Intensity Changes? Reflectance change: appearance information, texture Depth discontinuity: object boundary Cast shadows Change in surface orientation: shape
Good Examples: Edge Detection. Minimal Noise in the Image
Example: Edge Detection. Different scales of edges (hair, face, ...)
Example: Edge Detection. Challenge: to extract meaningful edges not corrupted by noise.
Modeling Intensity Changes Step edge: the image intensity abruptly changes from one value on one side of the discontinuity to a different value on the opposite side.
Modeling Intensity Changes (cont'd) Ridge edge: the image intensity abruptly changes value but then returns to the starting value within some short distance (i.e., usually generated by lines).
Modeling Intensity Changes (cont'd) Roof edge: a ridge edge where the intensity change is not instantaneous but occurs over a finite distance (i.e., usually generated by the intersection of two surfaces).
Modeling Intensity Changes (cont'd) Ramp edge: a step edge where the intensity change is not instantaneous but occurs over a finite distance.
Summary: Edge Definition Edge is a boundary between two regions with relatively distinct gray level properties. Edges are pixels where the brightness function changes abruptly. Edge detectors are a collection of very important local image preprocessing methods used to locate (sharp) changes in the intensity function.
Edge Descriptors: Edge normal: unit vector in the direction of maximum intensity change. Edge direction: unit vector perpendicular to the edge normal. Edge position or center: the image position at which the edge is located. Edge strength: related to the local image contrast along the normal.
What are the steps for Edge Detection? Main Steps in Edge Detection: (1) Smoothing: suppress as much noise as possible without destroying true edges (we discussed this step previously). (2) Enhancement: apply differentiation to enhance the quality of edges (i.e., sharpening).
Main Steps in Edge Detection (cont d) (3) Thresholding: determine which edge pixels should be discarded as noise and which should be retained (i.e., threshold edge magnitude). (4) Localization: determine the exact edge location. sub-pixel resolution might be required for some applications to estimate the location of an edge to better than the spacing between pixels.
Edge Detection using Derivatives: calculus describes changes of continuous functions using derivatives. An image is a 2D function, so operators describing edges are expressed using partial derivatives. Points which lie on an edge can be detected by either: detecting local maxima or minima of the first derivative, or detecting the zero-crossings of the second derivative.
1st-order derivatives: interpretation
2nd-order derivatives: interpretation
1st order vs. 2nd order: zero crossing, locating edges
Comparison of Derivatives (looking at non-zero responses): the 1st-order derivative gives thick edges; the 2nd-order derivative gives a double edge; 2nd-order derivatives enhance fine detail much better.
Edge Detection Using Derivatives: often, points that lie on an edge are detected by: (1) detecting the local maxima or minima of the first derivative; (2) detecting the zero-crossings of the second derivative.
Image Derivatives How can we differentiate a digital image? Option 1: reconstruct a continuous image, f(x,y), then compute the derivative. Option 2: take discrete derivative (i.e., finite differences) Pick Option 2 for Digital Processing
Edge Detection using Derivatives: for a 2D function f(x,y), the partial derivative is: ∂f(x,y)/∂x = lim_{ε→0} [f(x+ε, y) − f(x,y)] / ε. For discrete data, we can approximate using finite differences: ∂f(x,y)/∂x ≈ f(x+1, y) − f(x,y). To implement the above as convolution, what would be the associated filter: [-1 1] or [1 -1]?
Edge Detection Using First Derivative: computing the 1st derivative (cont.): Backward difference: f′(x) = f(x) − f(x−1). Forward difference: f′(x) = f(x+1) − f(x). Central difference: f′(x) = f(x+1) − f(x−1).
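These three differences can be sketched in NumPy (a minimal illustration on a made-up 1D ramp edge):

```python
import numpy as np

# The three discrete first-derivative approximations on a 1D signal.
f = np.array([0, 0, 0, 1, 2, 3, 3, 3], dtype=float)   # ramp edge (assumed values)

forward = f[1:] - f[:-1]    # f'(x) = f(x+1) - f(x), attributed to x = 0..len-2
backward = f[1:] - f[:-1]   # f'(x) = f(x) - f(x-1): same values, attributed to x = 1..len-1
central = f[2:] - f[:-2]    # f'(x) = f(x+1) - f(x-1), as on the slide (often divided by 2)
```

Forward and backward differences produce the same values but assign them to different pixel positions, which is why the two candidate masks [-1 1] and [1 -1] differ only in alignment and sign convention.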
Edge Detection Using First Derivative. Cartesian vs. pixel coordinates: j corresponds to the x direction; i corresponds to the −y direction.
Edge Detection Using First Derivative: one mask is sensitive to horizontal edges, the other to vertical edges.
Edge Detection Using First Derivative 1D functions (not centered at x) (centered at x) (upward) step edge ramp edge (downward) step edge roof edge
Edge Detection Using First Derivative: computing the 1st derivative (cont.). Examples using the edge models and the mask [-1 0 1] (centered about x):
Edge Detection Using First Derivative: ∂f(x,y)/∂x and ∂f(x,y)/∂y can each be implemented with a mask such as [-1 1] as a row or as a column. Which shows changes with respect to x?
Edge Detection Using First Derivative: we can implement ∂f/∂x and ∂f/∂y using the following masks: the row mask is a good approximation at (x+1/2, y); the column mask is a good approximation at (x, y+1/2).
Edge Detection Using First Derivative: a different approximation of the gradient, a good approximation at (x+1/2, y+1/2), can be implemented using the following masks.
Approximation of First Derivative (Gradient): consider the arrangement of pixels about the pixel (i,j) in a 3×3 neighborhood. The partial derivatives can be computed from weighted sums of the neighbors; the constant c controls the emphasis given to pixels closer to the center of the mask.
Prewitt Operator Setting c = 1, we get the Prewitt operator:
Sobel Operator Setting c = 2, we get the Sobel operator:
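As a sketch (not the lecture's exact implementation), the Sobel masks can be applied with a small cross-correlation helper in NumPy; the test image is an assumed vertical step edge:

```python
import numpy as np

# Sobel masks (the c = 2 case from the slide)
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
sobel_y = sobel_x.T

def filter2(img, k):
    """Tiny 'valid' cross-correlation (flip the kernel for true convolution)."""
    m, n = k.shape
    out = np.zeros((img.shape[0] - m + 1, img.shape[1] - n + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+m, j:j+n] * k)
    return out

img = np.zeros((5, 6)); img[:, 3:] = 1.0   # vertical step edge (assumed test image)
fx = filter2(img, sobel_x)                 # strong response across the step
fy = filter2(img, sobel_y)                 # zero: no horizontal edges here
mag = np.sqrt(fx**2 + fy**2)               # gradient magnitude
```

For this image fx peaks at the step columns while fy is identically zero, illustrating the directional sensitivity of the two masks.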
Edge Detection Steps Using Gradient (i.e., sqrt is costly!)
Example (using Prewitt operator). Note: in this example, the divisions by 2 and 3 in the computation of f_x and f_y are done for normalization purposes only.
Edge Descriptors Using Gradient: the gradient is a vector which has magnitude and direction: |∇f| = sqrt(f_x² + f_y²), or the approximation |∇f| ≈ |f_x| + |f_y|. Magnitude indicates edge strength; direction indicates the direction of maximum change (i.e., perpendicular to the edge direction).
Edge Descriptors Using Gradient: the gradient of an image ∇f = (∂f/∂x, ∂f/∂y) points in the direction of most rapid change in intensity. The gradient direction (orientation of the edge normal) is given by θ = tan⁻¹((∂f/∂y)/(∂f/∂x)); the edge strength is given by the gradient magnitude. (Slide credit: Steve Seitz)
Edge Detection Using Second Derivative: approximate finding maxima/minima of the gradient magnitude by finding places where the second derivative is zero. We can't always find discrete pixels where the second derivative is exactly zero - look for zero-crossings instead.
Edge Detection Using Second Derivative: computing the 2nd derivative: f″(x) ≈ f′(x+1) − f′(x) = f(x+2) − 2f(x+1) + f(x). This approximation is centered about x+1; by replacing x+1 by x we obtain: f″(x) ≈ f(x+1) − 2f(x) + f(x−1).
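A minimal NumPy sketch of the centered second difference on a step edge (values assumed):

```python
import numpy as np

# Centered second difference: f''(x) = f(x+1) - 2 f(x) + f(x-1)
f = np.array([0, 0, 0, 1, 1, 1], dtype=float)   # step edge
second = f[2:] - 2 * f[1:-1] + f[:-2]
# second == [0, 1, -1, 0]: a +/- pair straddling the step,
# i.e., a zero-crossing at the edge location.
```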
Image Derivatives: 1st order vs. 2nd order
Edge Detection Using Second Derivative (cont'd)
Edge Detection Using Second Derivative (cont'd): computing the 2nd derivative (cont.). Examples using the edge models:
Edge Detection Using Second Derivative (cont'd): (upward) step edge, (downward) step edge, ramp edge, roof edge
Edge Detection Using Second Derivative (cont'd): four cases of zero-crossings: {+,−}, {+,0,−}, {−,+}, {−,0,+}. The slope of a zero-crossing {a, −b} is a + b. To detect strong zero-crossings, threshold the slope.
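A possible sketch of this zero-crossing test in NumPy (the {+,0,−} and {−,0,+} cases are omitted for brevity, and the signal values are made up):

```python
import numpy as np

def zero_crossings(d2, thresh):
    """Indices where d2 changes sign between adjacent samples
    and the slope a+b of the pair {a, -b} exceeds thresh."""
    sign_change = (d2[:-1] * d2[1:]) < 0      # adjacent opposite signs ({+,-} / {-,+})
    slope = np.abs(d2[:-1] - d2[1:])          # a + b for the pair {a, -b}
    return np.where(sign_change & (slope > thresh))[0]

d2 = np.array([0.0, 1.0, -1.0, 0.0, 0.1, -0.1])   # assumed 2nd-derivative signal
idx = zero_crossings(d2, thresh=0.5)
# keeps the strong crossing at index 1; rejects the weak one at index 4
```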
Noise Effect on Derivatives: where is the edge?
Solution: smooth first Where is the edge? Look for peaks in
Effect of Smoothing on Derivatives (cont'd)
Combine Smoothing with Differentiation: d/dx (h * f) = (dh/dx) * f (i.e., saves one operation)
Consider Laplacian of Gaussian: the Laplacian of Gaussian operator. Where is the edge? At the zero-crossings of the bottom graph. (ME5286 Lecture 6; slide credit: Steve Seitz)
2D edge detection filters: Laplacian of Gaussian; Gaussian; derivative of Gaussian. ∇² is the Laplacian operator. (Slide credit: Steve Seitz)
Derivative of Gaussian filters: (I * g) * h = I * (g * h). 5×5 Gaussian kernel:
0.0030 0.0133 0.0219 0.0133 0.0030
0.0133 0.0596 0.0983 0.0596 0.0133
0.0219 0.0983 0.1621 0.0983 0.0219
0.0133 0.0596 0.0983 0.0596 0.0133
0.0030 0.0133 0.0219 0.0133 0.0030
convolved with the difference filter [1 −1].
Derivative of Gaussian filters x-direction y-direction Source: L. Lazebnik
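A minimal 1D sketch of the idea, assuming a 7-tap Gaussian and the difference filter [1 -1]: by associativity of convolution, smoothing and differencing collapse into a single derivative-of-Gaussian kernel:

```python
import numpy as np

# Build a 1D Gaussian and fold the difference filter into it:
# (I * g) * d == I * (g * d), so one pass with 'dog' replaces two passes.
sigma = 1.0
x = np.arange(-3, 4, dtype=float)
g = np.exp(-x**2 / (2 * sigma**2)); g /= g.sum()   # normalized Gaussian
dog = np.convolve(g, [1.0, -1.0])                  # derivative-of-Gaussian kernel

signal = np.array([0.0] * 10 + [1.0] * 10)         # step edge at index 10 (assumed)
response = np.convolve(signal, dog, mode='same')
# the response peaks at the step location
```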
Smoothing with a Gaussian Recall: parameter σ is the scale / width / spread of the Gaussian kernel, and controls the amount of smoothing.
Effect of σ on derivatives The apparent structures differ depending on Gaussian s scale parameter. Larger values: larger scale edges detected Smaller values: finer features detected σ = 1 pixel σ = 3 pixels
Another Example: ∂I/∂x and ∂I/∂y
Another Example (cont'd): |∇I| = sqrt((∂I/∂x)² + (∂I/∂y)²), threshold = 100
Isotropic property of gradient magnitude: the magnitude of the gradient, |∇I| = sqrt((∂I/∂x)² + (∂I/∂y)²), detects edges in all directions.
Second Derivative in 2D: Laplacian
Variations of Laplacian
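The variations usually shown here are the 4- and 8-neighbor masks; a sketch (these are the standard discrete Laplacians, assumed to match the slide's figure):

```python
import numpy as np

# Two common discrete Laplacian masks.
lap4 = np.array([[0,  1, 0],
                 [1, -4, 1],
                 [0,  1, 0]], dtype=float)   # 4-neighbor variant
lap8 = np.array([[1,  1, 1],
                 [1, -8, 1],
                 [1,  1, 1]], dtype=float)   # 8-neighbor variant

# Both sum to zero, so the response in a flat region is zero.
flat_response = np.sum(lap4 * np.full((3, 3), 7.0))
```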
Laplacian - Example detect zero-crossings
Properties of Laplacian It is an isotropic operator. It is cheaper to implement than the gradient (i.e., one mask only). It does not provide information about edge direction. It is more sensitive to noise (i.e., differentiates twice).
Laplacian of Gaussian (LoG) (Marr-Hildreth operator): to reduce the noise effect, the image is first smoothed. When the filter chosen is a Gaussian, we call it the LoG edge detector; σ controls smoothing. It can be shown that ∇²G(x,y) = ((x² + y² − 2σ²)/σ⁴) e^(−(x²+y²)/(2σ²)); the inverted LoG is its negative.
Laplacian of Gaussian (LoG) - Example: (inverted LoG) filtering, then zero-crossings.
Difference of Gaussians (DoG) The Laplacian of Gaussian can be approximated by the difference between two Gaussian functions: approximation actual LoG
Difference of Gaussians (DoG) (cont'd): (a), (b), (b)-(a)
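A 1D sketch of the approximation, assuming a sigma ratio of 1.6 (a common choice; the exact ratio and scaling here are assumptions):

```python
import numpy as np

x = np.linspace(-5, 5, 201)
sigma = 1.0

# 1D LoG, up to sign and scale: g''(x) = ((x^2 - sigma^2) / sigma^4) g(x).
# Its center is negative, so the *inverted* LoG has a positive center.
log_kernel = (x**2 - sigma**2) / sigma**4 * np.exp(-x**2 / (2 * sigma**2))

def gauss(s):
    return np.exp(-x**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))

# DoG: narrow minus wide Gaussian -> positive center, negative flanks,
# i.e., the same Mexican-hat shape as the inverted LoG.
dog = gauss(sigma) - gauss(1.6 * sigma)
```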
Gradient vs. LoG: the gradient works well when the image contains sharp intensity transitions and low noise. Zero-crossings of LoG offer better localization, especially when the edges are not very sharp. (step edge, ramp edge)
Gradient vs. LoG (cont'd): LoG behaves poorly at corners
Criteria for Optimal Edge Detection (1) Good detection Minimize the probability of false positives (i.e., spurious edges). Minimize the probability of false negatives (i.e., missing real edges). (2) Good localization Detected edges must be as close as possible to the true edges. (3) Single response Minimize the number of local maxima around the true edge.
Practical Issues Noise suppression-localization tradeoff. Smoothing depends on mask size (e.g., depends on σ for Gaussian filters). Larger mask sizes reduce noise, but worsen localization (i.e., add uncertainty to the location of the edge) and vice versa. smaller mask larger mask
Practical Issues (cont d) Choice of threshold. gradient magnitude low threshold high threshold
Standard thresholding Standard thresholding: - Can only select strong edges. - Does not guarantee continuity. gradient magnitude low threshold high threshold
Hysteresis thresholding uses two thresholds: a low threshold t_l and a high threshold t_h (usually t_h = 2 t_l). Pixels above t_h are strong edges; pixels between t_l and t_h are "maybe" edges and are kept as edges only if a neighboring pixel is a strong edge.
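A possible 1D sketch of hysteresis thresholding (in 2D, "neighboring" means spatially adjacent pixels; the thresholds and magnitudes here are made up):

```python
import numpy as np

def hysteresis(mag, t_low, t_high):
    """Keep strong edges (>= t_high) and grow them into adjacent 'maybe'
    samples (>= t_low) until no more can be added."""
    strong = mag >= t_high
    maybe = (mag >= t_low) & ~strong
    edges = strong.copy()
    changed = True
    while changed:
        changed = False
        for i in np.where(maybe & ~edges)[0]:
            if (i > 0 and edges[i - 1]) or (i < len(mag) - 1 and edges[i + 1]):
                edges[i] = True
                changed = True
    return edges

mag = np.array([0.1, 0.6, 1.2, 0.7, 0.1])       # assumed gradient magnitudes
e = hysteresis(mag, t_low=0.5, t_high=1.0)      # t_high = 2 * t_low, per the slide
# the two 'maybe' samples survive because they touch the strong edge
```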
Directional Derivative: the partial derivatives of f(x,y) give the slope ∂f/∂x in the positive x direction and the slope ∂f/∂y in the positive y direction. We can generalize the partial derivatives to calculate the slope in any direction (i.e., the directional derivative).
Directional Derivative (cont'd): the directional derivative computes intensity changes in a specified direction. Compute the derivative in direction u.
Directional Derivative (cont'd): (from vector calculus) the directional derivative is a linear combination of partial derivatives: ∂f/∂u = (∂f/∂x) u_x + (∂f/∂y) u_y.
Directional Derivative (cont'd): for a unit vector u (‖u‖ = 1) with u_x = cos θ, u_y = sin θ: ∂f/∂u = cos θ ∂f/∂x + sin θ ∂f/∂y.
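A one-pixel numeric sketch (the partial-derivative values are assumed): choosing u along the gradient direction recovers the gradient magnitude:

```python
import numpy as np

fx, fy = 3.0, 4.0                    # assumed partial derivatives at one pixel
theta = np.arctan2(fy, fx)           # direction of steepest intensity change

# Directional derivative along u = (cos(theta), sin(theta))
d_max = np.cos(theta) * fx + np.sin(theta) * fy
# equals sqrt(fx^2 + fy^2) = 5.0, the gradient magnitude
```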
Edge Detection Review Edge detection operators are based on the idea that edge information in an image is found by looking at the relationship a pixel has with its neighbors If a pixel's gray level value is similar to those around it, there is probably not an edge at that point
Edge Detection Review Edge detection operators are often implemented with convolution masks and most are based on discrete approximations to differential operators Differential operations measure the rate of change in a function, in this case, the image brightness function
Edge Detection Review: preprocessing of the image is required to eliminate, or at least minimize, noise effects. There is a tradeoff between sensitivity and accuracy in edge detection. The parameters we can set to control sensitivity include the size of the edge detection mask and the value of the gray-level threshold. A larger mask or a higher gray-level threshold tends to reduce noise effects, but may result in a loss of valid edge points.