Lecture: Image Enhancement and Spatial Filtering


Lecture: Image Enhancement and Spatial Filtering
Harvey Rhody
Chester F. Carlson Center for Imaging Science, Rochester Institute of Technology
rhody@cis.rit.edu
September 29, 2005

Abstract: Applications of point processing to image segmentation by global and regional segmentation are constructed and demonstrated. An adaptive threshold algorithm is presented. Illumination compensation is shown to improve global segmentation. Finally, morphological waterfall region segmentation is demonstrated.

Uses of Spatial Filtering
Spatial filtering is used for image enhancement and for feature detection. Spatial filtering may be linear or nonlinear, and spatially invariant or spatially varying.

Noise Suppression by Image Averaging
In applications such as astronomy, noisy images are unavoidable. Noise can be reduced by averaging several images of the same scene that carry independent noise. Here we simulate the process by averaging copies of an image, each corrupted by independent noise with µ = 0 and σ = 64 levels.
(Figure: the original image and the averaged result for n = 1, 8, 16, 64 and 128 copies.)

Effect of Averaging
Shown at the right is the histogram of the pixel noise after 8, 16, 64 and 128 images have been averaged. The standard deviation, which is proportional to the width of each curve, is reduced by a factor of √n. The reduction factors are 2.8, 4, 8 and 11.3 for n = 8, 16, 64 and 128, respectively. Averaging is clearly a very effective tool for noise reduction, provided that enough independent samples are available.
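
A minimal NumPy sketch of this experiment (not the lecture's IDL code; the flat test image, image size and random seed are assumptions chosen for illustration):

# Average n independent noisy copies of the same scene and measure the
# residual noise; its standard deviation falls roughly as sigma / sqrt(n).
import numpy as np

rng = np.random.default_rng(0)
clean = np.full((256, 256), 128.0)            # stand-in for the original image
sigma = 64.0                                  # noise standard deviation (levels)

for n in (1, 8, 16, 64, 128):
    noisy = clean + rng.normal(0.0, sigma, size=(n,) + clean.shape)
    avg = noisy.mean(axis=0)                  # average of n noisy copies
    print(n, round((avg - clean).std(), 1))   # approximately sigma / sqrt(n)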

Region Averaging
Suppose that only a single noisy image is available. Can averaging still be employed for noise reduction? Strategy: average the pixels in a neighborhood of each pixel. This necessarily averages the image as well as the noise.

Linear Spatial Filtering
Region averaging is one form of spatial filtering. In linear spatial filtering, each output pixel is a weighted sum of input pixels. Shown at the right is a reproduction of G&W Figure 3.32, which illustrates the mechanics of spatial filtering. In general, linear spatial filtering of an image f by a filter with a weight mask w of size (2a + 1) × (2b + 1) is

g(x, y) = \sum_{s=-a}^{a} \sum_{t=-b}^{b} w(s, t) f(x + s, y + t)

Filters
A linear spatially invariant filter can be represented by a mask that is convolved with the image array. For a 3 × 3 mask the weights are the values w_i:

w_1 w_2 w_3
w_4 w_5 w_6
w_7 w_8 w_9

If the gray levels of the pixels under the mask are denoted by z_1, z_2, ..., z_9 then the response of the linear mask is the sum

R = w_1 z_1 + w_2 z_2 + ... + w_9 z_9

The result R is written to the output array at the position of the filter origin (usually the center of the filter).
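
The weighted-sum formula above translates directly into code. The sketch below is illustrative rather than the lecture's IDL implementation; the function name spatial_filter and the zero padding at the borders are assumptions, since the slides do not specify edge handling at this point:

# Direct implementation of g(x, y) = sum_{s,t} w(s, t) f(x + s, y + t)
# for a (2a+1) x (2b+1) mask w; border pixels are handled by zero padding.
import numpy as np

def spatial_filter(f, w):
    a, b = w.shape[0] // 2, w.shape[1] // 2
    fp = np.pad(f, ((a, a), (b, b)), mode="constant")
    g = np.zeros(f.shape, dtype=float)
    for s in range(-a, a + 1):
        for t in range(-b, b + 1):
            g += w[s + a, t + b] * fp[a + s : a + s + f.shape[0],
                                      b + t : b + t + f.shape[1]]
    return g

With w set to a 3 × 3 array of ones divided by 9, this reproduces the box smoothing used on the following slides.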

Smoothing Filters
Smoothing filters are used for blurring and noise reduction. Blurring is a common preprocessing step to remove small details when the objective is the location of large objects. High-frequency noise is reduced by the lowpass characteristic of smoothing filters. Smoothing filters have all positive weights, and the weights are typically chosen to sum to unity so that the average brightness value is maintained. The simplest examples are the box (averaging) masks, whose weights are all equal to 1:

(1/9) × a 3 × 3 array of ones,  (1/25) × a 5 × 5 array of ones,  (1/49) × a 7 × 7 array of ones.

Smoothing Filters
Smoothing filters calculate a weighted average of the pixels under the mask. The lowpass frequency response becomes more pronounced as the filter size is increased. Masks are usually chosen to have odd dimensions to provide a center pixel location; the output is written to that pixel. Larger filters do more smoothing but also produce more blurring. An example is shown below.
(Figure: original image, noisy image, 3 × 3 smoothing, 5 × 5 smoothing.)
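
A short sketch of this comparison, using scipy.ndimage.uniform_filter as a stand-in for the box masks (the synthetic test scene and noise level are assumptions):

# Smooth a noisy image with 3x3 and 5x5 box averages and compare the noise
# remaining in a flat region of the scene.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
image = np.zeros((128, 128))
image[32:96, 32:96] = 200.0                        # simple synthetic scene
noisy = image + rng.normal(0.0, 25.0, image.shape)

smooth3 = ndimage.uniform_filter(noisy, size=3)    # 3x3 box average
smooth5 = ndimage.uniform_filter(noisy, size=5)    # 5x5 box average

flat = (slice(40, 88), slice(40, 88))              # constant interior region
print(noisy[flat].std(), smooth3[flat].std(), smooth5[flat].std())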

Smoothing Filters
Smoothing is linear and spatially invariant; it is equivalent to convolution of the image with the mask. In the frequency domain this is equivalent to multiplication of the image transform by the mask transform. The frequency response of smoothing filters of several sizes is shown below, with the frequency axis in normalized units. Note how the frequency response becomes narrower as the smoothing filter becomes larger.

Frequency Response of Smoothing Filters
(Figure: frequency responses of box smoothing filters for M = 3, 5, 7 and 9.)

Frequency Response of Smoothing Filters
The frequency response along a slice through the origin of the frequency plane is shown below for several values of M.
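
The response curves can be reproduced by taking the Fourier transform of each zero-padded mask; in the sketch below the padded transform size N = 256 is an assumption:

# Frequency response of M x M box filters along a horizontal slice through
# the origin, from a zero-padded 2D FFT of each mask.
import numpy as np

N = 256                                     # padded transform size
for M in (3, 5, 7, 9):
    mask = np.ones((M, M)) / M**2           # weights sum to unity
    H = np.fft.fftshift(np.abs(np.fft.fft2(mask, s=(N, N))))
    response = H[N // 2, :]                 # slice through the origin
    print(M, response[N // 2 + N // 8])     # falls as M grows (narrower response)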

Filter Design
The larger smoothing filters remove more high-frequency energy. This removes more of the noise, but it also removes detail information from edges and other image features. Averaging filters can emphasize some pixels more than others. Here is a mask that emphasizes the center pixels more than the edge pixels (normalized by the sum of its weights so that the weights sum to unity):

1 1 2 1 1
1 2 3 2 1
2 3 4 3 2
1 2 3 2 1
1 1 2 1 1

Filter Design
The frequency response of this lowpass filter compared with the order 5 averaging filter is shown below (LP filter: solid line; AVG filter: dashed line). The LP filter has lower sidelobes but is wider in the main lobe. This illustrates the tradeoff that can be made by adjusting the mask weights.
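
A sketch for comparing the two responses numerically; the padded transform size and the unit-sum normalization of the weighted mask are assumptions:

# Frequency-response slices of the 5x5 weighted mask above and of the 5x5
# box (averaging) filter, computed from zero-padded FFTs.
import numpy as np

N = 256
box = np.ones((5, 5)) / 25.0
weighted = np.array([[1, 1, 2, 1, 1],
                     [1, 2, 3, 2, 1],
                     [2, 3, 4, 3, 2],
                     [1, 2, 3, 2, 1],
                     [1, 1, 2, 1, 1]], dtype=float)
weighted /= weighted.sum()                  # normalize weights to sum to unity

def response_slice(mask):
    H = np.fft.fftshift(np.abs(np.fft.fft2(mask, s=(N, N))))
    return H[N // 2, :]                     # slice through the origin

# Plotting these two slices shows the tradeoff described on the slide:
# lower sidelobes for the weighted mask at the cost of a wider main lobe.
box_slice, lp_slice = response_slice(box), response_slice(weighted)
print(box_slice[N // 2], lp_slice[N // 2])  # both equal 1 at zero frequency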

Filtered Images
The effect of the lowpass filter compared with the order 5 averaging filter is illustrated below with filtered noisy images. The LP filter has a slightly broader frequency response, which shows up as slightly less blurring in the right-hand picture.
(Figure: the noisy image filtered with the averaging filter (left) and with the lowpass filter (right).)

Example: Smoothing of Test Pattern
Shown at the right is the test pattern image of G&W Figure 3.35. We will examine the effect of smoothing this image with filters of different sizes. We will make graphs of the horizontal slice through the vertical bars and the noisy box along row 125, which is indicated by the gray horizontal line.

Smoothing Filter of Size 3 × 3
(Right top) original image; (right bottom) result of smoothing with a 3 × 3 filter; (below) plots along row 125: (top) the entire row, (middle) a section of the vertical bars, (bottom) a section of the noisy box.

Smoothing Filter of Size 9 × 9
(Right top) original image; (right bottom) result of smoothing with a 9 × 9 filter; (below) plots along row 125: (top) the entire row, (middle) a section of the vertical bars, (bottom) a section of the noisy box.

Smoothing Filter of Size 25 × 25
(Right top) original image; (right bottom) result of smoothing with a 25 × 25 filter; (below) plots along row 125: (top) the entire row, (middle) a section of the vertical bars, (bottom) a section of the noisy box.

Smoothing Filter of Size 35 × 35
(Right top) original image; (right bottom) result of smoothing with a 35 × 35 filter; (below) plots along row 125: (top) the entire row, (middle) a section of the vertical bars, (bottom) a section of the noisy box.

Programming
Linear filtering programs are quite easy to write in IDL by using the built-in function CONVOL. When all of the filter weights are equal, one can use the function SMOOTH. It is worthwhile to use these functions because they run much faster than any user functions written in IDL. CONVOL is related to mathematical convolution and to filter weighting; how it works depends upon the choices of some keyword parameters.

CONVOL
The CONVOL function convolves an array with a kernel and returns the result. If A is an N × M array and K is an n × m kernel, with n < N and m < M, then B = CONVOL(A, K, S) computes an output array B by

B[t, u] = (1/S) \sum_{i=0}^{n-1} \sum_{j=0}^{m-1} A[t + i - n/2, u + j - m/2] K[i, j]

Example 1: B = CONVOL(A, K, 1), where A is an array that is zero everywhere except for an impulse, A[2, 2] = 1 (displayed on the next slide), and

K = 1 2 3
    4 5 6
    7 8 9

Example 1 (cont)
Since n = m = 3 we have n/2 = m/2 = 1 (integer division). Since A is zero everywhere except A[2, 2] = 1, we can write A[r, s] = \delta[r - 2, s - 2]. Then A[t + i - 1, u + j - 1] = \delta[t + i - 3, u + j - 3], so that

B[t, u] = \sum_{i=0}^{2} \sum_{j=0}^{2} \delta[t + i - 3, u + j - 3] K[i, j] = K[3 - t, 3 - u]

A = 0 0 0 0 0 0 0        K = 1 2 3        B = 0 0 0 0 0 0 0
    0 0 0 0 0 0 0            4 5 6            0 9 8 7 0 0 0
    0 0 1 0 0 0 0            7 8 9            0 6 5 4 0 0 0
    0 0 0 0 0 0 0                             0 3 2 1 0 0 0
    0 0 0 0 0 0 0                             0 0 0 0 0 0 0
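
The default behaviour of CONVOL is a centered correlation, so its impulse response can be imitated with scipy.ndimage.correlate. This is a NumPy/SciPy analogue, not IDL; the 5 × 7 array size is an assumption, and edge handling differs in detail from CONVOL's defaults:

# An impulse at A[2, 2] filtered with the 3x3 kernel K: the output contains
# the kernel reversed in rows and columns, centered on the impulse.
import numpy as np
from scipy import ndimage

A = np.zeros((5, 7))
A[2, 2] = 1.0                                     # the impulse
K = np.arange(1, 10, dtype=float).reshape(3, 3)   # [[1,2,3],[4,5,6],[7,8,9]]

B = ndimage.correlate(A, K, mode="constant", cval=0.0)
print(B)      # reversed kernel (9 8 7 / 6 5 4 / 3 2 1) centered at [2, 2]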

Example 1 (cont)
The result B contains the kernel pattern, reversed in rows and columns, and centered on the location of the impulse in the array A. The kernel is reversed in the result because it is not reversed in the equation, as it would be with true convolution. Mathematical convolution is carried out by unsetting the CENTER keyword. With A and K as above, C = CONVOL(A, K, S, CENTER=0) with S = 1 produces

C = 0 0 0 0 0 0 0
    0 0 0 0 0 0 0
    0 0 1 2 3 0 0
    0 0 4 5 6 0 0
    0 0 7 8 9 0 0

CONVOL with CENTER=0
When the keyword CENTER is explicitly set to zero, the equation computed by the routine is (in 2D)

C[t, u] = \sum_{i=0}^{n-1} \sum_{j=0}^{m-1} A[t - i, u - j] K[i, j]

In the case A[r, s] = \delta[r - 2, s - 2] we have

C[t, u] = \sum_{i=0}^{n-1} \sum_{j=0}^{m-1} \delta[t - i - 2, u - j - 2] K[i, j] = K[t - 2, u - 2]

The result has the kernel unflipped but shifted so that its corner is at the position of the impulse in A.
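
A direct sketch of the CENTER=0 equation (the helper name convol_center0 and the treatment of out-of-range indices as zero are assumptions made for illustration):

# C[t, u] = sum_{i,j} A[t - i, u - j] K[i, j], with out-of-range indices
# treated as zero; applied to the impulse it leaves the kernel unflipped,
# with its corner at the impulse position.
import numpy as np

def convol_center0(A, K):
    N, M = A.shape
    n, m = K.shape
    C = np.zeros((N, M))
    for t in range(N):
        for u in range(M):
            for i in range(n):
                for j in range(m):
                    if 0 <= t - i < N and 0 <= u - j < M:
                        C[t, u] += A[t - i, u - j] * K[i, j]
    return C

A = np.zeros((5, 7))
A[2, 2] = 1.0
K = np.arange(1, 10, dtype=float).reshape(3, 3)
print(convol_center0(A, K))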

Using CONVOL
CONVOL uses the keywords CENTER, EDGE_TRUNCATE and EDGE_WRAP to control the computation. We will do several computation examples in class to illustrate the different results. The result returned by CONVOL is always of the same size as the input array, namely N × M. True convolution produces an output of size (N + n - 1) × (M + m - 1); it is necessary to use zero padding to achieve this result. This will be discussed further in connection with the FFT. The kernel must be smaller than the array.
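
The size behaviour can be illustrated with scipy.signal.convolve2d, which offers both a full-size and a same-size result (an analogue for comparison, not the IDL routine):

# Output sizes: a full linear convolution of an N x M array with an n x m
# kernel has size (N + n - 1) x (M + m - 1); a same-size result matches CONVOL.
import numpy as np
from scipy.signal import convolve2d

A = np.zeros((5, 7))
A[2, 2] = 1.0
K = np.arange(1, 10, dtype=float).reshape(3, 3)

full = convolve2d(A, K, mode="full")   # zero padding is implicit
same = convolve2d(A, K, mode="same")   # same size as A, like CONVOL
print(full.shape, same.shape)          # (7, 9) and (5, 7)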

SMOOTH
The IDL function SMOOTH implements a strictly uniform kernel and is useful for uniform smoothing. The code has been written to take advantage of the uniform kernel, which makes it faster than CONVOL.

Discrete 1D Convolution
Let f(x) be defined for x = 0, 1, 2, ..., A - 1 and h(x) be defined for x = 0, 1, 2, ..., B - 1. Let M be an integer such that M ≥ A + B - 1. Let f_e(x) and h_e(x) be extended by zero padding on the interval 0 ≤ x ≤ M - 1:

f_e(x) = f(x) for 0 ≤ x ≤ A - 1, and f_e(x) = 0 for A ≤ x ≤ M - 1
h_e(x) = h(x) for 0 ≤ x ≤ B - 1, and h_e(x) = 0 for B ≤ x ≤ M - 1

Then extend f_e(x) and h_e(x) to all integers x by repeating the basic interval with period M.

Discrete 1D Convolution (cont)
The periodic extensions satisfy f_e(x + kM) = f_e(x) and h_e(x + kM) = h_e(x) for every integer k. Then the discrete convolution of f and h is

g(x) = f ∗ h = \sum_{m=0}^{M-1} f_e(m) h_e(x - m)

Note that g(x) is periodic with period M and is defined for all integers x. How would you show that?
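
A sketch of this construction for two short sequences (the example values of f and h are assumptions):

# Zero-pad f and h to length M >= A + B - 1, treat them as periodic with
# period M, and form the circular convolution; it matches the linear
# convolution because M is large enough.
import numpy as np

f = np.array([1.0, 2.0, 3.0])                 # length A = 3
h = np.array([1.0, -1.0])                     # length B = 2
A, B = len(f), len(h)
M = A + B - 1                                 # smallest allowed period

fe = np.pad(f, (0, M - A))                    # zero-padded extensions
he = np.pad(h, (0, M - B))

g = np.array([sum(fe[m] * he[(x - m) % M] for m in range(M)) for x in range(M)])
print(g)                                      # one period of the result
print(np.convolve(f, h))                      # identical, since M >= A + B - 1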

Matrix Expression for 1D Convolution
The main period of g(x), 0 ≤ x ≤ M - 1, can be computed by g = Hf, where g and f are column vectors of length M and H is an M × M matrix:

[ g(0)   ]   [ h_e(0)    h_e(-1)   h_e(-2)   ...  h_e(-M+1) ] [ f_e(0)   ]
[ g(1)   ]   [ h_e(1)    h_e(0)    h_e(-1)   ...  h_e(-M+2) ] [ f_e(1)   ]
[ g(2)   ] = [ h_e(2)    h_e(1)    h_e(0)    ...  h_e(-M+3) ] [ f_e(2)   ]
[ ...    ]   [ ...                                          ] [ ...      ]
[ g(M-1) ]   [ h_e(M-1)  h_e(M-2)  h_e(M-3)  ...  h_e(0)    ] [ f_e(M-1) ]

By making use of the periodicity of h_e we have the equivalent expression

[ g(0)   ]   [ h_e(0)    h_e(M-1)  h_e(M-2)  ...  h_e(1) ] [ f_e(0)   ]
[ g(1)   ]   [ h_e(1)    h_e(0)    h_e(M-1)  ...  h_e(2) ] [ f_e(1)   ]
[ g(2)   ] = [ h_e(2)    h_e(1)    h_e(0)    ...  h_e(3) ] [ f_e(2)   ]
[ ...    ]   [ ...                                       ] [ ...      ]
[ g(M-1) ]   [ h_e(M-1)  h_e(M-2)  h_e(M-3)  ...  h_e(0) ] [ f_e(M-1) ]

Each row of H is a circular shift of the one above. A matrix of this form is called a circulant matrix.
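
The circulant form can be checked with scipy.linalg.circulant; this sketch reuses the short sequences from the previous example:

# Build the M x M circulant matrix H with H[i, j] = h_e((i - j) mod M) and
# verify that g = H f reproduces the periodic convolution.
import numpy as np
from scipy.linalg import circulant

f = np.array([1.0, 2.0, 3.0])
h = np.array([1.0, -1.0])
M = len(f) + len(h) - 1
fe = np.pad(f, (0, M - len(f)))
he = np.pad(h, (0, M - len(h)))

H = circulant(he)          # first column is h_e; each column is shifted down
g = H @ fe
print(g)                   # same values as the periodic convolution above
print(np.convolve(f, h))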

2D Convolution
In this case f(x, y) and h(x, y) are 2D discrete arrays of size A × B and C × D, respectively. They are extended by zero padding:

f_e(x, y) = f(x, y) for 0 ≤ x ≤ A - 1 and 0 ≤ y ≤ B - 1, and f_e(x, y) = 0 for A ≤ x ≤ M - 1 or B ≤ y ≤ N - 1
h_e(x, y) = h(x, y) for 0 ≤ x ≤ C - 1 and 0 ≤ y ≤ D - 1, and h_e(x, y) = 0 for C ≤ x ≤ M - 1 or D ≤ y ≤ N - 1

where M = A + C - 1 and N = B + D - 1. The functions repeat periodically in a tiling fashion with tiles of size M × N. The 2D convolution is

g(x, y) = \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} f_e(m, n) h_e(x - m, y - n)
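
A sketch that checks the padding rule numerically: with M = A + C - 1 and N = B + D - 1, the periodic (FFT-based) convolution of the zero-padded arrays equals the full linear convolution (the example arrays are assumptions):

# With M = A + C - 1 and N = B + D - 1, circular (FFT-based) convolution of
# the zero-padded arrays equals the full linear convolution.
import numpy as np
from scipy.signal import convolve2d

f = np.arange(12, dtype=float).reshape(3, 4)   # A x B = 3 x 4
h = np.array([[1.0, -1.0], [2.0, 0.0]])        # C x D = 2 x 2
M = f.shape[0] + h.shape[0] - 1
N = f.shape[1] + h.shape[1] - 1

G = np.fft.ifft2(np.fft.fft2(f, s=(M, N)) * np.fft.fft2(h, s=(M, N))).real
print(np.allclose(G, convolve2d(f, h, mode="full")))   # True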