Digital Image Processing, 3rd ed.


Chapter 6 Color Image Processing

Pseudocolor processing (vs. true color). Pseudocolor (false color) processing is the assignment of colors to a grayscale image (or a set of grayscale images), in order to enhance or emphasize some properties of the image.

Intensity slicing: in the HSI representation, threshold on the intensity value. Assign a color c1 to the points whose intensity is less than a prescribed value l, and a color c2 to those between l and L-1.
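A minimal sketch of intensity slicing with two colors, assuming 8-bit grayscale input; the threshold and the colors c1, c2 below are hypothetical values chosen for illustration.

```python
# Intensity slicing: map a grayscale image to two colors depending on
# whether each pixel's intensity is below a prescribed value l.
import numpy as np

def intensity_slice(gray, l, c1, c2):
    """Assign color c1 where gray < l, and color c2 where l <= gray <= L-1."""
    out = np.empty(gray.shape + (3,), dtype=np.uint8)
    below = gray < l
    out[below] = c1       # e.g. blue for the darker points
    out[~below] = c2      # e.g. yellow for the brighter points
    return out

gray = np.array([[10, 200], [90, 150]], dtype=np.uint8)
rgb = intensity_slice(gray, l=128, c1=(0, 0, 255), c2=(255, 255, 0))
```

More slices are obtained the same way, with one threshold (and one color) per intensity interval.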

Example: Matlab's surface view is a pseudocolor transformation, assigning a color to height (in this case, grayscale intensity).

jet colormap

HSV (HSI) colormap. NB. Since the hue is an angle, this map is very useful for representing periodic images.

Mono Lake, California (NASA/Landsat). This Landsat 7 image of Mono Lake was acquired on July 27, 2000. It is a false-color composite made from the mid-infrared, near-infrared, and green spectral channels of the Landsat 7 ETM+ sensor; it also includes the panchromatic 15-meter band for spatial sharpening purposes. In this image, the waters of Mono Lake appear bluish-black and vegetation appears bright green. You will notice the vegetation to the west of the lake and following the tributaries that enter the lake.

Full-color processing: we have a full-color image and apply processing to it directly. Most of the techniques studied earlier can be applied to color images. Considering each color component as a grayscale image, it is possible to:
- apply the known procedures to each of the grayscale images
- think of each color point as a vector and apply a method to a vector image rather than a scalar image

Basic intensity manipulations:
- Color intensity transformations and complement
- Color slicing/segmentation
- Color edge detection

Color transformations. Color complements: RGB <=> CMY
C = 1 - R
M = 1 - G
Y = 1 - B
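The complement above can be sketched in a couple of lines, assuming channel values normalized to [0, 1]; the pixel value below is a hypothetical example.

```python
# RGB -> CMY complement: each channel is subtracted from 1.
import numpy as np

def rgb_to_cmy(rgb):
    """C = 1-R, M = 1-G, Y = 1-B, channel-wise."""
    return 1.0 - np.asarray(rgb, dtype=float)

cmy = rgb_to_cmy([[0.0, 0.5, 1.0]])  # one pixel with R=0, G=0.5, B=1
```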

Spatial techniques for smoothing and sharpening.
Color smoothing (average over a neighborhood S_xy of K pixels):
c̄(x, y) = (1/K) Σ_{(s,t) in S_xy} c(s, t)
         = [ (1/K) Σ_{(s,t) in S_xy} R(s, t),  (1/K) Σ_{(s,t) in S_xy} G(s, t),  (1/K) Σ_{(s,t) in S_xy} B(s, t) ]^T
Using the Laplacian:
∇²c(x, y) = [ ∇²R(x, y), ∇²G(x, y), ∇²B(x, y) ]^T
Similarly, one can apply the previous mask-based techniques to each of the color channels R, G, B.
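A minimal sketch of the per-channel smoothing above with a 3x3 box average; the tiny image is a hypothetical example, and only the "valid" interior region is computed (no border padding).

```python
# Per-channel color smoothing: apply the same 3x3 box average to R, G, B.
import numpy as np

def box_smooth_valid(channel):
    """3x3 box average of a 2D array; output covers only the interior pixels."""
    h, w = channel.shape
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):
        for dx in range(3):
            out += channel[dy:dy + h - 2, dx:dx + w - 2]
    return out / 9.0

rgb = np.zeros((3, 3, 3))
rgb[1, 1] = (9.0, 18.0, 27.0)  # one bright pixel in each channel
smoothed = np.stack([box_smooth_valid(rgb[..., c]) for c in range(3)], axis=-1)
```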

Segmentation based on color.
- Recall that a grayscale image can be segmented into two parts by thresholding.
- Similarly, color images may be segmented by one or more thresholds in one of the color channels.
- In some applications, thresholding the hue value may give good results.


Segmentation based on color. If we want to consider all the channels at the same time, a color is associated with a vector a. We must define an enclosing region around a: the set of colors z with D(z, a) ≤ D_0, where D is a distance from a. This can be done using different distances:
- Euclidean norm: D(z, a) = sqrt( (z − a)ᵀ (z − a) )
- Anisotropic Euclidean norm: D(z, a) = sqrt( (z − a)ᵀ C⁻¹ (z − a) )
- Infinity norm: D(z, a) = max{ |z_R − a_R|, |z_G − a_G|, |z_B − a_B| }
NB. The same can be done using HSV or other color spaces.
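A sketch of the Euclidean-norm case: mark every pixel whose RGB vector z lies within distance D_0 of a reference color a. The reference color and threshold below are hypothetical values.

```python
# Color segmentation with the Euclidean norm: mask = ||z - a|| <= D0.
import numpy as np

def segment_by_color(img, a, d0):
    """Return a boolean mask, True where the pixel color is within d0 of a."""
    diff = img.astype(float) - np.asarray(a, dtype=float)
    dist = np.sqrt(np.sum(diff ** 2, axis=-1))
    return dist <= d0

img = np.array([[[250, 10, 10], [10, 250, 10]]], dtype=np.uint8)
mask = segment_by_color(img, a=(255, 0, 0), d0=30.0)  # "close to red"
```

The anisotropic variant would replace the sum of squares with (z − a)ᵀ C⁻¹ (z − a) for a covariance matrix C estimated from sample colors.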

Color edge detection and the color image gradient.
The gradient is defined for scalar functions f:
∇f = grad(f) = [∂f/∂x, ∂f/∂y]^T
Applying the gradient to each of the RGB channels separately can yield erroneous results. We need a way to extend the definition to vector-valued functions (Di Zenzo, 1986).

Denote by r, g, b the unit vectors of the RGB color space. Define
u = (∂R/∂x) r + (∂G/∂x) g + (∂B/∂x) b
v = (∂R/∂y) r + (∂G/∂y) g + (∂B/∂y) b
Define also
g_xx = ||u||² = uᵀu
g_yy = ||v||² = vᵀv
g_xy = uᵀv = ⟨u, v⟩ (scalar product)

The direction of maximum change:
θ(x, y) = (1/2) arctan( 2 g_xy / (g_xx − g_yy) )
Rate of change in that direction:
F_θ(x, y) = sqrt( (1/2) [ (g_xx + g_yy) + (g_xx − g_yy) cos 2θ + 2 g_xy sin 2θ ] )
Note that the direction is defined up to 90 degrees; this in fact defines 2 directions (of maximum and minimum change). The quantities g_xx, g_yy, g_xy can be computed using the methods seen before (e.g. Sobel masks).
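The Di Zenzo quantities at a single pixel can be sketched as below, assuming the per-channel derivatives R_x, G_x, B_x, R_y, G_y, B_y have already been estimated (e.g. with Sobel masks); the numerical values are hypothetical.

```python
# Di Zenzo color gradient at one pixel: direction and rate of maximum change.
import math

def di_zenzo(rx, gx, bx, ry, gy, by):
    """Return (theta, F_theta) from the channel derivatives at a pixel."""
    gxx = rx * rx + gx * gx + bx * bx          # u . u
    gyy = ry * ry + gy * gy + by * by          # v . v
    gxy = rx * ry + gx * gy + bx * by          # u . v
    theta = 0.5 * math.atan2(2.0 * gxy, gxx - gyy)
    f = math.sqrt(max(0.0, 0.5 * ((gxx + gyy)
                                  + (gxx - gyy) * math.cos(2 * theta)
                                  + 2.0 * gxy * math.sin(2 * theta))))
    return theta, f

# Purely horizontal variation, identical in all three channels:
theta, f = di_zenzo(1.0, 1.0, 1.0, 0.0, 0.0, 0.0)
```

Using atan2 instead of a bare arctan avoids the division by zero when g_xx = g_yy.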


This concludes our study of color images and Chapter 6 (if you wish to learn more, you can consult the book). Next topic: image segmentation.

Image segmentation: the process of decomposing the image into subregions/objects. The level of detail (what defines a subregion or an object) depends on the application under consideration.

Examples: (a series of example images is shown on the slides)

We will consider 2 types of segmentation: - based on intensity values (discontinuity) - based on similarity properties (according to some predefined criteria)

Fundamentals
Denote by R the whole image region. The segmentation produces a partition of R into n subregions R_i, i = 1, ..., n, such that:
- ∪_{i=1}^{n} R_i = R (the regions cover the whole image)
- R_i ∩ R_j = ∅ for i ≠ j (regions are disjoint)
- Q(R_i) = TRUE for i = 1, 2, ..., n (Q is some condition that the regions must satisfy)
- Q(R_i ∪ R_j) = FALSE for adjacent regions R_i, R_j

Intensity-based methods use the discontinuity of the signal (edges). Recall from Lecture 6 (on smoothing and sharpening) the first and second derivatives.

First derivative operator:
1. Must be 0 on constant regions
2. Is nonzero on ramps/steps
3. Is nonzero at the onset of ramps/steps
Second derivative operator:
1. Must be 0 on constant regions
2. Must be nonzero at the onset of steps/ramps
3. Must be 0 along ramps with constant slope
(The slide illustrates these on step, ramp, and roof edge profiles.)

∂f/∂x ≈ f(x+1) − f(x),   ∂²f/∂x² ≈ f(x+1) − 2f(x) + f(x−1)
We see that:
- First-order derivatives produce thicker edges in an image
- Second-order derivatives have a stronger response to fine detail (points, noise, ...)
- Second-order derivatives produce a double-edge response at ramps/steps
- The sign of the second derivative can be used to determine whether a transition is dark-to-light or light-to-dark
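These discrete derivatives can be checked on a 1-D profile; the flat-ramp-flat signal below is a hypothetical example. The first derivative is constant along the ramp, while the second derivative is nonzero only at the ramp's onset and end, with opposite signs.

```python
# Discrete first and second derivatives of a 1-D intensity profile.
import numpy as np

f = np.array([5, 5, 5, 6, 7, 8, 8, 8], dtype=float)  # flat, ramp, flat

d1 = f[1:] - f[:-1]                  # f(x+1) - f(x)
d2 = f[2:] - 2 * f[1:-1] + f[:-2]    # f(x+1) - 2f(x) + f(x-1)
```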

To apply first/second order derivatives, we have seen how to use filter masks: the response of a 3x3 mask with coefficients w_i over the covered pixels z_i is R = Σ_{i=1}^{9} w_i z_i (computed here by correlation).

Detection of isolated points: use the second derivative (Laplacian mask).
∇²f = ∂²f/∂x² + ∂²f/∂y²
∇²f ≈ f(x−1, y) + f(x+1, y) + f(x, y−1) + f(x, y+1) − 4f(x, y)
Thresholding the response R of the Laplacian mask:
g(x, y) = 1 if |R(x, y)| ≥ T, 0 otherwise (T is a threshold value)
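A sketch of this point detector with the 4-neighbor Laplacian and a threshold T; the image and the threshold are hypothetical, and border pixels are simply left at 0.

```python
# Isolated-point detection: threshold the magnitude of the Laplacian response.
import numpy as np

def detect_points(f, T):
    """Return a binary map: 1 where |Laplacian response| >= T, else 0."""
    f = f.astype(float)
    r = np.zeros_like(f)
    r[1:-1, 1:-1] = (f[:-2, 1:-1] + f[2:, 1:-1] + f[1:-1, :-2]
                     + f[1:-1, 2:] - 4 * f[1:-1, 1:-1])
    return (np.abs(r) >= T).astype(np.uint8)

img = np.zeros((5, 5))
img[2, 2] = 100.0                  # one isolated bright point
points = detect_points(img, T=150.0)
```

The point itself responds with magnitude 400, while its four neighbors respond with only 100, so a threshold between the two isolates the point.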

Detection of edges: second and first derivatives. Second derivatives give thinner lines (and generally a higher response), but the double-line effect must be handled properly: taking |∇f| gives thicker lines, while keeping only the positive values of the Laplacian gives thin single lines.

NB. The Laplacian is isotropic, i.e. it picks up edges equally well in all directions. If we are interested in specific edge directions, it is useful to use other (anisotropic) masks: the preferred direction is weighted by a higher value, and the sum of the mask coefficients is 0 (so the response is consistent with constant images).
Thresholding: g(x, y) = 1 if |R(x, y)| ≥ T, 0 otherwise
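A sketch of such an anisotropic mask: a horizontal line detector whose middle row carries the higher weight and whose coefficients sum to 0 (this 3x3 mask is a common line-detection example; the test image is hypothetical).

```python
# Direction-selective mask: responds strongly to horizontal lines.
import numpy as np

horizontal_mask = np.array([[-1, -1, -1],
                            [ 2,  2,  2],
                            [-1, -1, -1]], dtype=float)

def mask_response(f, w):
    """Correlation response R = sum_i w_i z_i at each interior pixel."""
    h, ww = f.shape
    out = np.zeros((h - 2, ww - 2))
    for dy in range(3):
        for dx in range(3):
            out += w[dy, dx] * f[dy:dy + h - 2, dx:dx + ww - 2]
    return out

img = np.zeros((5, 5))
img[2, :] = 10.0                     # a one-pixel-thick horizontal line
resp = mask_response(img, horizontal_mask)
```

The response peaks on the line itself and is negative just above and below it; a vertical or diagonal detector is obtained by rotating the weights.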

Edge models. Ideal edges are step edges (clean transitions, 1 pixel wide), typical of computer-generated images. In practice, edges in digital images are not clean: because of misfocusing, blurring, and noise in the imaging system, they appear rather as ramps. The more the blurring, the wider the ramp: the slope of the ramp is inversely proportional to the blurring.

Ideal ramps:
- The magnitude of the first derivative can be used to detect a ramp (edge).
- The sign of the second derivative can be used to decide whether a point lies on the dark side or the light side of the edge.

If the ramp is contaminated by noise, the derivative operators can have problems. Edge detection therefore involves three fundamental steps:
1. Smooth the image (to reduce the noise)
2. Detect candidate edge points
3. Localize the edge
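The three steps can be sketched on a noisy 1-D ramp profile (a hypothetical signal): smooth with a small averaging filter, use the first-derivative magnitude as edge strength, and localize the edge at the maximum response.

```python
# Smooth -> detect -> localize, on a 1-D flat/ramp/flat profile with noise.
import numpy as np

rng = np.random.default_rng(0)
signal = np.concatenate([np.zeros(20), np.linspace(0.0, 10.0, 5),
                         10.0 * np.ones(20)])
noisy = signal + rng.normal(0.0, 0.3, signal.size)

# 1. Smooth (here a 5-point moving average; a Gaussian is also common).
kernel = np.ones(5) / 5.0
smooth = np.convolve(noisy, kernel, mode="valid")  # avoids border artifacts

# 2. Detect candidate edge points: first-derivative magnitude.
d1 = np.abs(np.diff(smooth))

# 3. Localize the edge at the maximum response (+2 recenters the index
#    into the original signal after the "valid" convolution).
edge_pos = int(np.argmax(d1)) + 2
```

Without step 1, the derivative of the raw noise can compete with the ramp's response; smoothing first is what makes the localization reliable.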