ECEN 447 Digital Image Processing


1 ECEN 447 Digital Image Processing Lecture 8: Segmentation and Description Ulisses Braga-Neto ECE Department Texas A&M University

2 Image Segmentation and Description Image segmentation and description are the essential components of image analysis: the quantification of images for object recognition and image understanding. Segmentation partitions an image into its constituent regions or objects. This is a hard problem to solve, except in trivial cases. Segmentation accuracy determines the eventual success of any image analysis task, such as industrial inspection, which relies on correct identification of the objects in the image. Image description techniques, on the other hand, summarize the regions or objects found by segmentation into a few characteristics (called features). These can be based on the boundary or on the region itself.

3 Segmentation Approaches Segmentation based on discontinuity (edge-based): original image, edge image, segmentation. Segmentation based on similarity (region-based): original image, edge image (not a good idea), region-based segmentation.

4 Edge Detection As we mentioned in connection with image sharpening, derivative operators are used to detect sharp intensity variations (edges). Edge models: step, ramp, and "roof".

5 Derivative Filters A variety of masks implement derivative filters for edge detection: edge-detecting masks, simple 1-D edge-detecting masks, and diagonal edge-detecting masks.

6 Gradient Edge-detecting operators g_x and g_y, for horizontal and vertical edges, respectively, can be combined to form the gradient vector. As we saw before, the gradient points in the direction of maximum change. Its magnitude is an image that gives the edge strength, while the gradient angle is an image that gives the direction orthogonal to the edge.
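The standard definitions (filling in the formulas the slide refers to, in the usual Gonzalez & Woods notation) are

$$\nabla f = \begin{bmatrix} g_x \\ g_y \end{bmatrix} = \begin{bmatrix} \partial f/\partial x \\ \partial f/\partial y \end{bmatrix}, \qquad M(x,y) = \sqrt{g_x^2 + g_y^2}, \qquad \alpha(x,y) = \tan^{-1}\!\left(\frac{g_y}{g_x}\right),$$

so the edge direction at (x,y) is orthogonal to the gradient direction α(x,y).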

7 Gradient Example original image, Sobel g_x, Sobel g_y, gradient g_x + g_y

8 The Role of Noise in Edge Detection Because the filters employed are derivative filters, they amplify noise. Prior application of image smoothing is thus essential.

9 Gradient with Smoothing Example original image smoothed with a 5x5 averaging filter, Sobel g_x, Sobel g_y, gradient g_x + g_y

10 Combining Gradient with Thresholding The gradient image can be thresholded to produce a binary image indicating the location of edges. gradient without smoothing thresholded at 33%, gradient with smoothing thresholded at 33%

11 Laplacian of Gaussian Following the same idea that the image should be smoothed prior to applying derivative filters, one can apply Gaussian smoothing prior to a Laplacian filter. The 2-D Gaussian with zero mean and standard deviation σ, and the resulting Laplacian of the Gaussian (LoG), are given below.
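In their usual (unnormalized) form, these are

$$G(x,y) = e^{-\frac{x^2 + y^2}{2\sigma^2}}, \qquad \nabla^2 G(x,y) = \left[\frac{x^2 + y^2 - 2\sigma^2}{\sigma^4}\right] e^{-\frac{x^2 + y^2}{2\sigma^2}}.$$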

12 Laplacian of Gaussian - II Negative of the Laplacian of Gaussian:

13 Marr-Hildreth Edge Detector The Marr-Hildreth algorithm is a classical procedure for edge detection. It consists of three steps:
1. Apply an n x n mask approximating a Gaussian lowpass filter.
2. Apply a Laplacian mask to the result.
3. Find the zero crossings of the result; these are the edge locations.
The Marr-Hildreth edge detector thus consists of a Laplacian of Gaussian filter followed by zero-crossing detection. The latter step is the key feature of the procedure. The zero crossings are found by examining a 3x3 neighborhood of each point for sign changes; a threshold is used to require changes of a certain minimum magnitude.
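A minimal Python sketch of these steps, assuming SciPy is available; the zero-crossing test below compares opposite neighbors along four directions and is one of several reasonable implementations.

```python
import numpy as np
from scipy import ndimage as ndi

def marr_hildreth(img, sigma=4.0, thresh=0.0):
    """Sketch of the Marr-Hildreth detector: LoG filtering followed by
    zero-crossing detection with an optional minimum-magnitude threshold."""
    log = ndi.gaussian_laplace(img.astype(float), sigma)
    p = np.pad(log, 1, mode="edge")
    h, w = log.shape
    edges = np.zeros((h, w), dtype=bool)
    # compare opposite neighbours along the horizontal, vertical, and two
    # diagonal directions; a sign change of sufficient size marks an edge
    for dr, dc in [(0, 1), (1, 0), (1, 1), (1, -1)]:
        a = p[1 + dr:1 + dr + h, 1 + dc:1 + dc + w]
        b = p[1 - dr:1 - dr + h, 1 - dc:1 - dc + w]
        edges |= (np.sign(a) != np.sign(b)) & (np.abs(a - b) > thresh)
    return edges
```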

14 Marr-Hildreth Edge Detector - Example original image, LoG with sigma = 4 and n = 25, zero crossings with T = 0, zero crossings with T = 4%

15 Canny Edge Detector The Canny algorithm is based on the same basic principle. It consists of three steps:
1. Apply an n x n mask approximating a Gaussian lowpass filter.
2. Compute the gradient magnitude and direction of the result.
3. Process the gradient magnitude image, using the direction information, to detect, thin, and link edges.
Other than step 3, the Canny edge detector is the same as the Marr-Hildreth edge detector, except that it uses the gradient of the Gaussian rather than the zero crossings of the Laplacian of Gaussian.
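As an illustration only (not the course's own code), scikit-image provides an implementation of these steps; the file name below is hypothetical.

```python
from skimage import feature, io

img = io.imread("building.png", as_gray=True)  # hypothetical input image
# sigma controls the Gaussian smoothing of step 1; the two thresholds drive
# the hysteresis used in step 3 to keep strong edges and link weak ones to them
edges = feature.canny(img, sigma=2.0, low_threshold=0.1, high_threshold=0.3)
```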

16 Canny Edge Detector - Example original image thresholded gradient Marr-Hildreth edge detector Canny edge detector

17 Canny Edge Detector - Another Example original image thresholded gradient Marr-Hildreth edge detector Canny edge detector

18 Edge Detection for Color Images The gradient and Laplacian cannot be applied directly to vector functions, only to scalar functions. One possibility is to apply edge detection to each band of a color image and then combine the results. For example, one can compute the magnitude of the gradient of the R, G, and B bands of an RGB color image, and then sum the results to obtain an edge image. Each band can be independently smoothed to improve its gradient, and thresholding can be applied after summation of the gradients, just as before.
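A minimal sketch of this per-band approach, assuming SciPy's Sobel filters; the optional smoothing and final thresholding steps are omitted.

```python
import numpy as np
from scipy import ndimage as ndi

def color_edge_magnitude(rgb):
    """Sum of Sobel gradient magnitudes over the R, G, and B bands."""
    total = np.zeros(rgb.shape[:2])
    for band in range(3):
        f = rgb[..., band].astype(float)
        gx = ndi.sobel(f, axis=1)   # horizontal derivative
        gy = ndi.sobel(f, axis=0)   # vertical derivative
        total += np.hypot(gx, gy)
    return total
```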

19 Edge Detection for Color Images - Example original image sum of gradient magnitudes red gradient magnitude green gradient magnitude blue gradient magnitude

20 Thresholding Thresholding is a region-based segmentation method, which relies on the gray-level intensity values. Thresholding produces a binary image according to the equation below.
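In its standard form (1 for object pixels, 0 for background),

$$g(x,y) = \begin{cases} 1, & f(x,y) > T \\ 0, & f(x,y) \le T. \end{cases}$$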

21 Types of Thresholding In global thresholding, T is a constant. On the other hand, if the value of T changes over the image, we have variable thresholding. There are several kinds of strategies for variable thresholding. For example: Local thresholding: T depends on the grayscale values in a neighborhood of (x,y). Adaptive thresholding: T depends on the coordinates (x,y) themselves.

22 Multiple Thresholding In multiple thresholding, there are three or more modes in the histogram, requiring two or more threshold parameters:
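With two thresholds T_1 < T_2, the segmented image takes one of three values (following the usual convention):

$$g(x,y) = \begin{cases} a, & f(x,y) > T_2 \\ b, & T_1 < f(x,y) \le T_2 \\ c, & f(x,y) \le T_1. \end{cases}$$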

23 Multiple Thresholding - Example original image, histogram, thresholded image with T_1 = 80, T_2 = 177

24 Basic Global Thresholding The idea is to identify the mean intensities of each class and take T to be the middle point between them.

25 Basic Global Thresholding - II In practice, the class means are not known and need to be estimated from the histogram of the image. This can be done by means of the following algorithm:
1. Select an initial estimate for the global threshold T (for example, the overall mean intensity of the image).
2. Apply the threshold T to the image.
3. Compute the mean intensity m_1 of the gray values below T and the mean intensity m_2 of the gray values above T.
4. Compute a new threshold value T = (m_1 + m_2)/2.
5. Repeat steps 2 through 4 until the change in T between successive iterations is smaller than a tolerance ΔT.
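A minimal Python sketch of this algorithm, assuming both classes remain nonempty at every iteration:

```python
import numpy as np

def basic_global_threshold(img, delta_t=0.5):
    """Iterative global threshold selection as described above."""
    T = img.mean()                    # step 1: initial estimate
    while True:
        m1 = img[img <= T].mean()     # step 3: mean of values below T
        m2 = img[img > T].mean()      #         mean of values above T
        T_new = 0.5 * (m1 + m2)       # step 4: midpoint of the two means
        if abs(T_new - T) < delta_t:  # step 5: convergence test
            return T_new
        T = T_new
```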

26 Basic Thresholding - Example Fingerprint imaging: applying the preceding algorithm with initial T equal to the overall image mean and ΔT = 0 leads to a final T = 125. original image, histogram, thresholded image with T = 125

27 Otsu's Thresholding Otsu's method of thresholding automates the choice of the best threshold T as the value that maximizes a criterion of separability between the foreground and background pixel values. Let T = k and consider the histogram of an image {p_i ; i = 0, ..., L-1}. The probabilities that a pixel is assigned to the background and to the foreground are given below.
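In the standard Otsu notation (filling in the formulas the slide refers to),

$$P_1(k) = \sum_{i=0}^{k} p_i, \qquad P_2(k) = \sum_{i=k+1}^{L-1} p_i = 1 - P_1(k).$$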

28 Otsu's Thresholding - II The mean value of the pixels assigned to the background is a weighted average, with weights given by the histogram values; similarly for the mean value of the pixels assigned to the foreground and for the global mean.
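In the same notation,

$$m_1(k) = \frac{1}{P_1(k)} \sum_{i=0}^{k} i\,p_i, \qquad m_2(k) = \frac{1}{P_2(k)} \sum_{i=k+1}^{L-1} i\,p_i, \qquad m_G = \sum_{i=0}^{L-1} i\,p_i = P_1 m_1 + P_2 m_2.$$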

29 Otsu's Thresholding - III Otsu's method uses, as the criterion of separability to be maximized, a ratio of variances: the between-class variance over the total (global) variance.
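These quantities are

$$\eta(k) = \frac{\sigma_B^2(k)}{\sigma_G^2}, \qquad \sigma_B^2(k) = P_1(k)\,P_2(k)\,\big[m_1(k) - m_2(k)\big]^2, \qquad \sigma_G^2 = \sum_{i=0}^{L-1} (i - m_G)^2\, p_i.$$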

30 Otsu's Thresholding - IV The ratio of variances is dimensionless and between 0 and 1. It is maximal when the between-class variance is maximal, which occurs when the means of background and foreground are well separated. Otsu's method is therefore to pick the best threshold as the value that maximizes the between-class variance. The ratio of variances evaluated at the best threshold serves as a measure of the effectiveness of thresholding.
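In symbols,

$$k^* = \arg\max_{0 \le k \le L-1} \sigma_B^2(k), \qquad \eta^* = \eta(k^*) = \frac{\sigma_B^2(k^*)}{\sigma_G^2}.$$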

31 Otsu's Thresholding - Example original image histogram basic thresholding T = 169 Otsu thresholding T = 181 η = 0.467

32 The Role of Noise in Thresholding Noise smears the histogram, making thresholding more difficult. Example: zero-mean Gaussian noise. no noise std = 10 std = 50

33 The Role of Illumination in Thresholding Nonuniform illumination also makes thresholding more difficult. We saw an example of this in the MM lecture. Example: ramp illumination.

34 Using Smoothing to Improve Thresholding Smoothing reduces noise and makes thresholding easier. Example: zero-mean Gaussian noise (std = 50) and 5x5 avg filter. noisy image histogram Otsu thresholding smoothed image histogram Otsu thresholding

35 Using Edges to Improve Thresholding In some cases, it is necessary to compute a threshold value based only on grayscale information in the edges of an image. Example: small object in noise. noisy image histogram Otsu thresholding smoothed image histogram Otsu thresholding

36 Using Edges to Improve Thresholding - II One can use an edge image to mask the original image, and then compute the optimal T using that. Example: Gradient magnitude with thresholding. noisy image histogram binary edge image masked image histogram Otsu thresholding

37 Using Edges to Improve Thresholding - III Realistic example: segmentation of yeast cell nuclei. original image histogram Otsu thresholding thresholded Laplacian histogram of masked image Otsu thresholding

38 Variable Thresholding by Partitioning One can make the threshold T local simply by computing a different value for different regions of a partition of the image. original image histogram basic thresholding Otsu thresholding partition variable Otsu thresholding

39 Variable Thresholding by Moving Average One can also make the threshold local by computing the value of T based on an average of the values in a neighborhood of a pixel. original image Otsu thresholding moving average original image Otsu thresholding moving average
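A hedged Python sketch of one common variant, in which the average is taken along a zigzag scan of the rows (as is often done for unevenly lit text images); the symmetric running-mean window used here approximates the "last n pixels" average.

```python
import numpy as np

def moving_average_threshold(img, n=21, c=0.5):
    """Threshold each pixel against c times a moving average along a zigzag scan."""
    work = img.astype(float).copy()
    work[1::2] = work[1::2, ::-1]                 # zigzag: reverse every other row
    z = work.ravel()
    avg = np.convolve(z, np.ones(n) / n, mode="same")
    out = (z > c * avg).reshape(img.shape)
    out[1::2] = out[1::2, ::-1]                   # undo the zigzag
    return out
```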

40 Color Image Thresholding Thresholding of a color image can be accomplished by comparing d(z,a), the distance of a point z in RGB space to a given fixed point a, against a threshold T. This defines a ROI centered at a. For example: the Euclidean distance, the Mahalanobis distance (where C is a given matrix, usually a covariance matrix), and the maximum or "chessboard" distance.
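Written out, the segmentation rule and the three distances (in their standard forms) are

$$g(x,y) = \begin{cases} 1, & d\big(\mathbf{z}(x,y), \mathbf{a}\big) \le T \\ 0, & \text{otherwise,} \end{cases}$$

$$d_E(\mathbf{z},\mathbf{a}) = \|\mathbf{z}-\mathbf{a}\|, \qquad d_M(\mathbf{z},\mathbf{a}) = \big[(\mathbf{z}-\mathbf{a})^T C^{-1} (\mathbf{z}-\mathbf{a})\big]^{1/2}, \qquad d_{\max}(\mathbf{z},\mathbf{a}) = \max_i |z_i - a_i|.$$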

41 Color Image Thresholding - II The previous distances correspond to the following ROIs in RGB space: a sphere (Euclidean distance), an ellipsoid (Mahalanobis distance), and a box (chessboard distance).

42 Color Image Thresholding - Example Chessboard distance is used, with values of a and T derived directly from the image by specifying a region containing the desired colors. original image thresholded image

43 Morphological Watershed The watershed transformation is a method for image segmentation originally proposed in the context of Mathematical Morphology. It is based on the simple idea of watersheds of a topographical surface. In geography, the main rivers and their tributaries partition the land into catchment basins. A catchment basin is defined as a connected region such that any drop of water placed at a point of it flows down to the same regional minimum, and not into any other region. The borders between the catchment basins are the watershed lines. Watershed lines are therefore crest lines that separate the basins. Alternatively, the watershed lines can be found as dams in a flooding simulation, where water rises from each regional minimum and a dam is built wherever the rising waters from different basins would merge.

44 Watershed - Example The artificial image below has three regional minima, which produce a watershed segmentation with three catchment basins.

45 Flooding Simulation original image original image viewed as a topographic surface beginning of flooding further flooding

46 Flooding Simulation - II more flooding further flooding and beginning of dam construction more flooding and longer dams final watershed lines overlaid on original image

47 Watershed of Gradient In practice, the watershed is applied on the (magnitude of the) gradient of an image, whose crests locate the boundaries between objects. original image gradient image watershed lines watershed lines overlaid on original image

48 Oversegmentation In practice, due to noise and the fact that each minimum produces one catchment basin, direct application of the watershed method produces oversegmentation. original image watershed of gradient = oversegmentation

49 Marker-Based Watershed The oversegmentation problem can be overcome by specifying ("imposing") the minima one wants on the image, while eliminating all other undesirable minima. This can be done by means of internal and external markers and a closing by reconstruction operation. This is sometimes called a homotopy modification, as the markers become the only minima of the image. The markers can be specified by a human operator (in which case the process is semi-automatic), or they can be obtained directly from the image itself for a fully automatic procedure. This is similar in spirit to the positive effect of smoothing applied prior to using derivative filters for edge detection.

50 Marker-Based Watershed - Example In this example, the internal markers are simply the regional minima of a smoothed version of the image, while the external markers are the watershed lines. After homotopy modification, the watershed is applied again to obtain the final result. original image with overlaid markers marker-based watershed

51 Marker-Based Watershed - Another Example Segmentation of the heel bone in a magnetic-resonance image of the foot. original foot MR image magnitude of gradient using Sobel operators original image with overlaid markers gradient after imposition of minima marker-based watershed line result overlaid on original image

52 Marker-Based Watershed - Yet Another Example Segmentation using markers specified manually. original image internal and external markers overlaid on image result overlaid on original image (from Serge Beucher's watershed website at Centre de Morphologie Mathematique - Paris)

53 Watershed for Binary Segmentation Binary segmentation refers to the separation of touching or overlapping objects in a binary image. Rather than the image gradient, here the watershed is applied on the distance transform of the binary image. original image distance transform of grains result overlaid on original image (from Serge Beucher's watershed website at Centre de Morphologie Mathematique - Paris)
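A hedged sketch using SciPy and scikit-image: markers are placed at the peaks of the distance transform and the watershed floods its negation, which is the usual practical recipe (the footprint size and other details are application-dependent).

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_touching_objects(binary):
    """Separate touching blobs in a boolean image via the watershed of the
    negated distance transform; returns a label image."""
    dist = ndi.distance_transform_edt(binary)            # distance to background
    coords = peak_local_max(dist, footprint=np.ones((7, 7)),
                            labels=ndi.label(binary)[0])
    markers = np.zeros(dist.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    return watershed(-dist, markers, mask=binary)        # flood from the markers
```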

54 Image Description Basics The last step in the image processing and analysis pipeline is often image description. The objective is usually to produce a short numeric vector that quantifies the objects obtained by segmentation. It produces the raw material for image-based pattern recognition. This numeric vector is often called a feature vector, and image description is called feature extraction. Because it compresses information, image description is also called dimensionality reduction. Similarly to segmentation, there are two main approaches: boundary descriptors and regional descriptors. Regardless of the approach, it is important that the descriptors capture the shape of the object rather than its translation, rotation, and scale. Thus, normalization and invariance with respect to these factors are important.

55 Boundary Following In all boundary description, it is necessary to obtain the sequence of pixels on the boundary of the object. The following boundary-tracking algorithm produces the ordered sequence of pixels on the outer boundary of a binary object:
1. Let b_0 (starting point) = the uppermost, leftmost foreground point.
2. Let c_0 = the west neighbor of b_0 (this must be a background point).
3. Proceed through the 8-neighbors of b_0 clockwise, starting from c_0, until a foreground pixel is found. Call this b_1 and call the last background pixel visited c_1.
4. Obtain b_2 and c_2 from b_1 and c_1 in the same fashion.
5. Continue in the same fashion until b_k = b_0 and b_{k+1} = b_1; then stop and return the sequence of pixels {b_0, b_1, ..., b_{k-1}}.
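A minimal Python sketch of this tracking procedure, assuming a single 8-connected object that does not touch the image border; for brevity the stopping test is simplified to "return to b_0", whereas the full test above also requires b_{k+1} = b_1.

```python
import numpy as np

# 8-neighbour offsets (row, col) of a pixel in clockwise order, starting from "west"
OFFSETS = [(0, -1), (-1, -1), (-1, 0), (-1, 1),
           (0, 1), (1, 1), (1, 0), (1, -1)]

def trace_boundary(img):
    """Ordered outer-boundary pixels of the foreground object in a binary image."""
    rows, cols = np.nonzero(img)
    b0 = (int(rows[0]), int(cols[0]))      # uppermost, leftmost foreground point
    b, c = b0, (b0[0], b0[1] - 1)          # c0 = west neighbour (background)
    boundary = [b0]
    while True:
        # scan the 8-neighbours of b clockwise, starting just after c
        k = OFFSETS.index((c[0] - b[0], c[1] - b[1]))
        for i in range(1, 9):
            dr, dc = OFFSETS[(k + i) % 8]
            if img[b[0] + dr, b[1] + dc]:              # first foreground neighbour
                pr, pc = OFFSETS[(k + i - 1) % 8]
                c = (b[0] + pr, b[1] + pc)             # last background pixel visited
                b = (b[0] + dr, b[1] + dc)
                break
        if b == b0:                                    # simplified stopping test
            return boundary
        boundary.append(b)
```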

56 Boundary Following - Example The first figure shows the first few steps of the algorithm. The second shows why the stopping condition must be b_k = b_0 and b_{k+1} = b_1: stopping as soon as the starting point is revisited would terminate too early on objects that are one pixel wide.

57 Chain Codes Once the sequence of boundary pixels is found, one can code it by the directions of the displacement between one pixel and the next. 4-direction code, 8-direction code, sub-sampling scheme, resulting 8-code
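A small sketch that converts the (possibly subsampled) boundary sequence into an 8-direction Freeman chain code. It assumes consecutive points are 8-adjacent, uses (row, column) coordinates, and follows the convention that code 0 points east with codes increasing counter-clockwise.

```python
# displacement (drow, dcol) -> Freeman 8-direction code
DIR8 = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
        (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

def chain_code(boundary):
    """8-direction chain code of a closed boundary given as a list of (row, col) points."""
    closed = boundary + boundary[:1]      # wrap around to close the contour
    return [DIR8[(r1 - r0, c1 - c0)]
            for (r0, c0), (r1, c1) in zip(closed, closed[1:])]
```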

58 Chain Codes - Example The following example illustrates segmentation, followed by boundary extraction, subsampling, and chain-code representation.

59 Boundary Descriptors Once a suitable boundary representation has been obtained, the next step is to obtain a short feature vector that describes it. Several simple alternatives are possible: The length is the simplest descriptor; in a chain code, the number of horizontal and vertical components plus sqrt(2) times the number of diagonal components gives the length. The diameter is defined as the maximum distance between any two points on the boundary (see below). The line corresponding to the diameter is the major axis. The minor axis is a line perpendicular to it and passing through the centroid of the shape. The ratio of the two axes is the eccentricity. The axes also define the basic box.
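In symbols (D is a distance measure and p_i, p_j are boundary points),

$$\mathrm{Diam}(B) = \max_{i,j}\, D(p_i, p_j).$$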

60 Signatures A signature is a 1-D functional representation of a boundary. One common form is to plot the distance from the centroid to the boundary as a function of the angle.

61 Signatures - Example shape, boundary, and signature (shown for two example shapes)

62 Shape Numbers Shape numbers are based on the first difference of a chain code. The shape number is the first difference of smallest magnitude. The order of a shape number is its number of digits.
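A small sketch for a 4-direction chain code: the circular first difference counts direction changes modulo 4, and the shape number is the rotation of that difference of smallest magnitude when read as an integer.

```python
def shape_number(code4):
    """Shape number of a closed 4-direction chain code (list of ints 0-3)."""
    n = len(code4)
    diff = [(code4[i] - code4[i - 1]) % 4 for i in range(n)]  # circular first difference
    rotations = [diff[i:] + diff[:i] for i in range(n)]
    return min(rotations)                                     # smallest-magnitude rotation
```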

63 Shape Numbers - Example Suppose the order n=18 is specified for the shape number. One should find the basic rectangle and then discretize as a 6x3 = 18 grid. The chain code is computed and from it the shape number is derived.

64 Fourier Descriptors The idea behind Fourier descriptors is simple, but powerful. One represents the boundary pixels of an object as points in the complex plane, and computes the DFT of the resulting sequence. The DFT coefficients are the Fourier descriptors.
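With K boundary points (x_k, y_k), the complex representation and its DFT are, in standard form,

$$s(k) = x(k) + j\,y(k), \qquad a(u) = \sum_{k=0}^{K-1} s(k)\, e^{-j 2\pi u k / K}, \quad u = 0, 1, \ldots, K-1.$$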

65 Fourier Descriptors - II Fourier descriptors have nice properties that help deal with translation, rotation, and scaling issues. Not all K Fourier descriptors need to be kept: keeping only the first P descriptors and inverting leads to a smoothed approximation of the boundary.

66 Fourier Descriptors - Example 2868 FD (100%) 1434 FD (50%) 286 FD (10%) 144 FD (5%) 72 FD (2.5%) 36 FD (1.25%) 18 FD (0.63%) 8 FD (0.28%)

67 Regional Descriptors Rather than extracting a boundary and describing that, it is possible to define descriptors of the region itself that corresponds to an object. Several simple alternatives are possible: The area (number of pixels) is the simplest descriptor. The compactness is the dimensionless ratio (perimeter)^2 / area, where the perimeter is the length of the boundary. The mean and median intensity levels of pixels in the region.

68 Topological Descriptors The topological properties of regions are by definition invariant to translation, rotation, and scaling, and even to continuous stretching (which does not involve tearing or joining). In particular, topological properties do not depend in general on any given distance measure. The simplest example is the number of holes. The following region has two holes, a count that does not change under any transformation of the region, as long as the transformation does not involve tearing or joining.

69 Topological Descriptors - II The number of connected components is another useful example of a topological descriptor. three connected components The Euler number is defined as the number of connected components minus the number of holes. Euler number = 0, Euler number = -1

70 Texture As mentioned previously, texture is a very important cue in image analysis, both for computers and for humans. A textural descriptor provides information on the smoothness, coarseness, and regularity of textures. smooth coarse regular

71 Histogram-Based Texture Descriptors The histogram {p(z_i); i = 0, ..., L-1} of a texture region contains valuable information about the texture. The n-th moment about the mean, or n-th central moment, is given below, where m is the mean intensity. For example, the 2nd central moment is the familiar variance, while the 3rd central moment gives the skewness of the histogram.
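The formulas referred to above, with z denoting intensity and p(z_i) the normalized histogram, are

$$\mu_n = \sum_{i=0}^{L-1} (z_i - m)^n\, p(z_i), \qquad m = \sum_{i=0}^{L-1} z_i\, p(z_i).$$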

72 Histogram-Based Texture Descriptors - II One can also define the "uniformity" as well as the entropy of the histogram. The table on this slide gives several histogram-based descriptors for the textures shown in the previous figure.
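In the same notation, these are

$$U = \sum_{i=0}^{L-1} p^2(z_i), \qquad e = -\sum_{i=0}^{L-1} p(z_i) \log_2 p(z_i).$$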

73 Co-Occurrence Matrix Given a texture with L levels of intensity, the co-occurrence matrix has at its general position g_ij the number of pixel pairs with intensities z_i and z_j that occur in the relative position specified by a predicate Q. For example, let L = 8 and Q = "one pixel immediately to the right."
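A small Python sketch for this particular predicate, assuming the image holds integer levels 0 to L-1:

```python
import numpy as np

def cooccurrence(img, levels):
    """Grey-level co-occurrence matrix for Q = 'one pixel immediately to the right'."""
    G = np.zeros((levels, levels), dtype=int)
    left = img[:, :-1].ravel()      # z_i: every pixel that has a right neighbour
    right = img[:, 1:].ravel()      # z_j: that right neighbour
    np.add.at(G, (left, right), 1)  # count each (z_i, z_j) pair
    return G
```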

74 Co-Occurrence Matrix - II A co-occurrence matrix can be normalized by dividing all elements by the sum of all elements, p_ij = g_ij / n, where n = Σ g_ij. Based on this, several measures can be defined, as listed below.
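The usual choices (the slide's table presumably lists a similar set) are the maximum probability, contrast, uniformity (energy), homogeneity, and entropy:

$$\max_{i,j} p_{ij}, \qquad \sum_{i,j} (i-j)^2 p_{ij}, \qquad \sum_{i,j} p_{ij}^2, \qquad \sum_{i,j} \frac{p_{ij}}{1 + |i-j|}, \qquad -\sum_{i,j} p_{ij} \log_2 p_{ij}.$$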

75 Co-Occurrence Matrix - Example texture co-occurrence matrix random periodic mixed patterns

76 Co-Occurrence Matrix - Example (Cont'd) The following table gives some descriptors evaluated from the co-occurrence matrices for the textures on the previous slide.
