Operators Based on Second Derivative

The principle of edge detection based on the second derivative is to mark as edge points only those points at which the second derivative has a zero crossing, i.e., where the gradient magnitude attains a local maximum. The Laplacian operator is the most commonly used second-derivative edge operator.

Laplacian Operator

The Laplacian of an image f(x, y) is expressed by

\nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}

Laplacian of Gaussian (LOG) Operator

To reduce the sensitivity to noise, the Laplacian of Gaussian (LOG) operator can be used. LOG first performs Gaussian smoothing, which is followed by the Laplacian operation.

Difference of Gaussian (DOG) Operator

It is possible to approximate the LOG filter by taking the difference of two Gaussians of different widths. The DOG operator is implemented by convolving the image with a mask obtained by subtracting two Gaussian masks with two different sigma values.
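As a concrete illustration of the DOG approximation and the zero-crossing idea, here is a minimal sketch in Python, assuming NumPy and SciPy are available; the sigma values and the simple neighbour-based zero-crossing test are illustrative choices, not prescribed by the text.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_response(image, sigma1=1.0, sigma2=1.6):
    """Approximate the LOG response by a difference of two Gaussians.
    sigma1 < sigma2; the ratio of about 1.6 is a common (assumed) choice."""
    img = image.astype(np.float64)
    # Smooth the image with two Gaussians of different widths and subtract
    return gaussian_filter(img, sigma1) - gaussian_filter(img, sigma2)

def zero_crossings(response):
    """Mark pixels where the DOG response changes sign with a horizontal
    or vertical neighbour; these are the candidate edge points."""
    sign = response > 0
    edges = np.zeros_like(sign, dtype=bool)
    edges[:, :-1] |= sign[:, :-1] != sign[:, 1:]
    edges[:-1, :] |= sign[:-1, :] != sign[1:, :]
    return edges
```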

Limitations of Edge-Based Segmentation

The principal limitations of edge detection methods are:
(a) The edges extracted using the classical methods do not necessarily correspond to object boundaries. In many low-quality images, captured using low-quality imaging devices, the conventional methods produce spurious edges and gaps.
(b) The edge detection techniques depend on the information contained in the local neighborhood of the image. Most edge detection techniques do not consider model-based information embedded in the image.
(c) In most cases the edge detection strategies ignore the higher-order organization that may be meaningfully present in the image.
(d) After the edge points are extracted from the image, they are linked in order to determine boundaries. This is usually done by first associating edge elements into edge segments and then associating the segments into boundaries. The edge linking process sometimes leads to discontinuities and gaps in the detected boundaries.
(e) The edge linking methods use arbitrary interpolation in order to complete boundary gaps.
(f) It is often difficult to identify and classify spurious edges.

IMAGE THRESHOLDING TECHNIQUES

The thresholding operation involves the identification of a set of optimal thresholds, based on which the image is partitioned into several meaningful regions.

Bi-level Thresholding

Bi-level thresholding is employed on images that have bimodal histograms. In bi-level thresholding, the object and the background form two groups with distinct gray levels. All gray values greater than or equal to a threshold T are assigned the object label, and all other gray values are assigned the background label, thus separating the object pixels from the background pixels. Thresholding is thus a transformation of an input image A into a segmented output image B as follows:

b_{ij} = 1 for a_{ij} \ge T, and b_{ij} = 0 for a_{ij} < T,

where T is the threshold.
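A minimal sketch of this bi-level transformation, assuming the image is a NumPy array and the threshold T is supplied externally (for instance from a histogram valley):

```python
import numpy as np

def bilevel_threshold(a: np.ndarray, T: int) -> np.ndarray:
    """Return B with b_ij = 1 where a_ij >= T (object) and 0 elsewhere (background)."""
    return (a >= T).astype(np.uint8)

# Hypothetical usage on an 8-bit grayscale image `img`:
# mask = bilevel_threshold(img, T=128)
```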

Multilevel Thresholding

In multilevel thresholding, the image is partitioned into different segments using multiple threshold values. The histograms in such cases are multimodal, with valleys between the modes.

Entropy-Based Thresholding

Entropy-based thresholding is widely used in bi-level thresholding. Entropy, as defined by Shannon, is a measure of the information in an image. Variants of Shannon's entropy have been used effectively for estimating thresholds in image segmentation. In entropy-based thresholding, the entropies of the foreground (object) and background regions are used for the optimal selection of the threshold. In Kapur's thresholding technique, the foreground and background region entropies are

H_f(T) = -\sum_{g=0}^{T} \frac{p(g)}{P_f} \ln \frac{p(g)}{P_f}   (7.5)

H_b(T) = -\sum_{g=T+1}^{L-1} \frac{p(g)}{P_b} \ln \frac{p(g)}{P_b}   (7.6)

where the foreground gray values range from 0 to T and the background gray values lie in [T+1, L-1] in an L-level gray image. In Eqs. 7.5 and 7.6, p(g) = h(g)/N is the probability mass function, where h(g) is the histogram count of gray value g and N is the total number of pixels. The foreground and background area probability values are

P_f = \sum_{g=0}^{T} p(g),   P_b = \sum_{g=T+1}^{L-1} p(g)

The gray value T that maximizes the sum H_f + H_b is used as the threshold; the strategy thus maximizes the total entropy of the foreground and background regions. Renyi's entropy has also been used for image thresholding. Renyi's entropy of order p (p > 0, p \ne 1) for the foreground and background regions is

H_f^{(p)} = \frac{1}{1-p} \ln \sum_{g=0}^{T} \left( \frac{p(g)}{P_f} \right)^{p},   H_b^{(p)} = \frac{1}{1-p} \ln \sum_{g=T+1}^{L-1} \left( \frac{p(g)}{P_b} \right)^{p}

In this thresholding technique, the total entropy of the foreground and background regions is computed for various values of p, and the value of p that yields the best thresholding results is chosen.
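A minimal sketch of Kapur's criterion (Eqs. 7.5 and 7.6), assuming an 8-bit grayscale NumPy image (L = 256); the epsilon guard against empty bins is an implementation detail, not part of the text.

```python
import numpy as np

def kapur_threshold(img: np.ndarray, L: int = 256) -> int:
    """Pick the threshold T that maximizes H_f(T) + H_b(T)."""
    h, _ = np.histogram(img, bins=L, range=(0, L))
    p = h / h.sum()                      # p(g) = h(g) / N
    eps = 1e-12                          # guard against log(0) and empty classes
    best_T, best_H = 0, -np.inf
    for T in range(L - 1):
        Pf, Pb = p[:T + 1].sum(), p[T + 1:].sum()
        if Pf < eps or Pb < eps:
            continue
        qf = p[:T + 1] / Pf              # normalized foreground distribution
        qb = p[T + 1:] / Pb              # normalized background distribution
        Hf = -np.sum(qf * np.log(qf + eps))
        Hb = -np.sum(qb * np.log(qb + eps))
        if Hf + Hb > best_H:
            best_H, best_T = Hf + Hb, T
    return best_T
```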

REGION GROWING

Region growing refers to a procedure that groups pixels or subregions into larger regions. Starting with a set of seed points, regions are grown from these seeds by appending to each seed those neighboring pixels that have similar attributes, such as intensity, gray-level texture, or color.

Region Adjacency Graph

The adjacency relation among the regions in a scene can be represented by a region adjacency graph (RAG). The regions in the scene are represented by a set of nodes N = {N_1, N_2, ..., N_n} in the RAG, where node N_i represents the region R_i in the scene, and the properties of region R_i are stored in the node data structure of N_i. The edge e_ij between N_i and N_j represents the adjacency between the regions R_i and R_j. Two regions R_i and R_j are adjacent if there exists a pixel in region R_i and a pixel in region R_j that are adjacent to each other. The adjacency can be either 4-connected or 8-connected. The adjacency relation is reflexive and symmetric, but not necessarily transitive. Figure 7.17 shows the adjacency graph of a scene.

Region Merging and Splitting

A segmentation algorithm can produce too many small regions because of fragmentation of a single large region in the scene. In such a situation, the smaller regions need to be merged based on the similarity and compactness of the smaller regions. A simple region merging algorithm is presented below.
Step 1: Segment the image into regions R_1, R_2, ..., R_m using a set of thresholds.
Step 2: Create a region adjacency graph (RAG) from the segmented description of the image.
Step 3: For every R_i, i = 1, 2, ..., m, identify from the RAG all R_j, j ≠ i, such that R_i is adjacent to R_j.
Step 4: Compute an appropriate similarity measure S_ij between R_i and R_j for all such i and j.
Step 5: If S_ij > T, then merge R_i and R_j.
Step 6: Repeat Steps 3 to 5 until no regions remain to be merged according to the similarity criterion.
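A minimal sketch of this merging loop on a RAG, assuming the initial segmentation is given as a label image and using closeness of mean intensities as a hypothetical similarity criterion (difference below T plays the role of S_ij > T); the data structures and the similarity measure are illustrative choices, not prescribed by the text.

```python
import numpy as np

def build_rag(labels: np.ndarray) -> dict:
    """Region adjacency graph as {label: set of 4-connected neighbouring labels}."""
    adj = {int(l): set() for l in np.unique(labels)}
    for a, b in ((labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])):
        diff = a != b
        for u, v in zip(a[diff], b[diff]):
            adj[int(u)].add(int(v))
            adj[int(v)].add(int(u))
    return adj

def merge_regions(img: np.ndarray, labels: np.ndarray, T: float) -> np.ndarray:
    """Iteratively merge adjacent regions whose mean intensities differ by less than T."""
    labels = labels.copy()
    merged = True
    while merged:                          # Step 6: repeat until nothing merges
        merged = False
        adj = build_rag(labels)            # Steps 2-3: adjacency of current regions
        means = {l: img[labels == l].mean() for l in adj}
        for ri, neighbours in adj.items():
            for rj in neighbours:
                # Steps 4-5: merge the pair if their mean intensities are close enough
                if ri < rj and abs(means[ri] - means[rj]) < T:
                    labels[labels == rj] = ri
                    merged = True
                    break
            if merged:
                break
    return labels
```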

Clustering Based Segmentation

Data-driven segmentation techniques can be histogram-oriented or cluster-oriented. Histogram-oriented segmentation produces an individual segmentation for each feature of the multifeature data and then overlaps the segmentation results from the individual features, which tends to produce more fragmented regions. Cluster-oriented segmentation uses the multidimensional data directly to partition the image pixels into clusters. Cluster-oriented techniques may be more appropriate than histogram-oriented ones for segmenting images in which each pixel has several attributes and is represented by a vector.

Each clustering configuration is assigned a value, or cost, that measures its goodness. Usually, the cost of a cluster configuration is its squared error, i.e., the sum of squared Euclidean distances of each point to its cluster center; lower values of this cost indicate better clusterings. This cost surface is complicated in nature, with many poor local minima.

K-means is a popular clustering-based segmentation method in which each pixel is iteratively assigned to the nearest cluster center and the cluster center positions are then recalculated. The cost decreases after each iteration until the cluster configuration converges to a stable state, at which point the cost is at a local minimum.
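A minimal sketch of K-means segmentation on per-pixel feature vectors (here assumed to be RGB values), using only NumPy; the number of clusters, the random initialization, and the stopping rule are illustrative choices, not prescribed by the text.

```python
import numpy as np

def kmeans_segment(image: np.ndarray, k: int = 4, iters: int = 50, seed: int = 0):
    """Cluster pixel feature vectors with K-means and return a label image and the centers."""
    h, w, c = image.shape
    X = image.reshape(-1, c).astype(np.float64)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]   # random initial centers
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Assign each pixel to the nearest cluster center (squared Euclidean distance)
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # Recompute each center as the mean of its assigned pixels
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):   # converged to a stable configuration
            break
        centers = new_centers
    return labels.reshape(h, w), centers
```

Each iteration of the assignment and update steps cannot increase the squared-error cost, which is why the loop terminates at a local minimum of the cost surface, as described above.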