Using the Kolmogorov-Smirnov Test for Image Segmentation


Yong Jae Lee
CS395T Computational Statistics, Final Project Report
May 6th, 2009

I. INTRODUCTION

Image segmentation is a fundamental task in computer vision with applications in many fields, including medical image analysis, object recognition, pedestrian detection, and automated surveillance. The goal is to decompose an image into meaningful regions. While the precise meaning of "meaningful" varies with the specific application, in general the objective is to produce regions that correspond to full, coherent objects. To this end, existing segmentation algorithms group pixels that have similar image characteristics (e.g., color, intensity, and/or texture), with the assumption that image regions corresponding to objects have similar features throughout. Supervised methods incorporate high-level object cues learned from a training set, while unsupervised methods assume no prior knowledge of what defines an object. For unsupervised methods, the number of segments (i.e., model selection) is often a parameter that the user must specify [1]. That is, the user must specify how many objects (or object parts) exist in the image to produce the optimal segmentation. This is often an unrealistic and demanding assumption to make. In this paper, I will use the Kolmogorov-Smirnov (K-S) test to perform unsupervised image segmentation. The proposed method does not require the user to specify the number of segments for each image. The K-S test is a non-parametric statistical test which can be used to determine whether a data sample is drawn from some underlying reference probability distribution (one-sample test) or whether two data samples are drawn from the same probability distribution (two-sample test) [2]. I will use the two-sample test in conjunction with agglomerative clustering.
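The two-sample test just described can be sketched in code. The following is a minimal illustration, not the report's actual implementation: it computes the K-S statistic D as the largest gap between the two empirical CDFs and approximates the two-sided p-value with the standard asymptotic series (the sample data here is made up for demonstration).

```python
import math

def ks_statistic(sample1, sample2):
    """Two-sample K-S statistic: the largest absolute gap between the ECDFs."""
    n1, n2 = len(sample1), len(sample2)
    d = 0.0
    for x in sorted(set(sample1) | set(sample2)):
        # ECDF value at x = fraction of sample points <= x
        f1 = sum(1 for v in sample1 if v <= x) / n1
        f2 = sum(1 for v in sample2 if v <= x) / n2
        d = max(d, abs(f1 - f2))
    return d

def ks_pvalue(d, n1, n2, terms=100):
    """Asymptotic two-sided p-value via the Kolmogorov series approximation."""
    ne = n1 * n2 / (n1 + n2)
    lam = (math.sqrt(ne) + 0.12 + 0.11 / math.sqrt(ne)) * d
    if lam < 1e-9:  # identical samples: no evidence against the null hypothesis
        return 1.0
    q = 2.0 * sum((-1) ** (j - 1) * math.exp(-2.0 * j * j * lam * lam)
                  for j in range(1, terms + 1))
    return min(max(q, 0.0), 1.0)

# Two clearly shifted samples: large D, tiny p-value (null hypothesis rejected)
s1 = list(range(100))
s2 = [x + 50 for x in range(100)]
d = ks_statistic(s1, s2)            # 0.5
p = ks_pvalue(d, len(s1), len(s2))  # far below any usual significance level
```

The same statistic and approximation are defined formally in Section III-A.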
Specifically, after forming a complete binary merger tree based on feature similarities between regions, the test will be used to prune erroneous merges. The test will determine whether two regions belong to the same probability distribution and hence should remain merged. In my experiments, I will perform image segmentation on the Microsoft Research Cambridge dataset [3] and compare against a well-known unsupervised image segmentation technique called Normalized Cuts [1].

II. RELATED WORK

The K-S test has been applied previously to various computer vision applications, such as image comparison, category discovery, and image segmentation. In [4], the author uses the K-S test to test the hypothesis that two images have the same grayscale intensity distributions. In [5], the authors use the K-S test to distinguish images containing objects of different categories. They use agglomerative clustering to merge similar images to create a binary merger tree, and use the K-S test to prune erroneous mergers. My method is most influenced by the method of [5]. However, the proposed method will be used for image segmentation rather than for category clustering. The K-S test has also been used previously for image segmentation in [6]. The authors produce 1-dimensional histograms for each subspace of the data (e.g., each of the R, G, and B color dimensions) and directly cluster the histograms by locating local maxima and minima. The authors use the K-S test to simplify the clustering by identifying the simplest density function that fits the data, such that each pixel can be assigned to its nearest cluster center in the modified distribution. In contrast, the proposed method will not consider each 1-dimensional subspace of the data separately (since the optimal choice is image-dependent and cannot be intuitively determined). Instead, the proposed approach will map the multi-dimensional data to a 1-dimensional space. More importantly, I will use the K-S test to merge segments rather than to approximate a fit. Many methods have been proposed for unsupervised image segmentation. Some of the most famous are Normalized Cuts [1], Mean-Shift [7], and the graph-based method by Felzenszwalb and Huttenlocher [8]. Comaniciu and Meer [7] employed the mean-shift algorithm, a non-parametric technique for analyzing and finding arbitrarily shaped clusters in feature space, for image segmentation.
Felzenszwalb and Huttenlocher [8] proposed an efficient graph-based image segmentation method. It is a greedy method that runs in approximately linear time in the number of edges in the graph. Shi and Malik proposed the Normalized Cuts [1] algorithm. It is a graph-theoretic clustering method, where each pixel is a vertex in the graph and each edge is weighted by the similarity between the pixels it connects. The image is segmented into disjoint segments by optimally partitioning the graph such that intra-cluster similarity and inter-cluster dissimilarity are maximized. In this paper, I will compare my method against Normalized Cuts. Note that the Normalized Cuts method requires user input for the number of segments for each image, and the results can often be very sensitive to the chosen parameter. In contrast, my method will only require the user to specify the global significance level for the test statistic for the entire dataset.
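As a concrete illustration of the Normalized Cuts criterion itself (not of the eigenvector-based solver that Shi and Malik actually use), the sketch below evaluates Ncut(A, B) = cut(A, B)/assoc(A, V) + cut(A, B)/assoc(B, V) on a hypothetical 4-pixel similarity graph; the partition separating the two tightly connected clusters attains the lowest value.

```python
import itertools

def ncut_value(W, A):
    """Normalized cut value for partition (A, B) of vertices 0..n-1.
    W is a symmetric similarity (weight) matrix; lower Ncut is better."""
    n = len(W)
    B = [i for i in range(n) if i not in A]
    cut = sum(W[i][j] for i in A for j in B)
    assoc_A = sum(W[i][j] for i in A for j in range(n))
    assoc_B = sum(W[i][j] for i in B for j in range(n))
    return cut / assoc_A + cut / assoc_B

# Toy graph: two tight clusters {0,1} and {2,3} joined by weak links.
W = [[0, 9, 1, 0],
     [9, 0, 0, 1],
     [1, 0, 0, 9],
     [0, 1, 9, 0]]

# Exhaustive search over nontrivial partitions (feasible only for toy graphs);
# the best cut separates the two clusters, with Ncut = 2/20 + 2/20 = 0.2.
best = min((frozenset(A) for r in range(1, 4)
            for A in itertools.combinations(range(4), r)),
           key=lambda A: ncut_value(W, A))
```

The real algorithm avoids this combinatorial search by relaxing the problem to a generalized eigenvalue problem on the graph Laplacian.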

III. APPROACH

A. Background: Two-sample Kolmogorov-Smirnov test

In the two-sample test, the null hypothesis H_0 is that the two samples are drawn from the same distribution, and the alternative hypothesis H_a is that they are drawn from different distributions. Using the notation from [2], let f_{n1}(x) and f_{n2}(x) be two histograms (samples) of size n1 and n2 drawn from two continuous probability density functions, f_1(x) and f_2(x), respectively. The null hypothesis, H_0, and alternative hypothesis, H_a, are:

H_0: f_1(x) = f_2(x) for all x
H_a: f_1(x) ≠ f_2(x) for some x

We can compute the empirical cumulative distribution functions (ECDFs), F_{n1}(x) and F_{n2}(x), as:

F_{n1}(x_r) = Σ_{x=x_0}^{x_r} f_{n1}(x)

F_{n2}(x_r) = Σ_{x=x_0}^{x_r} f_{n2}(x)

The K-S test statistic, D, is defined as the maximum absolute distance between the two ECDFs:

D = max_x |F_{n1}(x) − F_{n2}(x)|

Kolmogorov and Smirnov showed that the two-sided p-value can be approximated as:

Q_{K-S}(λ_{K-S}) = 2 Σ_{j=1}^{∞} (−1)^{j−1} exp(−2 j² λ_{K-S}²),

where λ_{K-S} = [√N_e + 0.12 + 0.11/√N_e] D and N_e = n1·n2/(n1 + n2). The null hypothesis is rejected if the test is significant at level α, i.e., if Q_{K-S}(λ_{K-S}) < α. The test is non-parametric in that no assumption is made concerning the distribution of the variables or the form of the difference between the two empirical density functions.

B. Image Segmentation with the K-S test

The proposed algorithm is summarized in Figure 1. First, I oversegment an image into small homogeneous regions called superpixels [9]. Superpixels are small homogeneous groups of pixels that preserve object boundaries. They are much more efficient to work with than pixels, since a typical image will be comprised of hundreds of superpixels compared to thousands of pixels. The specific number of superpixels is a user-selected parameter to the algorithm.

Fig. 1. Summary of the system model. First, the input image is oversegmented into superpixels [9]. Then, agglomerative clustering based on chi-square distances is used to merge segments. Finally, the K-S test is used to prune erroneous merges to produce the final image segmentation.

For each superpixel, color and texture features are computed using the Lab-space color pixel values and image filter responses, respectively. Hence, each superpixel can be represented by multi-dimensional color features or multi-dimensional texture features. To compactly represent each region (i.e., a superpixel or merged superpixels) in the image, I generate a codebook to map each multi-dimensional feature to a 1-dimensional histogram. Each index in the codebook represents a cluster center in the feature space, obtained using k-means on a random subset of the features from the entire image collection. Each n-dimensional feature (n = 3 for color, and n = number of filters for texture) is mapped to the nearest cluster center in the codebook. The final representation of a region is a histogram with c bins, where c = number of codebook cluster centers. Each histogram bin is a count of the pixels in the region that have been mapped to that codebook index. The histogram is normalized to sum to one. To compare the similarity between two regions i and j, I use the chi-square distance between their histograms, h_i and h_j:

χ²(h_i, h_j) = (1/2) Σ_{k=1}^{c} (h_i(k) − h_j(k))² / (h_i(k) + h_j(k))   (1)

Figure 2(a) shows a distance matrix of the computed χ² distances between all superpixels in an example image. Red indicates high distances while blue indicates low distances. Given the distance matrix, I sequentially merge regions using single-link agglomerative clustering. Specifically, at each step, I merge the two regions that have the smallest χ² distance given by Eqn. (1).
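This merge step can be sketched as follows; the region histograms here are hypothetical toy values (the actual histograms come from the codebook described above). The χ² distance of Eqn. (1) is computed for every pair of region histograms, and the closest pair is selected for merging.

```python
def chi_square(h1, h2, eps=1e-12):
    """Chi-square distance of Eqn. (1) between two normalized histograms.
    eps guards against division by zero when a bin is empty in both."""
    return 0.5 * sum((a - b) ** 2 / (a + b + eps) for a, b in zip(h1, h2))

def closest_pair(hists):
    """Index pair with the smallest chi-square distance: the next merge."""
    n = len(hists)
    return min(((i, j) for i in range(n) for j in range(i + 1, n)),
               key=lambda ij: chi_square(hists[ij[0]], hists[ij[1]]))

# Toy codebook histograms for three regions (each normalized to sum to one)
hists = [
    [0.50, 0.50, 0.0],   # region 0
    [0.45, 0.55, 0.0],   # region 1: very similar to region 0
    [0.00, 0.10, 0.9],   # region 2: a different color/texture profile
]
# closest_pair(hists) selects regions 0 and 1 for merging
```

Repeating this selection, and representing each merged pair by a combined histogram, yields the binary merger tree used in the rest of the pipeline.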
Every time two regions get merged, the new region is represented by averaging their respective histograms. Once the complete binary merger tree is formed, the K-S test is used to prune erroneous merges top-down. As explained in Section III-A, the K-S test statistic computes the maximum absolute difference between the empirical cumulative distribution functions of two histograms to determine whether they are drawn from the same distribution (see Figure 2(b)). We can use this to determine whether two regions should remain merged. Each time the null hypothesis is rejected (i.e., it is determined that two regions should not be merged), a region is split into two. The reason that the K-S test is used to prune erroneous mergers top-down, instead of during the bottom-up merging stage, is that the K-S test is more reliable over

Fig. 2. (a) A χ² distance matrix that shows the distance between each pair of superpixels in an example image. (b) The Empirical Cumulative Distribution Functions of two regions in an image, with the K-S statistic denoted by D.

larger regions [5]. The pruning process ends when no null hypothesis is rejected. In this case, multiple-hypothesis correction was not necessary since the regions represented at each level of the tree are independent. That is, while there is dependency in terms of image features between a parent region and its child regions, since they contain overlapping parts of the image, a split at the parent level should not influence (and is therefore independent of) whether the two children should remain merged.

IV. EXPERIMENTS AND ANALYSIS

A. Dataset and Implementation Choices

I tested my method on the Microsoft Research Cambridge (MSRC) dataset [3], which is comprised of 591 color images. Each image has multiple objects belonging to a subset of 23 categories. Pixel-level ground-truth annotation is available, which makes pixel-based evaluation feasible. I represent color features by their 3-dimensional Lab values, and use a filter bank consisting of 12 oriented bar filters at three scales and two isotropic filters to compute texture features. For the color features I quantize the feature space into 69 bins, and for the texture features I quantize the feature space into 400 bins. These numbers are chosen to provide (roughly) good coverage of the feature spaces. As part of the experiments, I compare the tradeoff in accuracy for different feature choices as well as different significance levels.

B. Methodology

To evaluate my method, I treat the image segmentation problem as one of data clustering and use cluster evaluation techniques on the segmentation results. This is the approach taken in [10];

I use the Rand index [11] and the Jaccard index [12]. Specifically, given a set of n objects S = (o_1, ..., o_n), a partition is defined as a set of clusters C = (c_1, ..., c_k), where c_i ⊆ S, c_i ∩ c_j = ∅ if i ≠ j, and ∪_{i=1}^{k} c_i = S. Given two partitions, X = (x_1, ..., x_r) and Y = (y_1, ..., y_s), of the same set of objects S, the following quantities can be measured over all pairs of objects (o_i, o_j), i ≠ j:

(i) f_00 = number of such pairs that fall in different clusters under X and Y.
(ii) f_01 = number of such pairs that fall in the same cluster under Y but not under X.
(iii) f_10 = number of such pairs that fall in the same cluster under X but not under Y.
(iv) f_11 = number of such pairs that fall in the same cluster under X and Y.

Here f_00 + f_01 + f_10 + f_11 = n(n − 1)/2, where n is the total number of objects in S. Intuitively, one can think of f_00 + f_11 as the number of agreements between X and Y, and f_01 + f_10 as the number of disagreements between X and Y. The Rand index is defined as:

R(X, Y) = 1 − (f_11 + f_00) / (f_00 + f_01 + f_10 + f_11)   (2)

The Jaccard index is defined as:

J(X, Y) = 1 − f_11 / (f_11 + f_10 + f_01)   (3)

The distance measures are in the domain [0, 1]; a value of 0 means maximum similarity, while a value of 1 means maximum dissimilarity. The Jaccard index does not give any weight to pairs of objects that belong to different clusters under the two partitions. Hence, its distance measure is generally higher than that of the Rand index. For my evaluation, I chose the two partitions X and Y to be the algorithm segmentation (i.e., the proposed method or Normalized Cuts) and the ground-truth segmentation, respectively. An image in the dataset is represented as S and each pixel in it as an object o_i, where i = {1, ..., n} and n = number of pixels in the image.

C. Discussion

For each algorithm (my method and Normalized Cuts), I obtain Rand and Jaccard index values by comparing it to the ground truth segmentation.
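The pair counts and the two distance measures of Eqns. (2) and (3) can be sketched directly from their definitions (the cluster labelings below are toy examples, not MSRC segmentations):

```python
from itertools import combinations

def pair_counts(X, Y):
    """f00, f01, f10, f11 over all object pairs; X and Y map object index
    to cluster label for the two partitions of the same object set."""
    f00 = f01 = f10 = f11 = 0
    for i, j in combinations(range(len(X)), 2):
        same_x = X[i] == X[j]
        same_y = Y[i] == Y[j]
        if same_x and same_y:
            f11 += 1          # same cluster under both partitions
        elif same_x:
            f10 += 1          # together under X only
        elif same_y:
            f01 += 1          # together under Y only
        else:
            f00 += 1          # apart under both partitions
    return f00, f01, f10, f11

def rand_distance(X, Y):
    f00, f01, f10, f11 = pair_counts(X, Y)
    return 1 - (f11 + f00) / (f00 + f01 + f10 + f11)

def jaccard_distance(X, Y):
    f00, f01, f10, f11 = pair_counts(X, Y)
    return 1 - f11 / (f11 + f10 + f01)
```

Identical partitions give distance 0 under both measures; a partition with no agreeing same-cluster pairs gives Jaccard distance 1 even when the Rand distance stays moderate, reflecting that the Rand index also credits f_00.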
I evaluate the performance of each method by comparing these index values. For the Normalized Cuts algorithm, I used the code provided by the authors. The features used for the Normalized Cuts algorithm are gray-scale pixel values. Note that Normalized Cuts requires the user to specify the number of segments, K, as a parameter to the algorithm. To give it the ideal value, I set K equal to

TABLE I
AVERAGE INDEX MEASURES FOR THE PROPOSED METHOD USING COLOR FEATURES (C) AND TEXTURE FEATURES (T), AND NORMALIZED CUTS (NCUTS) ON THE MSRC DATASET. EACH ROW CORRESPONDING TO MY METHOD SHOWS RESULTS FOR A DIFFERENT CHOICE OF THE NUMBER OF SUPERPIXELS N AND THE SIGNIFICANCE LEVEL α (ORDERED AS [FEATURE TYPE, N, α]). LOWER VALUES ARE BETTER.

Method          Rand           Jaccard
[C, 25, 0]      0.34 ±         ± 0.19
[C, 25, ]       0.42 ±         ± 0.17
[C, 25, 0.05]   0.44 ±         ± 0.13
[C, 100, 0]     0.39 ±         ± 0.20
[C, 100, ]      0.36 ±         ± 0.23
[C, 100, 0.05]  0.40 ±         ± 0.20
[C, 200, 0]     0.44 ±         ± 0.20
[C, 200, ]      0.33 ±         ± 0.22
[C, 200, 0.05]  0.33 ±         ± 0.24
Ncuts           0.32 ±         ± 0.17

Method          Rand           Jaccard
[T, 25, 0]      0.49 ±         ± 0.18
[T, 25, ]       0.46 ±         ± 0.14
[T, 25, 0.05]   0.46 ±         ± 0.07
[T, 100, 0]     0.51 ±         ± 0.18
[T, 100, ]      0.47 ±         ± 0.18
[T, 100, 0.05]  0.47 ±         ± 0.10
[T, 200, 0]     0.52 ±         ± 0.18
[T, 200, ]      0.50 ±         ± 0.21
[T, 200, 0.05]  0.48 ±         ± 0.17
Ncuts           0.32 ±         ± 0.17

the number of segments specified in the ground-truth segmentation. While this gives an unfair advantage to Normalized Cuts, it is a good setting for testing how well my method works. Unlike Normalized Cuts, my method does not require the user to specify K, and instead automatically determines the number of segments for each image. Table I shows the mean and standard deviation of the index values on the MSRC dataset images. Table I (left) shows results for my method when using color features, and Table I (right) shows results when using texture features. Each row in the tables shows results for a different choice of the significance level, α, and the number of superpixels, N. There are some interesting observations to be made from the results. First, for my method, color features produce better segmentations than texture features. This implies that most objects in the dataset can be uniquely defined by their color. Second, the number of superpixels, N, does not have a significant effect on the segmentation results.
This is mainly because my method prunes merge errors top-down, so the size of the regions considered for pruning is (approximately) independent of the size of the original superpixels that the image started with. Third, with increasing significance level, α, the segmentations become much worse. The main reason for this is the size of each sample (the number of pixels within each region), which is on the order of tens of thousands; a typical image in the MSRC dataset has size 320 × 213. Since we prune merges top-down after the binary merger tree is formed, the sample sizes of the considered regions can be quite large. Due to the large sample sizes, even very minor differences in the region feature distributions result in tiny p-values and hence rejection of the null hypothesis. Consequently, for most images, every merge is considered to be an error, which

Fig. 3. Each point represents the distance measure of an image: my algorithm's measure is the x-value and Normalized Cuts' measure is the y-value. Points above the red diagonal are images where my algorithm produced better segmentations (lower index values) than Normalized Cuts. (a) Rand index. (b) Jaccard index.

results in the final segmentations being their initial oversegmented superpixel images (where nothing is merged). Therefore, setting α ≈ 0 to account for the tiny p-values produced the most reasonable results. Figure 4 shows example segmentation results obtained using these settings versus those obtained by Normalized Cuts. The fourth observation is that my method with feature type = color, N = 25, and α ≈ 0 (the best setting) performs better than Normalized Cuts in terms of the Jaccard index, but not in terms of the Rand index. This may be due to the fact that the Rand index gives equal weight to pairs of pixels that were placed in different segments in both partitions, f_00, as to pairs of pixels that were placed in the same segment in both partitions, f_11. Since f_00 only considers whether a pair of pixels was placed in different segments, but not in which segments, the Rand index can (incorrectly) give a higher measure of similarity than the Jaccard index, which does not consider f_00. Figure 3 shows index values computed for the Normalized Cuts algorithm and my algorithm (with parameters feature type = color, N = 25, and α ≈ 0). Since the index values are distances, lower values are better. It is quite clear that in terms of the Jaccard index my method outperforms Normalized Cuts, while in terms of the Rand index the methods seem comparable.
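The paired Z-test used below to compare the two methods' per-image index values can be sketched as follows (the numbers here are illustrative only, not the actual 591 index values):

```python
import math

def paired_z_test(a, b):
    """Two-sided paired Z-test on matched observations a and b.
    By the CLT, the mean of the per-pair differences is approximately
    normal for large n; returns (z, p)."""
    n = len(a)
    diffs = [x - y for x, y in zip(a, b)]
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    se = math.sqrt(var / n)                              # standard error
    z = mean / se
    # two-sided p-value from the standard normal CDF via the error function
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Hypothetical per-image distances: method a is consistently lower (better)
a = [0.30, 0.35, 0.32, 0.31, 0.36, 0.33]
b = [0.42, 0.44, 0.41, 0.45, 0.43, 0.46]
z, p = paired_z_test(a, b)  # strongly negative z, small p
```

Pairing by image removes per-image difficulty from the comparison, which is why the test is run on the differences rather than on the two sets of index values separately.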
To test the statistical significance of the differences in the index values obtained from the two methods, I used the paired Z-test. The null hypothesis is that there is no difference in the measured index values. Since my sample size is 591 (the number of images in the dataset), I can assume the mean difference in index values to be normally distributed by the Central Limit Theorem. I first computed the differences in index values for each sample. Then I computed the mean and standard deviation of the differences. From these, I computed the Z-score and corresponding p-value. The p-value for the Jaccard index was 3.148e-30; the p-value for the Rand index was larger, falling between 0.01 and 0.05. At α = 0.05, the null hypothesis is rejected and thus the differences are

significant. With a lower α value such as 0.01, however, the differences in Rand index values would not be considered significant.

V. CONCLUSION

I proposed an unsupervised method for image segmentation. Unlike most unsupervised alternatives, the approach does not need user input to determine the number of segments; instead, it uses statistical testing to select it automatically. The algorithm starts by merging superpixels with hierarchical agglomerative clustering to form a binary merger tree. It then prunes erroneous merges using the Kolmogorov-Smirnov test. The results indicate that the proposed method performs comparably to or better than the Normalized Cuts method. A limitation of the method is that pruning of the binary merger tree along any path stops as soon as the null hypothesis is not rejected. If a merge between two very different regions produces a new region that is similar to another region in the image, the larger regions could remain merged, and the better segmentation would not be produced. This is a tradeoff of pruning errors top-down versus bottom-up. As future work, a combined top-down and bottom-up merging/error-pruning method could be employed to alleviate such effects.

REFERENCES

[1] J. Shi and J. Malik, "Normalized cuts and image segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 888-905, 2000.
[2] I. T. Young, "Proof without Prejudice: Use of the Kolmogorov-Smirnov Test for the Analysis of Histograms from Flow Systems and Other Sources," The Journal of Histochemistry and Cytochemistry, vol. 25, no. 7, pp. 935-941, 1977.
[3] J. Shotton, J. Winn, C. Rother, and A. Criminisi, "TextonBoost: Joint Appearance, Shape and Context Modeling for Multi-Class Object Recognition and Segmentation," in Proc. European Conference on Computer Vision, 2006.
[4] E. Demidenko, "Kolmogorov-Smirnov Test for Image Comparison," Computational Science and Its Applications, LNCS vol. 3046, 2004.
[5] N. Ahuja and S. Todorovic, "Learning the Taxonomy and Models of Categories Present in Arbitrary Images," in Proc. IEEE International Conference on Computer Vision, 2007.
[6] E. J. Pauwels and G. Frederix, "Image Segmentation by Nonparametric Clustering Based on the Kolmogorov-Smirnov Distance," in Proc. European Conference on Computer Vision, LNCS vol. 1843, 2000.
[7] D. Comaniciu and P. Meer, "Mean shift: a robust approach toward feature space analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 5, pp. 603-619, 2002.
[8] P. F. Felzenszwalb and D. P. Huttenlocher, "Efficient graph-based image segmentation," International Journal of Computer Vision, vol. 59, no. 2, 2004.
[9] X. Ren and J. Malik, "Learning a classification model for segmentation," in Proc. International Conference on Computer Vision, vol. 1, pp. 10-17, 2003.
[10] S. van Dongen, "Performance criteria for graph clustering and Markov cluster experiments," Report INS-R0012, Center for Mathematics and Computer Science (CWI), Amsterdam, 2000.
[11] W. M. Rand, "Objective criteria for the evaluation of clustering methods," Journal of the American Statistical Association, vol. 66, pp. 846-850, 1971.
[12] A. Ben-Hur, A. Elisseeff, and I. Guyon, "A stability based method for discovering structure in clustered data," in Pacific Symposium on Biocomputing, 2002, pp. 6-17.

Fig. 4. The first column shows the original image. The second column shows the segmentation results of my algorithm. The third column shows the segmentation results of Normalized Cuts. For display, each segment is shown with the average pixel value of the original image within that segment.


More information

A Graph Theoretic Approach to Image Database Retrieval

A Graph Theoretic Approach to Image Database Retrieval A Graph Theoretic Approach to Image Database Retrieval Selim Aksoy and Robert M. Haralick Intelligent Systems Laboratory Department of Electrical Engineering University of Washington, Seattle, WA 98195-2500

More information

Improving the Efficiency of Fast Using Semantic Similarity Algorithm

Improving the Efficiency of Fast Using Semantic Similarity Algorithm International Journal of Scientific and Research Publications, Volume 4, Issue 1, January 2014 1 Improving the Efficiency of Fast Using Semantic Similarity Algorithm D.KARTHIKA 1, S. DIVAKAR 2 Final year

More information

Supervised texture detection in images

Supervised texture detection in images Supervised texture detection in images Branislav Mičušík and Allan Hanbury Pattern Recognition and Image Processing Group, Institute of Computer Aided Automation, Vienna University of Technology Favoritenstraße

More information

Clustering Web Documents using Hierarchical Method for Efficient Cluster Formation

Clustering Web Documents using Hierarchical Method for Efficient Cluster Formation Clustering Web Documents using Hierarchical Method for Efficient Cluster Formation I.Ceema *1, M.Kavitha *2, G.Renukadevi *3, G.sripriya *4, S. RajeshKumar #5 * Assistant Professor, Bon Secourse College

More information

CSE 473/573 Computer Vision and Image Processing (CVIP) Ifeoma Nwogu. Lectures 21 & 22 Segmentation and clustering

CSE 473/573 Computer Vision and Image Processing (CVIP) Ifeoma Nwogu. Lectures 21 & 22 Segmentation and clustering CSE 473/573 Computer Vision and Image Processing (CVIP) Ifeoma Nwogu Lectures 21 & 22 Segmentation and clustering 1 Schedule Last class We started on segmentation Today Segmentation continued Readings

More information

Image Segmentation. Selim Aksoy. Bilkent University

Image Segmentation. Selim Aksoy. Bilkent University Image Segmentation Selim Aksoy Department of Computer Engineering Bilkent University saksoy@cs.bilkent.edu.tr Examples of grouping in vision [http://poseidon.csd.auth.gr/lab_research/latest/imgs/s peakdepvidindex_img2.jpg]

More information

Image Segmentation. Selim Aksoy. Bilkent University

Image Segmentation. Selim Aksoy. Bilkent University Image Segmentation Selim Aksoy Department of Computer Engineering Bilkent University saksoy@cs.bilkent.edu.tr Examples of grouping in vision [http://poseidon.csd.auth.gr/lab_research/latest/imgs/s peakdepvidindex_img2.jpg]

More information

Robotics Programming Laboratory

Robotics Programming Laboratory Chair of Software Engineering Robotics Programming Laboratory Bertrand Meyer Jiwon Shin Lecture 8: Robot Perception Perception http://pascallin.ecs.soton.ac.uk/challenges/voc/databases.html#caltech car

More information

Image Analysis - Lecture 5

Image Analysis - Lecture 5 Texture Segmentation Clustering Review Image Analysis - Lecture 5 Texture and Segmentation Magnus Oskarsson Lecture 5 Texture Segmentation Clustering Review Contents Texture Textons Filter Banks Gabor

More information

Dr. Ulas Bagci

Dr. Ulas Bagci CAP5415-Computer Vision Lecture 11-Image Segmentation (BASICS): Thresholding, Region Growing, Clustering Dr. Ulas Bagci bagci@ucf.edu 1 Image Segmentation Aim: to partition an image into a collection of

More information

Segmentation of Images

Segmentation of Images Segmentation of Images SEGMENTATION If an image has been preprocessed appropriately to remove noise and artifacts, segmentation is often the key step in interpreting the image. Image segmentation is a

More information

Including the Size of Regions in Image Segmentation by Region Based Graph

Including the Size of Regions in Image Segmentation by Region Based Graph International Journal of Emerging Engineering Research and Technology Volume 3, Issue 4, April 2015, PP 81-85 ISSN 2349-4395 (Print) & ISSN 2349-4409 (Online) Including the Size of Regions in Image Segmentation

More information

Segmentation by Clustering

Segmentation by Clustering KECE471 Computer Vision Segmentation by Clustering Chang-Su Kim Chapter 14, Computer Vision by Forsyth and Ponce Note: Dr. Forsyth s notes are partly used. Jae-Kyun Ahn in Korea University made the first

More information

Image Segmentation for Image Object Extraction

Image Segmentation for Image Object Extraction Image Segmentation for Image Object Extraction Rohit Kamble, Keshav Kaul # Computer Department, Vishwakarma Institute of Information Technology, Pune kamble.rohit@hotmail.com, kaul.keshav@gmail.com ABSTRACT

More information

Clustering appearance and shape by learning jigsaws Anitha Kannan, John Winn, Carsten Rother

Clustering appearance and shape by learning jigsaws Anitha Kannan, John Winn, Carsten Rother Clustering appearance and shape by learning jigsaws Anitha Kannan, John Winn, Carsten Rother Models for Appearance and Shape Histograms Templates discard spatial info articulation, deformation, variation

More information

Clustering and Visualisation of Data

Clustering and Visualisation of Data Clustering and Visualisation of Data Hiroshi Shimodaira January-March 28 Cluster analysis aims to partition a data set into meaningful or useful groups, based on distances between data points. In some

More information

Keywords: clustering algorithms, unsupervised learning, cluster validity

Keywords: clustering algorithms, unsupervised learning, cluster validity Volume 6, Issue 1, January 2016 ISSN: 2277 128X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com Clustering Based

More information

Clustering. CE-717: Machine Learning Sharif University of Technology Spring Soleymani

Clustering. CE-717: Machine Learning Sharif University of Technology Spring Soleymani Clustering CE-717: Machine Learning Sharif University of Technology Spring 2016 Soleymani Outline Clustering Definition Clustering main approaches Partitional (flat) Hierarchical Clustering validation

More information

EE 701 ROBOT VISION. Segmentation

EE 701 ROBOT VISION. Segmentation EE 701 ROBOT VISION Regions and Image Segmentation Histogram-based Segmentation Automatic Thresholding K-means Clustering Spatial Coherence Merging and Splitting Graph Theoretic Segmentation Region Growing

More information

Semi-Automatic Transcription Tool for Ancient Manuscripts

Semi-Automatic Transcription Tool for Ancient Manuscripts The Venice Atlas A Digital Humanities atlas project by DH101 EPFL Students Semi-Automatic Transcription Tool for Ancient Manuscripts In this article, we investigate various techniques from the fields of

More information

Color Image Segmentation Using a Spatial K-Means Clustering Algorithm

Color Image Segmentation Using a Spatial K-Means Clustering Algorithm Color Image Segmentation Using a Spatial K-Means Clustering Algorithm Dana Elena Ilea and Paul F. Whelan Vision Systems Group School of Electronic Engineering Dublin City University Dublin 9, Ireland danailea@eeng.dcu.ie

More information

Analysis: TextonBoost and Semantic Texton Forests. Daniel Munoz Februrary 9, 2009

Analysis: TextonBoost and Semantic Texton Forests. Daniel Munoz Februrary 9, 2009 Analysis: TextonBoost and Semantic Texton Forests Daniel Munoz 16-721 Februrary 9, 2009 Papers [shotton-eccv-06] J. Shotton, J. Winn, C. Rother, A. Criminisi, TextonBoost: Joint Appearance, Shape and Context

More information

Topics to be Covered in the Rest of the Semester. CSci 4968 and 6270 Computational Vision Lecture 15 Overview of Remainder of the Semester

Topics to be Covered in the Rest of the Semester. CSci 4968 and 6270 Computational Vision Lecture 15 Overview of Remainder of the Semester Topics to be Covered in the Rest of the Semester CSci 4968 and 6270 Computational Vision Lecture 15 Overview of Remainder of the Semester Charles Stewart Department of Computer Science Rensselaer Polytechnic

More information

6.801/866. Segmentation and Line Fitting. T. Darrell

6.801/866. Segmentation and Line Fitting. T. Darrell 6.801/866 Segmentation and Line Fitting T. Darrell Segmentation and Line Fitting Gestalt grouping Background subtraction K-Means Graph cuts Hough transform Iterative fitting (Next time: Probabilistic segmentation)

More information

Unsupervised Learning : Clustering

Unsupervised Learning : Clustering Unsupervised Learning : Clustering Things to be Addressed Traditional Learning Models. Cluster Analysis K-means Clustering Algorithm Drawbacks of traditional clustering algorithms. Clustering as a complex

More information

Image Segmentation. Srikumar Ramalingam School of Computing University of Utah. Slides borrowed from Ross Whitaker

Image Segmentation. Srikumar Ramalingam School of Computing University of Utah. Slides borrowed from Ross Whitaker Image Segmentation Srikumar Ramalingam School of Computing University of Utah Slides borrowed from Ross Whitaker Segmentation Semantic Segmentation Indoor layout estimation What is Segmentation? Partitioning

More information

Segmentation & Clustering

Segmentation & Clustering EECS 442 Computer vision Segmentation & Clustering Segmentation in human vision K-mean clustering Mean-shift Graph-cut Reading: Chapters 14 [FP] Some slides of this lectures are courtesy of prof F. Li,

More information

Unsupervised Learning

Unsupervised Learning Unsupervised Learning Unsupervised learning Until now, we have assumed our training samples are labeled by their category membership. Methods that use labeled samples are said to be supervised. However,

More information

human vision: grouping k-means clustering graph-theoretic clustering Hough transform line fitting RANSAC

human vision: grouping k-means clustering graph-theoretic clustering Hough transform line fitting RANSAC COS 429: COMPUTER VISON Segmentation human vision: grouping k-means clustering graph-theoretic clustering Hough transform line fitting RANSAC Reading: Chapters 14, 15 Some of the slides are credited to:

More information

Clustering Part 4 DBSCAN

Clustering Part 4 DBSCAN Clustering Part 4 Dr. Sanjay Ranka Professor Computer and Information Science and Engineering University of Florida, Gainesville DBSCAN DBSCAN is a density based clustering algorithm Density = number of

More information

Combining Top-down and Bottom-up Segmentation

Combining Top-down and Bottom-up Segmentation Combining Top-down and Bottom-up Segmentation Authors: Eran Borenstein, Eitan Sharon, Shimon Ullman Presenter: Collin McCarthy Introduction Goal Separate object from background Problems Inaccuracies Top-down

More information

Image Segmentation continued Graph Based Methods. Some slides: courtesy of O. Capms, Penn State, J.Ponce and D. Fortsyth, Computer Vision Book

Image Segmentation continued Graph Based Methods. Some slides: courtesy of O. Capms, Penn State, J.Ponce and D. Fortsyth, Computer Vision Book Image Segmentation continued Graph Based Methods Some slides: courtesy of O. Capms, Penn State, J.Ponce and D. Fortsyth, Computer Vision Book Previously Binary segmentation Segmentation by thresholding

More information

Lecture 10: Semantic Segmentation and Clustering

Lecture 10: Semantic Segmentation and Clustering Lecture 10: Semantic Segmentation and Clustering Vineet Kosaraju, Davy Ragland, Adrien Truong, Effie Nehoran, Maneekwan Toyungyernsub Department of Computer Science Stanford University Stanford, CA 94305

More information

Graph-based High Level Motion Segmentation using Normalized Cuts

Graph-based High Level Motion Segmentation using Normalized Cuts Graph-based High Level Motion Segmentation using Normalized Cuts Sungju Yun, Anjin Park and Keechul Jung Abstract Motion capture devices have been utilized in producing several contents, such as movies

More information

Segmentation and Grouping

Segmentation and Grouping 02/23/10 Segmentation and Grouping Computer Vision CS 543 / ECE 549 University of Illinois Derek Hoiem Last week Clustering EM Today s class More on EM Segmentation and grouping Gestalt cues By boundaries

More information

Texture. Texture is a description of the spatial arrangement of color or intensities in an image or a selected region of an image.

Texture. Texture is a description of the spatial arrangement of color or intensities in an image or a selected region of an image. Texture Texture is a description of the spatial arrangement of color or intensities in an image or a selected region of an image. Structural approach: a set of texels in some regular or repeated pattern

More information

Cell Clustering Using Shape and Cell Context. Descriptor

Cell Clustering Using Shape and Cell Context. Descriptor Cell Clustering Using Shape and Cell Context Descriptor Allison Mok: 55596627 F. Park E. Esser UC Irvine August 11, 2011 Abstract Given a set of boundary points from a 2-D image, the shape context captures

More information

Discriminative Clustering for Image Co-segmentation

Discriminative Clustering for Image Co-segmentation Discriminative Clustering for Image Co-segmentation Armand Joulin Francis Bach Jean Ponce INRIA Ecole Normale Supérieure, Paris January 2010 Introduction Introduction Task: dividing simultaneously q images

More information

Analysis of Image and Video Using Color, Texture and Shape Features for Object Identification

Analysis of Image and Video Using Color, Texture and Shape Features for Object Identification IOSR Journal of Computer Engineering (IOSR-JCE) e-issn: 2278-0661,p-ISSN: 2278-8727, Volume 16, Issue 6, Ver. VI (Nov Dec. 2014), PP 29-33 Analysis of Image and Video Using Color, Texture and Shape Features

More information

Cluster Analysis. Ying Shen, SSE, Tongji University

Cluster Analysis. Ying Shen, SSE, Tongji University Cluster Analysis Ying Shen, SSE, Tongji University Cluster analysis Cluster analysis groups data objects based only on the attributes in the data. The main objective is that The objects within a group

More information

Understanding Clustering Supervising the unsupervised

Understanding Clustering Supervising the unsupervised Understanding Clustering Supervising the unsupervised Janu Verma IBM T.J. Watson Research Center, New York http://jverma.github.io/ jverma@us.ibm.com @januverma Clustering Grouping together similar data

More information

Grouping and Segmentation

Grouping and Segmentation Grouping and Segmentation CS 554 Computer Vision Pinar Duygulu Bilkent University (Source:Kristen Grauman ) Goals: Grouping in vision Gather features that belong together Obtain an intermediate representation

More information

Bioimage Informatics

Bioimage Informatics Bioimage Informatics Lecture 13, Spring 2012 Bioimage Data Analysis (IV) Image Segmentation (part 2) Lecture 13 February 29, 2012 1 Outline Review: Steger s line/curve detection algorithm Intensity thresholding

More information

Region-based Segmentation

Region-based Segmentation Region-based Segmentation Image Segmentation Group similar components (such as, pixels in an image, image frames in a video) to obtain a compact representation. Applications: Finding tumors, veins, etc.

More information

ECLT 5810 Clustering

ECLT 5810 Clustering ECLT 5810 Clustering What is Cluster Analysis? Cluster: a collection of data objects Similar to one another within the same cluster Dissimilar to the objects in other clusters Cluster analysis Grouping

More information

Cluster Validation. Ke Chen. Reading: [25.1.2, KPM], [Wang et al., 2009], [Yang & Chen, 2011] COMP24111 Machine Learning

Cluster Validation. Ke Chen. Reading: [25.1.2, KPM], [Wang et al., 2009], [Yang & Chen, 2011] COMP24111 Machine Learning Cluster Validation Ke Chen Reading: [5.., KPM], [Wang et al., 9], [Yang & Chen, ] COMP4 Machine Learning Outline Motivation and Background Internal index Motivation and general ideas Variance-based internal

More information

Context-sensitive Classification Forests for Segmentation of Brain Tumor Tissues

Context-sensitive Classification Forests for Segmentation of Brain Tumor Tissues Context-sensitive Classification Forests for Segmentation of Brain Tumor Tissues D. Zikic, B. Glocker, E. Konukoglu, J. Shotton, A. Criminisi, D. H. Ye, C. Demiralp 3, O. M. Thomas 4,5, T. Das 4, R. Jena

More information

Enhancing Forecasting Performance of Naïve-Bayes Classifiers with Discretization Techniques

Enhancing Forecasting Performance of Naïve-Bayes Classifiers with Discretization Techniques 24 Enhancing Forecasting Performance of Naïve-Bayes Classifiers with Discretization Techniques Enhancing Forecasting Performance of Naïve-Bayes Classifiers with Discretization Techniques Ruxandra PETRE

More information

Overview of Clustering

Overview of Clustering based on Loïc Cerfs slides (UFMG) April 2017 UCBL LIRIS DM2L Example of applicative problem Student profiles Given the marks received by students for different courses, how to group the students so that

More information

University of Florida CISE department Gator Engineering. Clustering Part 4

University of Florida CISE department Gator Engineering. Clustering Part 4 Clustering Part 4 Dr. Sanjay Ranka Professor Computer and Information Science and Engineering University of Florida, Gainesville DBSCAN DBSCAN is a density based clustering algorithm Density = number of

More information

CSE 158 Lecture 6. Web Mining and Recommender Systems. Community Detection

CSE 158 Lecture 6. Web Mining and Recommender Systems. Community Detection CSE 158 Lecture 6 Web Mining and Recommender Systems Community Detection Dimensionality reduction Goal: take high-dimensional data, and describe it compactly using a small number of dimensions Assumption:

More information

Definition, Detection, and Evaluation of Meeting Events in Airport Surveillance Videos

Definition, Detection, and Evaluation of Meeting Events in Airport Surveillance Videos Definition, Detection, and Evaluation of Meeting Events in Airport Surveillance Videos Sung Chun Lee, Chang Huang, and Ram Nevatia University of Southern California, Los Angeles, CA 90089, USA sungchun@usc.edu,

More information

A Connection between Network Coding and. Convolutional Codes

A Connection between Network Coding and. Convolutional Codes A Connection between Network Coding and 1 Convolutional Codes Christina Fragouli, Emina Soljanin christina.fragouli@epfl.ch, emina@lucent.com Abstract The min-cut, max-flow theorem states that a source

More information

K-Means Clustering Using Localized Histogram Analysis

K-Means Clustering Using Localized Histogram Analysis K-Means Clustering Using Localized Histogram Analysis Michael Bryson University of South Carolina, Department of Computer Science Columbia, SC brysonm@cse.sc.edu Abstract. The first step required for many

More information

Ensemble Segmentation Using Efficient Integer Linear Programming

Ensemble Segmentation Using Efficient Integer Linear Programming IEEE PATTERN RECOGNITION AND MACHINE INTELLIGENCE, 2012 1 Ensemble Segmentation Using Efficient Integer Linear Programming Amir Alush and Jacob Goldberger Abstract We present a method for combining several

More information

INF 4300 Classification III Anne Solberg The agenda today:

INF 4300 Classification III Anne Solberg The agenda today: INF 4300 Classification III Anne Solberg 28.10.15 The agenda today: More on estimating classifier accuracy Curse of dimensionality and simple feature selection knn-classification K-means clustering 28.10.15

More information

Lecture: k-means & mean-shift clustering

Lecture: k-means & mean-shift clustering Lecture: k-means & mean-shift clustering Juan Carlos Niebles and Ranjay Krishna Stanford Vision and Learning Lab 1 Recap: Image Segmentation Goal: identify groups of pixels that go together 2 Recap: Gestalt

More information

Lecture: k-means & mean-shift clustering

Lecture: k-means & mean-shift clustering Lecture: k-means & mean-shift clustering Juan Carlos Niebles and Ranjay Krishna Stanford Vision and Learning Lab Lecture 11-1 Recap: Image Segmentation Goal: identify groups of pixels that go together

More information

Hierarchical Clustering

Hierarchical Clustering What is clustering Partitioning of a data set into subsets. A cluster is a group of relatively homogeneous cases or observations Hierarchical Clustering Mikhail Dozmorov Fall 2016 2/61 What is clustering

More information

Targil 12 : Image Segmentation. Image segmentation. Why do we need it? Image segmentation

Targil 12 : Image Segmentation. Image segmentation. Why do we need it? Image segmentation Targil : Image Segmentation Image segmentation Many slides from Steve Seitz Segment region of the image which: elongs to a single object. Looks uniform (gray levels, color ) Have the same attributes (texture

More information

Tri-modal Human Body Segmentation

Tri-modal Human Body Segmentation Tri-modal Human Body Segmentation Master of Science Thesis Cristina Palmero Cantariño Advisor: Sergio Escalera Guerrero February 6, 2014 Outline 1 Introduction 2 Tri-modal dataset 3 Proposed baseline 4

More information

Segmentation and Grouping

Segmentation and Grouping Segmentation and Grouping How and what do we see? Fundamental Problems ' Focus of attention, or grouping ' What subsets of pixels do we consider as possible objects? ' All connected subsets? ' Representation

More information

Lecture 7: Segmentation. Thursday, Sept 20

Lecture 7: Segmentation. Thursday, Sept 20 Lecture 7: Segmentation Thursday, Sept 20 Outline Why segmentation? Gestalt properties, fun illusions and/or revealing examples Clustering Hierarchical K-means Mean Shift Graph-theoretic Normalized cuts

More information

AN IMPROVED K-MEANS CLUSTERING ALGORITHM FOR IMAGE SEGMENTATION

AN IMPROVED K-MEANS CLUSTERING ALGORITHM FOR IMAGE SEGMENTATION AN IMPROVED K-MEANS CLUSTERING ALGORITHM FOR IMAGE SEGMENTATION WILLIAM ROBSON SCHWARTZ University of Maryland, Department of Computer Science College Park, MD, USA, 20742-327, schwartz@cs.umd.edu RICARDO

More information

Experimentation on the use of Chromaticity Features, Local Binary Pattern and Discrete Cosine Transform in Colour Texture Analysis

Experimentation on the use of Chromaticity Features, Local Binary Pattern and Discrete Cosine Transform in Colour Texture Analysis Experimentation on the use of Chromaticity Features, Local Binary Pattern and Discrete Cosine Transform in Colour Texture Analysis N.Padmapriya, Ovidiu Ghita, and Paul.F.Whelan Vision Systems Laboratory,

More information

Colour Segmentation-based Computation of Dense Optical Flow with Application to Video Object Segmentation

Colour Segmentation-based Computation of Dense Optical Flow with Application to Video Object Segmentation ÖGAI Journal 24/1 11 Colour Segmentation-based Computation of Dense Optical Flow with Application to Video Object Segmentation Michael Bleyer, Margrit Gelautz, Christoph Rhemann Vienna University of Technology

More information

Cluster analysis. Agnieszka Nowak - Brzezinska

Cluster analysis. Agnieszka Nowak - Brzezinska Cluster analysis Agnieszka Nowak - Brzezinska Outline of lecture What is cluster analysis? Clustering algorithms Measures of Cluster Validity What is Cluster Analysis? Finding groups of objects such that

More information

Pattern Recognition. Kjell Elenius. Speech, Music and Hearing KTH. March 29, 2007 Speech recognition

Pattern Recognition. Kjell Elenius. Speech, Music and Hearing KTH. March 29, 2007 Speech recognition Pattern Recognition Kjell Elenius Speech, Music and Hearing KTH March 29, 2007 Speech recognition 2007 1 Ch 4. Pattern Recognition 1(3) Bayes Decision Theory Minimum-Error-Rate Decision Rules Discriminant

More information