Classification Vladimir Curic Centre for Image Analysis Swedish University of Agricultural Sciences Uppsala University

Outline An overview of classification Basics of classification How to choose appropriate features (feature set) How to perform classification Classifiers

Where are we right now? Figure: Classification in image processing.

What is intelligence? Ability to separate relevant from irrelevant information Ability to learn from examples and to generalize knowledge so that it can be used in other situations Ability to draw conclusions from incomplete information

What is classification? Classification is a process in which individual items (objects/patterns/image regions/pixels) are grouped based on the similarity between the item and the description of the group Figure: Classification process.

What is classification? Distinguish between different types of objects Determine which parts of the image belong to the object of interest We would like to create an intelligent system that can draw conclusions from our image data Often used as the last stage in an automated image analysis task

Classification example 1 Figure: Classification of leaves.

Classification example 2 Figure: Histological bone implant images.

Some important concepts Arrangements of descriptors are often called patterns Descriptors are often called features The most common pattern arrangement is a feature vector with n dimensions Patterns are placed in classes of objects which share common properties A collection of N classes is denoted ω_1, ω_2, ..., ω_N

Terminology Object = pattern = point = sample = vector Feature = attribute = measurement Classifier = decision function (boundary) Class Cluster

Dataset By measuring the features of many objects we construct a dataset:

Object    Perimeter  Area  Label
Apple 1   25         80    1
Apple 2   30         78    1
Apple 3   29         81    1
Pear 1    52         65    2
Pear 2    48         66    2
Pear 3    51         63    2

Data set and a representation in the feature space

Features Features are the individual measurable heuristic properties of the observed phenomena Discriminating (effective) features Independent features Examples: area, perimeter, texture, colour, ...

What are good features? Each pattern is represented in terms of n features x = (x_1, ..., x_n) The goal is to choose those features that allow pattern vectors belonging to different classes to occupy compact and disjoint regions It is application dependent You might try many, many features, until you find the right ones Often, people compute 100s of features, and put them all in a classifier The classifier will figure out which ones are good This is wrong!!!

Peaking phenomenon (Curse of dimensionality) Additional features may actually degrade the performance of a classifier This paradox is called peaking phenomenon (curse of dimensionality) Figure: Curse of dimensionality.

Feature set Colour? Area? Perimeter?...

Dimensionality reduction Keep the number of features as small as possible Measurement cost Accuracy Simplify pattern representation The resulting classifier will be faster and will use less memory A reduction in the number of features may lead to a loss in discriminatory power and thereby lower accuracy

Feature extraction Feature extraction is the process of generating features to be used in the selection and classification tasks Reduce dimensionality by a (linear or non-linear) projection of the n-dimensional feature vector onto an m-dimensional vector (m < n) The new features may not have a clear physical meaning Figure: Feature extraction.

Feature selection Feature selection methods choose features from the original set based on some criteria Reduce dimensionality by selecting subset of original features Have clear physical meaning Figure: Feature selection.

Choice of a criterion function The main issue in dimensionality reduction is the choice of a criterion function The most commonly used criterion is the classification error of a feature subset Example: a measure of distance between two distributions f_1 and f_2 Mahalanobis distance (main assumption: Gaussian class distributions with equal covariance matrix Σ): D_M(f_1, f_2) = (m_1 - m_2)^T Σ^{-1} (m_1 - m_2), where m_1 is the mean of objects in class 1 and m_2 is the mean of objects in class 2
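As a minimal sketch of this criterion, the Mahalanobis distance as defined above can be computed between two class means, assuming the shared covariance matrix Σ is estimated by pooling the two classes; the arrays X1 and X2 (one sample per row) are illustrative names.

```python
import numpy as np

def mahalanobis_between_means(X1, X2):
    """Mahalanobis distance D_M between the means of two classes,
    assuming both classes share the same covariance matrix (pooled estimate)."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    n1, n2 = len(X1), len(X2)
    # Pooled covariance under the equal-covariance assumption
    Sigma = ((n1 - 1) * np.cov(X1, rowvar=False) +
             (n2 - 1) * np.cov(X2, rowvar=False)) / (n1 + n2 - 2)
    diff = m1 - m2
    return float(diff @ np.linalg.inv(Sigma) @ diff)
```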

Feature selection methods Exhaustive search Best individual features Sequential forward selection Sequential backward selection

Exhaustive search Evaluate all (n choose m) possible subsets Guaranteed to find the optimal subset Very expensive! Example: if n = 50 and m = 5, there are 2 118 760 possible subsets (classifications) to evaluate
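A quick check of that count (a one-liner, assuming Python 3.8+ for math.comb):

```python
from math import comb

print(comb(50, 5))  # 2118760 possible subsets for n = 50, m = 5
```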

Best individual features Evaluate all n features individually Select the best m ≤ n individual features Simple, but not likely to lead to an optimal subset Example: even if feature 1 is the best and feature 2 is the second best individually, features 3 and 4 together may outperform features 1 and 2!

Sequential forward selection Select the best individual feature, then add one feature at a time Once a feature is retained, it cannot be discarded Computationally fast (for a subset of size 2, examine n - 1 possible subsets)
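A minimal sketch of sequential forward selection, assuming a user-supplied criterion function score(subset), for example cross-validated classification accuracy; the function name is illustrative.

```python
def sequential_forward_selection(n_features, m, score):
    """Greedily build a feature subset of size m.
    `score(subset)` is a user-supplied criterion function (higher is better)."""
    selected = []
    remaining = list(range(n_features))
    while len(selected) < m:
        # Try adding each remaining feature to the current subset
        best = max(remaining, key=lambda f: score(selected + [f]))
        selected.append(best)        # once retained, a feature is never discarded
        remaining.remove(best)
    return selected
```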

Sequential backward selection Starts with all n features and successively deletes one feature at a time Once a feature is deleted, it cannot be brought back into the optimal subset Requires more computation time than sequential forward selection

Search algorithms The presented algorithms are suboptimal When should we use forward selection? When should we use backward selection? Are there any optimal algorithms? Branch and bound (divide and conquer method) Genetic algorithms Simulated annealing

Supervised vs. Unsupervised classification Supervised First apply knowledge, then classify Unsupervised First classify, then apply knowledge Figure: Supervised and unsupervised classification.

Object-wise and pixel-wise classification Object-wise classification Uses shape information to describe patterns Size, mean intensity, mean color, etc. Pixel-wise classification Uses information from individual pixels Intensity, color, texture, spectral information

Object-wise classification Lab 2

Object-wise classification Segment the image into regions and label them Extract features for each pattern Train a classifier on examples with known class to find a discriminant function in the feature space For new examples, decide their class using the discriminant function
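A sketch of this object-wise pipeline using scikit-image and scikit-learn (assumed tooling; the perimeter/area features and the k-NN classifier are illustrative choices, not the lecture's prescribed ones).

```python
import numpy as np
from skimage.measure import label, regionprops
from sklearn.neighbors import KNeighborsClassifier

def region_features(binary_mask):
    """One (perimeter, area) feature vector per labelled region in a binary mask."""
    return np.array([[r.perimeter, r.area] for r in regionprops(label(binary_mask))])

# Training on segmented objects with known classes (illustrative names):
# X_train = region_features(train_mask); y_train = known_labels
# clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
# Classifying newly segmented objects:
# y_pred = clf.predict(region_features(new_mask))
```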

Pixel-wise classification 256 × 256 patterns (pixels) 3 features (red, green and blue color) Four classes (stamen, leaf, stalk and background)

Pixel-wise classification The pattern is a pixel in a non-segmented image Extract (calculate) features for each pattern (pixel), e.g., color, gray-level representation of texture, temporal changes Train a classifier New samples are classified by the trained classifier

Train and classify Training Find rules and discriminant function that separate patterns in different classes using known examples Classification Take a new unknown example and put it into the correct class using discriminant function

Training data Training data can be obtained from available training samples The trained classifier is then used to classify patterns that were not used during the training stage The performance of the classifier depends on the number of available training samples as well as on the specific values of the samples The number of training samples should not be too small!

Training data Conditional densities are unknown and have to be learned from the available training data Sometimes the form of the density is known (for example, multivariate Gaussian) A common strategy is to replace the unknown parameters by their estimated values The less information that is available, the more difficult the classification problem becomes

How to choose appropriate training set?

How to choose appropriate training set? Figure: Original image and training image.

How to choose appropriate training set? Figure: Original image and training image.

Classifiers Once feature selection has found a proper representation, a classifier can be designed using a number of possible approaches The performance of a classifier depends on the interrelationship between sample size, number of features and classifier complexity The choice of a classifier is a difficult problem!!! It is often based on which classifier(s) happen to be available or best known to the user

Most commonly used classifiers The user can modify several associated parameters and the criterion function There exist classification problems for which they are the best choice Classifiers (covered in this lecture): Minimum distance classifier Maximum likelihood classifier Nearest neighbour classifier

Discriminant functions A discriminant function for a class is a function that yields larger values than the functions for the other classes if the pattern belongs to that class: d_i(x) > d_j(x) for all j = 1, 2, ..., N, j ≠ i For N pattern classes, we have N discriminant functions The decision boundary between class i and class j is d_i(x) - d_j(x) = 0
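A minimal sketch of this decision rule, assuming the discriminant functions are given as Python callables (illustrative, not tied to a specific classifier):

```python
import numpy as np

def classify(x, discriminants):
    """Assign pattern x to the class i whose discriminant d_i(x) is largest.
    `discriminants` is a list of callables, one per class."""
    return int(np.argmax([d(x) for d in discriminants]))
```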

Decision boundary

Decision boundary Figure: Decision boundary.

Linear and quadratic classifier Figure: Linear (left) and quadratic (right) classifier.

Thresholding - simple classification Classify the image into foreground and background Figure: Original image and thresholded image.
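A sketch of thresholding as a two-class pixel classifier; Otsu's method from scikit-image is used here as an assumed, illustrative way to pick the threshold.

```python
from skimage.filters import threshold_otsu

def threshold_foreground(gray_image):
    """Two-class pixel classification: True = foreground, False = background."""
    t = threshold_otsu(gray_image)   # data-driven threshold; a fixed value also works
    return gray_image > t
```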

Box classification Generalized thresholding (multispectral thresholding) Interval for each class and feature All objects with feature vector within the same box belong to the same class

Bayesian classifiers Based on a priori knowledge of class probability Cost of errors Minimum distance classifier Maximum likelihood classifier

Minimum distance classifier Each class is represented by its mean vector Training is done using the objects (pixels) of known class The mean of the feature vectors for the objects within each class is calculated New objects are classified by finding the closest mean vector
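A minimal NumPy sketch of this classifier (array and function names are illustrative):

```python
import numpy as np

def train_min_distance(X, y):
    """Compute the mean feature vector of each class from labelled training data."""
    classes = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    return classes, means

def predict_min_distance(X_new, classes, means):
    """Assign each new sample to the class with the closest mean (Euclidean distance)."""
    dists = np.linalg.norm(X_new[:, None, :] - means[None, :, :], axis=2)
    return classes[np.argmin(dists, axis=1)]
```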

Minimum distance classifier

Limitations of the minimum distance classifier Useful when the distance between the means is large compared to the randomness of each class with respect to its mean Optimal performance when the class distributions have a spherical shape

Maximum likelihood classifier Classify according to the largest probability (taking variance and covariance into consideration) Assume that the distribution within each class is Gaussian The distribution in each class can then be described by a mean vector and a covariance matrix

Maximum likelihood classifier For each class, compute from the training data: Mean vector Covariance matrix Form a decision function for each class New objects are classified to the class with the highest probability
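A sketch of a Gaussian maximum likelihood classifier with NumPy and SciPy (assumed tooling; equal class priors are assumed for simplicity):

```python
import numpy as np
from scipy.stats import multivariate_normal

def train_ml(X, y):
    """Per-class mean vector and covariance matrix (Gaussian assumption)."""
    return [(c, X[y == c].mean(axis=0), np.cov(X[y == c], rowvar=False))
            for c in np.unique(y)]

def predict_ml(X_new, params):
    """Assign each sample to the class with the highest (log-)likelihood,
    assuming equal class priors."""
    loglik = np.column_stack([multivariate_normal.logpdf(X_new, mean=m, cov=S)
                              for _, m, S in params])
    labels = np.array([c for c, _, _ in params])
    return labels[np.argmax(loglik, axis=1)]
```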

Variance and covariance Variance: spread (randomness) within a class Covariance: influence (dependency) between different features Described by the covariance matrix:

            Feature 1  Feature 2  Feature 3
Feature 1   1          1&2        1&3
Feature 2   1&2        2          2&3
Feature 3   1&3        2&3        3

Computation of covariance Features: x_1, x_2, ..., x_n Feature vector for object k: (x_{1,k}, x_{2,k}, ..., x_{n,k}) Mean of each feature within the class: x̄_1, x̄_2, ..., x̄_n Covariance (N objects in the class): cov(x_i, x_j) = 1/(N-1) Σ_{k=1}^{N} (x_{i,k} - x̄_i)(x_{j,k} - x̄_j)

Σ = [ cov(x_1, x_1)  cov(x_1, x_2)  cov(x_1, x_3)
      cov(x_2, x_1)  cov(x_2, x_2)  cov(x_2, x_3)
      cov(x_3, x_1)  cov(x_3, x_2)  cov(x_3, x_3) ]
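A small numerical check of this formula against NumPy's np.cov, which uses the same 1/(N-1) normalisation; the data values reuse the apple rows from the dataset table above.

```python
import numpy as np

# Rows = objects, columns = features (perimeter, area of the three apples above)
X = np.array([[25., 80.], [30., 78.], [29., 81.]])
N = len(X)
mean = X.mean(axis=0)
# cov(x_i, x_j) = 1/(N-1) * sum_k (x_{i,k} - mean_i)(x_{j,k} - mean_j), for all i, j at once
Sigma = (X - mean).T @ (X - mean) / (N - 1)
assert np.allclose(Sigma, np.cov(X, rowvar=False))   # np.cov uses the same normalisation
print(Sigma)
```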

Density functions Figure: class-conditional density functions with equal variance, with different variance, and with non-zero covariance.

Assumptions on the covariance matrix Case 1 (Minimum distance): no covariance, equal variance Case 2 (Equal covariance): same covariance for all classes Case 3 (Uncorrelated): no covariance, different variance Case 4 (General): different covariances

Case 1 Independent features Equal variance Minimum distance classifier

Case 2 Equal covariances for all classes

Case 3 Independent features - no covariance Different variance for different features

Case 4 General Complicated decision boundaries

Nearest neighbour Stores all training samples Assigns a pattern to the majority class among its k nearest neighbours

k nearest neighbour Assigns a pattern to the majority class among its k nearest neighbours

k nearest neighbour Metric dependent Lazy: the function is approximated only locally The best choice of k depends on the data Sensitive to outliers Larger values of k reduce the effects of noise, but make the boundaries between the classes less distinct
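A minimal k-NN sketch with NumPy (Euclidean metric; X_train, y_train and x_new are illustrative names, with y_train a NumPy array of labels):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_new, k=3):
    """Classify x_new by majority vote among its k nearest training samples
    (Euclidean metric, so the result is metric dependent)."""
    dists = np.linalg.norm(X_train - x_new, axis=1)     # distance to every stored sample
    nearest = np.argsort(dists)[:k]                     # indices of the k closest samples
    return Counter(y_train[nearest]).most_common(1)[0][0]
```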

Combine classifiers! Each of the classifiers can be developed in a different context and for an entirely different representation/description (identification of persons by their voice, face, or handwriting) More than a single training set may be available

Combine classifiers! Different classifiers trained on the same data may differ only in their global performance but show strong local differences Combination is very useful if the individual classifiers are independent

Combination Parallel: all classifiers are evaluated independently and their results are combined (first weight, then combine!!!) Serial (linear sequence): the number of possible classes for a given pattern is gradually reduced as more classifiers in the sequence become involved; simple classifiers (low computational and measurement cost) are considered first, and more accurate but expensive classifiers are applied later Hierarchical (tree-like): individual classifiers are combined into a tree-like structure Advantages: high efficiency and flexibility
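A sketch of the parallel scheme as a weighted combination of per-class scores; the weights and the assumption that each classifier outputs class probabilities are illustrative.

```python
import numpy as np

def parallel_combine(class_probabilities, weights):
    """Parallel combination: weight each classifier's class probabilities first,
    then combine and pick the class with the largest combined score."""
    P = np.asarray(class_probabilities)   # shape (n_classifiers, n_classes)
    w = np.asarray(weights, dtype=float)  # one weight per classifier
    return int(np.argmax(w @ P))
```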

Unsupervised classification It is difficult, expensive or even impossible to reliably label a training sample with its true category (it is not possible to obtain ground truth) Patterns within a cluster are more similar to each other than are patterns belonging to different clusters

Unsupervised classification Difficult How to determine the number of clusters K? The number of clusters often depends on resolution (fine vs. coarse)

K means Step 1. Select an initial partition with K clusters Step 2. Generate a new partition by assigning each pattern to its closest cluster centre Step 3. Compute the new cluster centres as the centroids of the clusters Step 4. Repeat steps 2 and 3 until the cluster membership stabilizes (an optimum value of the criterion function is found)
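A plain NumPy sketch of these steps (it assumes no cluster ever becomes empty; a production implementation would handle that case):

```python
import numpy as np

def kmeans(X, K, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), K, replace=False)]          # Step 1: initial centres
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # Step 2: assign each pattern to its closest cluster centre
        labels = np.argmin(np.linalg.norm(X[:, None] - centres[None], axis=2), axis=1)
        # Step 3: recompute the centres as the centroids of the clusters
        new_centres = np.array([X[labels == k].mean(axis=0) for k in range(K)])
        if np.allclose(new_centres, centres):                  # Step 4: stop when stable
            break
        centres = new_centres
    return labels, centres
```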

K means

K means

K means - disadvantages Different initializations can result in different final clusters A fixed number of clusters can make it difficult to predict what K should be It is helpful to rerun the clustering with the same as well as different K values and compare the results 10 different initializations for 2D data; for N-dimensional data, 10 different initializations is often not enough!

How to determine K? Davies-Bouldin index (DB index) It works for spherical clusters Combines an intra-cluster and an inter-cluster measure
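A sketch of choosing K with the Davies-Bouldin index using scikit-learn (assumed tooling; the candidate range of K values is illustrative):

```python
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

def best_k_by_db(X, k_values=range(2, 11)):
    """Run K-means for several K and keep the K with the lowest DB index
    (lower is better)."""
    scores = {k: davies_bouldin_score(X, KMeans(n_clusters=k, n_init=10).fit_predict(X))
              for k in k_values}
    return min(scores, key=scores.get), scores
```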

Hierarchical clustering Hierarchical clustering constructs a clustering tree (dendrogram) Start with each object/pixel as its own class Merge the classes that are closest according to some distance measure Continue until only one class remains Decide the number of classes based on the distances in the tree

Simple dendrogram

Distance measures and linking rules Distance measures: Euclidean City block Mahalanobis Linking rules - how distances are measured when a cluster contains more than one object Single linkage (nearest neighbours) Complete linkage (furthest neighbours) Mean linkage (distance between cluster centres)
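A sketch of agglomerative clustering with SciPy, where the metric and linking rule map directly onto the options above (assumed tooling; scipy.cluster.hierarchy.dendrogram can plot the resulting tree):

```python
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def hierarchical_clusters(X, n_clusters, metric="euclidean", method="single"):
    """Agglomerative clustering: `metric` is the distance measure,
    `method` the linking rule ('single', 'complete', 'average', ...)."""
    Z = linkage(pdist(X, metric=metric), method=method)   # builds the dendrogram tree
    # Cut the tree so that exactly n_clusters classes remain
    return fcluster(Z, t=n_clusters, criterion="maxclust")
```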

Linking rules Dependent on the metric Figure: complete-linkage distance (d_complete, red) and single-linkage distance (d_single, blue).

Summary and conclusions Classification is needed in image processing It is highly application dependent Features and classifiers Using more features does not guarantee a better result!