Clustering and Classification

Clustering and Classification
Jean Yee Hwa Yang, University of California, San Francisco
http://www.biostat.ucsf.edu/jean/
Institute for Mathematical Sciences, National University of Singapore, January 2-6, 2004

Clustering and classification
Task: assign objects to classes (groups) on the basis of measurements made on the objects.
- Unsupervised: classes unknown, want to discover them from the data (cluster analysis).
- Supervised: classes are predefined, want to use a (training or learning) set of labeled objects to form a classifier for classification of future observations.

Basic principles of clustering
Aim: to group observations that are similar based on predefined criteria.
Issues:
- Which genes / arrays to use?
- Which similarity or dissimilarity measure?
- Which clustering algorithm?
It is advisable to reduce the number of genes from the full set to a more manageable number before clustering. The basis for this reduction is usually quite context specific.

Clustering expression data
A typical pipeline: for each gene, calculate a summary statistic and/or adjusted p-value; select a set of candidate differentially expressed (DE) genes; apply a similarity metric and a clustering algorithm; then proceed to biological verification and descriptive interpretation.

Which similarity or dissimilarity measure?
A metric is a measure of the similarity or dissimilarity between two data objects and is used to group data points into clusters. Two main classes of distance:
- Correlation coefficients: compare the shape of expression curves. Two types of correlation: centered and un-centered.
- Distance metrics: City Block (Manhattan) distance and Euclidean distance.

Correlation (a measure between -1 and 1)
Pearson correlation coefficient:
  r(x, y) = \frac{1}{n-1} \sum_{i=1}^{n} \left( \frac{x_i - \bar{x}}{S_x} \right) \left( \frac{y_i - \bar{y}}{S_y} \right)
where S_x and S_y are the standard deviations of x and y. Others include Spearman's ρ and Kendall's τ.

Potential pitfalls: a correlation of 1 in absolute value corresponds to either perfect positive or perfect negative correlation; you can use the absolute correlation to capture both positive and negative correlation.
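
As a minimal sketch of how these correlation-based dissimilarities can be computed, the following Python snippet uses numpy/scipy on simulated data (the lecture itself does not specify software; everything here is illustrative).

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

# Simulated expression matrix: 5 genes (rows) x 8 arrays (columns); made-up data.
rng = np.random.default_rng(0)
expr = rng.normal(size=(5, 8))

# Pearson-correlation-based dissimilarity: 1 - r (scipy's "correlation" metric).
corr_dist = squareform(pdist(expr, metric="correlation"))

# Absolute-correlation dissimilarity: 1 - |r|, which treats strong negative
# correlation the same as strong positive correlation.
r = np.corrcoef(expr)
abs_corr_dist = 1 - np.abs(r)

print(np.round(corr_dist, 2))
print(np.round(abs_corr_dist, 2))
```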

Distance metrics
City Block (Manhattan) distance:
- Sum of absolute differences across dimensions.
- Less sensitive to outliers.
- Diamond-shaped clusters.
- d(X, Y) = \sum_i |x_i - y_i|
Euclidean distance:
- Most commonly used distance.
- Sphere-shaped clusters.
- Corresponds to the geometric distance in multidimensional space.
- d(X, Y) = \sqrt{\sum_i (x_i - y_i)^2}
where gene X = (x_1, ..., x_n) and gene Y = (y_1, ..., y_n).

Euclidean vs correlation (I)
[Figure: genes X and Y plotted against condition 1 and condition 2, once under Euclidean distance and once under correlation; the two measures can rank the same pair of genes very differently.]

Distance between clusters (between-cluster dissimilarity measures)
- Single linkage: minimum distance between members of the two clusters.
- Complete linkage: maximum distance between members of the two clusters.
- Average (mean) linkage.
- Distance between centroids.

Clustering algorithms
Clustering algorithms come in two basic flavors: partitioning and hierarchical.
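
A small illustrative sketch of these distance metrics and between-cluster linkages, using scipy on made-up numbers (the method names "single", "complete", "average" are scipy's, chosen to mirror the linkages listed above).

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage

# Toy expression profiles for 4 genes measured on 3 arrays (made-up numbers).
expr = np.array([
    [0.0, 1.0, 2.0],
    [0.1, 1.1, 2.1],
    [5.0, 4.0, 3.0],
    [5.2, 4.1, 2.9],
])

# Pairwise dissimilarities under the two distance metrics discussed above.
manhattan = pdist(expr, metric="cityblock")   # sum of absolute differences
euclidean = pdist(expr, metric="euclidean")   # geometric distance

# Between-cluster dissimilarities correspond to the linkage method:
# "single" = minimum, "complete" = maximum, "average" = mean distance.
tree_single = linkage(euclidean, method="single")
tree_complete = linkage(euclidean, method="complete")
print(tree_single)
print(tree_complete)
```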

Partitioning methods
Partition the data into a pre-specified number k of mutually exclusive and exhaustive groups, then iteratively reallocate the observations to clusters until some criterion is met, e.g. minimizing within-cluster sums of squares.
Examples:
- k-means, self-organizing maps (SOM), PAM, etc.
- Fuzzy methods: need a stochastic model, e.g. Gaussian mixtures.
[Figure: the same data partitioned with K = 2 and with K = 4.]

Hierarchical methods
Hierarchical clustering methods produce a tree or dendrogram. They avoid specifying how many clusters are appropriate by providing a partition for each k, obtained by cutting the tree at some level. The tree can be built in two distinct ways:
- bottom-up: agglomerative clustering.
- top-down: divisive clustering.
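
A minimal sketch of a partitioning method (k-means) on simulated samples; scikit-learn is used purely for illustration and the data are invented.

```python
import numpy as np
from sklearn.cluster import KMeans

# Simulated samples (rows) in gene-expression space (columns); illustrative only.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (20, 50)), rng.normal(3, 1, (20, 50))])

# k-means: partition into a pre-specified number k of clusters by iteratively
# reallocating observations to minimize the within-cluster sums of squares.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)     # cluster assignment of each sample
print(km.inertia_)    # within-cluster sum of squares at convergence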

Agglomerative clustering and tree re-ordering
[Figure: five points (1-5) in two-dimensional space are merged step by step by agglomerative clustering: {1,5} and {3,4} join first, then {1,2,5}, and finally {1,2,3,4,5}; the resulting dendrogram can be re-ordered without changing the clustering.]

Partitioning vs. hierarchical
Partitioning:
- Advantages: optimal for certain criteria; genes are automatically assigned to clusters.
- Disadvantages: need an initial k; often require long computation times; all genes are forced into a cluster.
Hierarchical:
- Advantages: faster computation; visual.
- Disadvantages: unrelated genes are eventually joined; rigid, cannot correct later for erroneous decisions made earlier; hard to define clusters.

Clustering microarray data
Clustering leads to readily interpretable figures and can be helpful for identifying patterns in time or space. Examples:
- We can cluster cell samples (columns), e.g. (1) for identification (profiles): we might want to estimate the number of different neuron cell types in a set of samples, based on gene expression; (2) for the identification of new / unknown tumor classes using gene expression profiles.
- We can cluster genes (rows), e.g. using large numbers of yeast experiments, to identify groups of co-regulated genes.
- We can cluster genes (rows) to reduce redundancy (cf. variable selection) in predictive models.
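
A short sketch of bottom-up (agglomerative) clustering and of cutting the resulting tree at different levels; the five points echo the slide's illustration but their coordinates are invented, and scipy is an assumed tool.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Five made-up points in two-dimensional space.
pts = np.array([[0.0, 0.0], [1.0, 0.2], [4.0, 4.0], [4.2, 3.8], [0.2, 0.3]])

# Bottom-up (agglomerative) clustering with average linkage.
tree = linkage(pts, method="average", metric="euclidean")

# Cutting the tree at different levels yields a partition for each k.
for k in (2, 3, 4):
    print(k, fcluster(tree, t=k, criterion="maxclust"))
```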

Clustering both cell samples and genes
- Clustering cell samples: discovering sub-groups. [Figure taken from A. Alizadeh et al., "Distinct types of diffuse large B-cell lymphoma identified by gene expression profiling", Nature, February 2000.]
- Clustering genes: finding different patterns in the data, e.g. the yeast cell cycle (Cho et al., 1998); SOM with 828 genes. [Figure taken from Tamayo et al., PNAS, 1999.]

Some other issues in clustering
- Two-way clustering.
- Comparing trees.
- Estimating the number of clusters:
  - Silhouette width (used in PAM).
  - Gap statistic. Ref: Tibshirani, Walther and Hastie, "Estimating the number of clusters in a dataset via the gap statistic."
  - Clest. Ref: Dudoit & Fridlyand, "A prediction-based resampling method for estimating the number of clusters in a dataset."
- Note: selecting a subset of genes using their class association before clustering.
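
A minimal sketch of using the average silhouette width to choose the number of clusters, on simulated data with scikit-learn (the lecture mentions silhouette width in the context of PAM; k-means is substituted here purely for illustration).

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Simulated data with a 3-group structure (unknown to the algorithm).
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(mu, 0.5, (25, 10)) for mu in (0, 3, 6)])

# Average silhouette width for each candidate k; the k with the largest
# value is a common heuristic choice.
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, round(silhouette_score(X, labels), 3))
```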

Summary: which clustering method should I use?
- What is the biological question?
- Do I have a preconceived notion of how many clusters there should be?
- Can a gene be in multiple clusters?
- Hard or soft boundaries between clusters?
Keep in mind:
- Clustering cannot NOT work: every clustering method will return clusters.
- Clustering helps to group / order information and is a visualization tool for learning about the data. However, clustering results do not provide biological proof.

Comparison of clustering and single-gene approaches for microarray data analysis
Cluster analysis:
1) Usually outside the normal framework of statistical inference.
2) Less appropriate when only a few genes are likely to change.
3) Needs lots of experiments.
Single-gene approaches:
1) May be too noisy in general to show much.
2) May not reveal coordinated effects of positively correlated genes.
3) Harder to relate to pathways.

Basic principles of discrimination
Each object is associated with a class label (or response) Y ∈ {1, 2, ..., K} and a feature vector (vector of predictor variables) of G measurements: X = (X_1, ..., X_G). Aim: predict Y from X.
[Figure: objects with predefined classes {1, 2, ..., K}; e.g. X = feature vector {colour, shape}, Y = class label = 2; given a new object with X = {red, square}, the classification rule must predict Y.]
Topics: classification procedures, feature selection, performance assessment, comparison study.

Discrimination / classification workflow
[Diagram: a learning set (data with known classes) is fed to a classification technique, producing a classification rule; the rule is then used to predict class assignments for data with unknown classes.]

Example 1: breast cancer prognosis
Predefined classes: clinical outcome, bad prognosis (recurrence < 5 yrs) vs good prognosis (recurrence > 5 yrs). Objects: arrays; feature vectors: gene expression. Given a new array, the classification rule predicts its class: good prognosis (metastasis-free > 5 yrs)?
Reference: L. van 't Veer et al. (2002). Gene expression profiling predicts clinical outcome of breast cancer. Nature, January 2002.

Example 2: leukemia tumor type
Predefined classes: tumor type (B-ALL, T-ALL, AML). Objects: arrays; feature vectors: gene expression. Given a new array, the classification rule predicts the tumor type (e.g. T-ALL?).
Reference: Golub et al. (1999). Molecular classification of cancer: class discovery and class prediction by gene expression monitoring. Science 286(5439): 531-537.

Classification rule
Components: classification procedure, feature selection, parameters (pre-determined or estimable), distance measure, aggregation methods. Performance is assessed with, e.g., cross-validation.
One can think of the classification rule as a black box; some methods provide more insight into the box than others. Performance assessment needs to be carried out for every classification rule.

Maximum likelihood discriminant rule
A maximum likelihood estimator (MLE) chooses the parameter value that makes the observed data most probable. For known class-conditional densities p_k(X), the maximum likelihood (ML) discriminant rule predicts the class of an observation X by
  C(X) = \arg\max_k \, p_k(X)

Gaussian ML discriminant rules
For multivariate Gaussian (normal) class densities X | Y = k ~ N(\mu_k, \Sigma_k), the ML classifier is
  C(X) = \arg\min_k \left\{ (X - \mu_k)' \Sigma_k^{-1} (X - \mu_k) + \log |\Sigma_k| \right\}
In general this is a quadratic rule (quadratic discriminant analysis, or QDA). In practice, the population mean vectors \mu_k and covariance matrices \Sigma_k are estimated by the corresponding sample quantities.

ML discriminant rules: special cases
- [DLDA] Diagonal linear discriminant analysis: class densities have the same diagonal covariance matrix \Sigma = diag(\sigma_1^2, ..., \sigma_p^2).
- [DQDA] Diagonal quadratic discriminant analysis: class densities have different diagonal covariance matrices \Sigma_k = diag(\sigma_{1k}^2, ..., \sigma_{pk}^2).
Note: the weighted gene voting of Golub et al. (1999) is a minor variant of DLDA for two classes (with an incorrect variance calculation).

Nearest neighbor classification
Based on a measure of distance between observations (e.g. Euclidean distance or one minus correlation). The k-nearest neighbor rule (Fix and Hodges, 1951) classifies an observation X as follows:
- find the k observations in the learning set closest to X;
- predict the class of X by majority vote, i.e., choose the class that is most common among those k observations.
The number of neighbors k can be chosen by cross-validation (more on this later).
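
A minimal sketch of the k-nearest neighbor rule with a one-minus-correlation distance, choosing k by cross-validation; the data are simulated and scikit-learn is an assumed tool, not the lecture's.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Simulated two-class expression data: 40 samples x 100 genes (illustrative only).
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (20, 100)), rng.normal(1, 1, (20, 100))])
y = np.array([0] * 20 + [1] * 20)

# k-NN: classify by majority vote among the k closest learning-set observations;
# "correlation" gives the one-minus-correlation distance. Compare candidate k by CV.
for k in (1, 3, 5):
    knn = KNeighborsClassifier(n_neighbors=k, metric="correlation", algorithm="brute")
    acc = cross_val_score(knn, X, y, cv=5).mean()
    print(k, round(acc, 3))
```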

Nearest neighbor rule
[Figure: illustration of the nearest neighbor rule.]

Classification trees
Partition the feature space into a set of rectangles, then fit a simple model in each one. Binary tree-structured classifiers are constructed by repeated splits of subsets (nodes) of the measurement space X into two descendant subsets, starting with X itself. Each terminal subset is assigned a class label; the resulting partition of X corresponds to the classifier.
[Figure: a small tree that first splits on Gene 1 (M_i1 < -0.67) and then on Gene 2 (M_i2 > 0.18), assigning classes 0, 1 and 2 to the terminal nodes; equivalently, a partition of the (Gene 1, Gene 2) plane into rectangles.]

Three aspects of tree construction
- Split selection rule: for example, at each node, choose the split maximizing the decrease in impurity (e.g. Gini index, entropy, misclassification error).
- Split-stopping: for example, grow a large tree, prune to obtain a sequence of subtrees, then use cross-validation to identify the subtree with the lowest misclassification rate.
- Class assignment: for example, for each terminal node, choose the class minimizing the resubstitution estimate of the misclassification probability, given that a case falls into this node.
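
A small sketch of growing a tree with impurity-based splits and choosing how far to prune by cross-validation; simulated data, scikit-learn's cost-complexity pruning parameter standing in for the pruning step described above.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import GridSearchCV

# Simulated three-class data (illustrative only).
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(m, 1, (30, 20)) for m in (0, 2, 4)])
y = np.repeat([0, 1, 2], 30)

# Grow a tree with Gini-impurity splits, then cross-validate over the
# cost-complexity pruning parameter to pick the subtree with lowest error.
grid = GridSearchCV(
    DecisionTreeClassifier(criterion="gini", random_state=0),
    param_grid={"ccp_alpha": [0.0, 0.005, 0.01, 0.02, 0.05]},
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```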

Other classifiers
Classification with SVMs. Other classifiers include neural networks, logistic regression, projection pursuit, and Bayesian belief networks.

Why select features? Explicit feature selection
- One-gene-at-a-time approaches. Genes are ranked based on the value of a univariate test statistic, such as a t- or F-statistic or their non-parametric variants (Wilcoxon / Kruskal-Wallis), or a p-value. Possible meta-parameters include the number of genes G or a p-value cutoff; a formal choice of these parameters may be achieved by cross-validation or bootstrap procedures. (See the code sketch below.)
- Multivariate approaches. More refined feature selection procedures consider the joint distribution of the expression measures, in order to detect genes with weak main effects but possibly strong interactions. Examples: Bo & Jonassen (2002), subset selection procedures for screening gene pairs to be used in classification; Breiman (1999), ranking genes according to an importance statistic defined in terms of prediction accuracy.
[Figure: correlation plots for the 3-class leukemia data with no feature selection, top-100 feature selection, and selection based on variance; color scale from -1 to +1.]
Note that tree building itself does not involve explicit feature selection.
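
A minimal sketch of one-gene-at-a-time ranking with a two-sample t-statistic; the expression matrix is simulated and the choice of 20 top genes is an arbitrary illustration of the meta-parameter discussed above.

```python
import numpy as np
from scipy import stats

# Simulated two-class expression matrix: 50 samples x 500 genes (illustrative only).
rng = np.random.default_rng(5)
X = rng.normal(size=(50, 500))
y = np.array([0] * 25 + [1] * 25)
X[y == 1, :10] += 1.5          # 10 genes with a real class difference

# Rank genes by the absolute value of a two-sample t-statistic; the meta-parameter
# here is the number of top-ranked genes to keep.
t, p = stats.ttest_ind(X[y == 0], X[y == 1], axis=0)
top = np.argsort(np.abs(t))[::-1][:20]
print(top)                      # indices of the 20 top-ranked genes
```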

Implicit feature selection
Feature selection may also be performed implicitly by the classification rule itself. In classification trees, features are selected at each step based on the reduction in impurity, and the number of features used (or the size of the tree) is determined by pruning the tree using cross-validation. Thus, feature selection is an inherent part of tree-building, and pruning deals with over-fitting. Shrinkage methods and adaptive distance functions may be used for LDA and kNN.

Performance assessment
Any classification rule needs to be evaluated for its performance on future samples. It is almost never the case in microarray studies that a large independent population-based collection of samples is available at the initial classifier-building phase; one needs to estimate future performance based on what is available, often the same set that is used to build the classifier. Performance of the classifier can be assessed with:
- Cross-validation.
- A test set.
- Independent testing on a future dataset.

Performance assessment (I)
[Diagram: training set → classifier → resubstitution estimation; training set plus independent test set → classifier → test set estimation.]
Resubstitution estimation: error rate on the learning set.
- Problem: downward bias.
Test set estimation: (1) divide the learning set into two sub-sets, L and T, build the classifier on L and compute the error rate on T; or (2) build the classifier on the training set (L) and compute the error rate on an independent test set (T).
- L and T must be independent and identically distributed (i.i.d.).
- Problem: reduced effective sample size. (A minimal sketch follows.)
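
A small sketch contrasting resubstitution and test-set error estimates; the data are pure noise, so an honest error rate is near 50%, while the resubstitution estimate is optimistic (exactly 0 for 1-NN). Everything here is simulated and illustrative.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Pure-noise data: 40 samples x 1000 genes with random labels (illustrative only).
rng = np.random.default_rng(6)
X = rng.normal(size=(40, 1000))
y = rng.integers(0, 2, size=40)

# Split the learning set into L (train) and T (test).
X_L, X_T, y_L, y_T = train_test_split(X, y, test_size=0.5, random_state=0)

clf = KNeighborsClassifier(n_neighbors=1).fit(X_L, y_L)
print("resubstitution error:", 1 - clf.score(X_L, y_L))        # 0 for 1-NN: downward bias
print("test set error:", round(1 - clf.score(X_T, y_T), 2))    # near 0.5 for pure noise
```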

Performance assessment (II)
[Diagram: learning set → cross-validation → classifier, alongside the resubstitution and test-set schemes from the previous slide.]
V-fold cross-validation (CV) estimation: cases in the learning set are randomly divided into V subsets of (nearly) equal size. Classifiers are built leaving one subset out; the error rate is computed on the left-out subset, and the V error rates are averaged.
- Bias-variance tradeoff: smaller V can give larger bias but smaller variance.
- Computationally intensive.
Leave-one-out cross-validation (LOOCV) is the special case V = n. It works well for stable classifiers such as k-NN.

Performance assessment (III)
It is common practice to do feature selection using the whole learning set, then use CV only for model building and classification. However, usually the features are unknown and the intended inference includes feature selection; then the CV estimates computed as above tend to be downward biased. Features (variables) should be selected only from the learning set used to build the model (and not from the entire set). (See the code sketch below.)

Aggregating classifiers
Another component of a classification rule: aggregating classifiers. [Diagram: a training set X_1, X_2, ..., X_100 is resampled 500 times; a classifier is built on each resample (Classifier 1, ..., Classifier 500) and combined into an aggregate classifier.] Examples: bagging, boosting, random forests.
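
A minimal sketch of why feature selection must be repeated inside the cross-validation loop; the data are pure noise, so an honest accuracy estimate should be near 0.5, whereas selecting genes on the full data first gives an optimistically biased estimate. Simulated data, scikit-learn as an assumed tool.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Pure-noise data: no real signal, so an honest accuracy estimate is ~0.5.
rng = np.random.default_rng(7)
X = rng.normal(size=(40, 2000))
y = rng.integers(0, 2, size=40)

# WRONG: select genes on the entire data set, then cross-validate only the classifier.
genes = SelectKBest(f_classif, k=20).fit(X, y).get_support()
biased = cross_val_score(KNeighborsClassifier(3), X[:, genes], y, cv=5).mean()

# RIGHT: feature selection is repeated inside each CV fold (learning set only).
pipe = make_pipeline(SelectKBest(f_classif, k=20), KNeighborsClassifier(3))
honest = cross_val_score(pipe, X, y, cv=5).mean()

print("biased accuracy:", round(biased, 2))   # typically well above 0.5
print("honest accuracy:", round(honest, 2))   # typically near 0.5
```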

Aggregating classifiers: bagging
[Diagram: a training set of arrays X_1, X_2, ..., X_100 is bootstrap-resampled 500 times (X*_1, X*_2, ..., X*_100); a tree is built on each resample (Tree 1, ..., Tree 500); a test sample is passed to every tree and the trees vote, e.g. 90% Class 1, 10% Class 2.]

Comparison study
Leukemia data, Golub et al. (1999):
- n = 72 samples,
- G = 3,571 genes,
- 3 classes (B-cell ALL, T-cell ALL, AML).
Reference: S. Dudoit, J. Fridlyand, and T. P. Speed (2002). Comparison of discrimination methods for the classification of tumors using gene expression data. Journal of the American Statistical Association, Vol. 97, No. 457, pp. 77-87.

Results
[Figure: test set error rates for the 3-class leukemia data over 150 learning-set/test-set runs.]
In the main comparison, NN and DLDA had the smallest error rates, and aggregation improved the performance of CART classifiers. For the leukemia datasets, increasing the number of genes to G = 200 did not greatly affect the performance of the various classifiers.
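
A minimal sketch of aggregated tree classifiers (bagging and a random forest); the data are simulated stand-ins, not the Golub leukemia data (which has n = 72 and G = 3,571), and scikit-learn is used only for illustration.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Simulated 3-class expression data (made-up sizes and effect sizes).
rng = np.random.default_rng(8)
X = np.vstack([rng.normal(m, 1, (24, 200)) for m in (0.0, 0.7, 1.4)])
y = np.repeat([0, 1, 2], 24)

# Bagging: bootstrap-resample the training set, grow one tree per resample,
# and let the trees vote. A random forest adds random feature sub-sampling.
bag = BaggingClassifier(DecisionTreeClassifier(), n_estimators=500, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0)

print("bagged trees:", round(cross_val_score(bag, X, y, cv=5).mean(), 2))
print("random forest:", round(cross_val_score(rf, X, y, cv=5).mean(), 2))
```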

Comparison study discussion (I)
- Diagonal LDA: ignoring correlation between genes helped here. Unlike classification trees and nearest neighbors, DLDA is unable to take gene interactions into account.
- Classification trees are capable of handling and revealing interactions between variables. In addition, aggregated tree classifiers have useful by-products: prediction votes and variable importance statistics.
- Although nearest neighbors are simple and intuitive classifiers, their main limitation is that they give very little insight into the mechanisms underlying the class distinctions.

Summary (I)
- Bias-variance trade-off. Simple classifiers do well on small datasets. As the number of samples increases, we expect classifiers capable of considering higher-order interactions (and aggregated classifiers) to have an edge.
- Cross-validation. It is of utmost importance to cross-validate for every parameter that has been chosen based on the data, including meta-parameters: what and how many features, how many neighbors, pooled or unpooled variance, and the classifier itself. If this is not done, it is possible to wrongly declare discrimination power when there is none.

Summary (II)
- Generalization error rate estimation. It is necessary to keep the sampling scheme in mind. Thousands of independent samples from a variety of sources are needed to address the true performance of a classifier; we are not at that point yet with microarray studies. The van 't Veer et al. (2002) study is probably the only study to date with ~300 test samples.

Case study: breast cancer prognosis
Learning set: 295 samples (bad vs good prognosis) selected from the Netherlands Cancer Institute tissue bank (1984-1995).
Classification rule: feature selection by correlation with class labels, very similar to a t-test; cross-validation used to select 70 genes.
Results: the gene expression profile is a more powerful predictor than standard systems based on clinical and histologic criteria.
Agendia (formed by researchers from the Netherlands Cancer Institute) planned to start clinical trials in October 2003: (1) 3,000 subjects [Health Council of the Netherlands]; (2) 5,000 subjects, New York-based Avon Foundation. Custom arrays are made by Agilent, including the 70 genes + 1,000 controls.

Case study references
- Reference 1 (retrospective study): L. van 't Veer et al. Gene expression profiling predicts clinical outcome of breast cancer. Nature, January 2002.
- Reference 2 (retrospective study): M. van de Vijver et al. A gene expression signature as a predictor of survival in breast cancer. The New England Journal of Medicine, December 2002.
- Reference 3 (prospective trials, August 2003): clinical trials, http://www.agendia.com/

Acknowledgements
Clustering: Agnes Paquet, David Erle, Andrea Barczac (UCSF Sandler Genomics Core Facility).
Discrimination: Jane Fridlyand, Mark Segal, Terry Speed, Sandrine Dudoit.