Clustering Sequences with Hidden Markov Models

Padhraic Smyth
Information and Computer Science
University of California, Irvine, CA 92697-3425
smyth@ics.uci.edu

Abstract

This paper discusses a probabilistic model-based approach to clustering sequences, using hidden Markov models (HMMs). The problem can be framed as a generalization of the standard mixture model approach to clustering in feature space. Two primary issues are addressed. First, a novel parameter initialization procedure is proposed, and second, the more difficult problem of determining the number of clusters, K, from the data is investigated. Experimental results indicate that the proposed techniques are useful for revealing hidden cluster structure in data sets of sequences.

1 Introduction

Consider a data set D consisting of N sequences, D = {S_1, ..., S_N}, where S_i = (x^i_1, ..., x^i_{L_i}) is a sequence of length L_i composed of potentially multivariate feature vectors x. The problem addressed in this paper is the discovery from data of a natural grouping of the sequences into K clusters. This is analogous to clustering in multivariate feature space, which is normally handled by methods such as k-means and Gaussian mixtures. Here, however, one is trying to cluster the sequences S rather than the feature vectors x. As an example, Figure 1 shows four sequences which were generated by two different models (hidden Markov models in this case). The first and third came from a model with "slower" dynamics than the second and fourth (details will be provided later). The sequence clustering problem consists of being given sample sequences such as those in Figure 1 and inferring from the data what the underlying clusters are. This is non-trivial since the sequences can be of different lengths and it is not clear what a meaningful distance metric is for sequence comparison.

[Figure 1: Which sequences came from which hidden Markov model? (Four sample sequences, plotted against position in sequence.)]

The use of hidden Markov models for clustering sequences appears to have first been mentioned in Juang and Rabiner (1985) and subsequently used in the context of discovering subfamilies of protein sequences in Krogh et al. (1994). This present paper contains two new contributions in this context: a cluster-based method for initializing the model parameters and a novel method based on cross-validated likelihood for determining automatically how many clusters to fit to the data.

A natural probabilistic model for this problem is that of a finite mixture model:

    f_K(S) = Σ_{j=1}^{K} f_j(S | θ_j) p_j                                (1)

where S denotes a sequence, p_j is the weight of the jth model, and f_j(S | θ_j) is the density function for the sequence data S given the component model f_j with parameters θ_j. Here we will assume that the f_j's are HMMs: thus, the θ_j's are the transition matrices, observation density parameters, and initial state probabilities, all for the jth component. f_j(S | θ_j) can be computed via the forward part of the forward-backward procedure. More generally, the component models could be any probabilistic model for S, such as linear autoregressive models, graphical models, non-linear networks with probabilistic semantics, and so forth.
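
As a concrete illustration of Equation 1, the mixture log-likelihood of a single sequence can be computed from K fitted component HMMs as sketched below. This is a minimal sketch, not part of the paper: it assumes the hmmlearn Python library for the per-component forward-procedure scores, and it carries out the sum over components in log space to avoid numerical underflow on long sequences.

    import numpy as np
    from scipy.special import logsumexp

    def mixture_log_likelihood(sequence, component_hmms, weights):
        """Return log f_K(S) = log sum_j f_j(S | theta_j) p_j for one sequence.

        sequence:       array of shape (L, d), a single (possibly multivariate) sequence
        component_hmms: list of K fitted HMMs (e.g. hmmlearn GaussianHMM objects, an
                        assumed library) whose score() returns log f_j(S | theta_j)
        weights:        array of K mixture weights p_j, summing to one
        """
        # score() runs the forward pass of the forward-backward procedure
        log_f = np.array([h.score(sequence) for h in component_hmms])
        return logsumexp(log_f + np.log(weights))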

It is important to note that the motivation for this problem comes from the goal of building a descriptive model for the data, rather than prediction per se. For the prediction problem there is a clearly defined metric for performance, namely average prediction error on out-of-sample data (cf. Rabiner et al. (1989) in a speech context with clusters of HMMs, and Zeevi, Meir, and Adler (1997) in a general time-series context). In contrast, for descriptive modeling it is not always clear what the appropriate metric for evaluation is, particularly when K, the number of clusters, is unknown. In this paper a density estimation viewpoint is taken and the likelihood of out-of-sample data is used as the measure of the quality of a particular model.

2 An Algorithm for Clustering Sequences into K Clusters

Assume first that K, the number of clusters, is known. Our model is that of a mixture of HMMs as in Equation 1. We can immediately observe that this mixture can itself be viewed as a single "composite" HMM whose transition matrix A is block-diagonal; e.g., if the mixture model consists of two components with transition matrices A_1 and A_2, we can represent the overall mixture model as a single HMM (in effect, a hierarchical mixture) with transition matrix

    A = [ A_1   0  ]
        [  0   A_2 ]                                                     (2)

where the initial state probabilities are chosen appropriately to reflect the relative weights of the mixture components (the p_k in Equation 1). Intuitively, a sequence is generated from this model by initially choosing at random either the "upper" matrix A_1 (with probability p_1) or the "lower" matrix A_2 (with probability 1 - p_1) and then generating data according to the appropriate A_i. There is no "crossover" in this mixture model: data are assumed to come from one component or the other.

Given this composite HMM, a natural approach is to try to learn the parameters of the model using standard HMM estimation techniques, i.e., some form of initialization followed by Baum-Welch to maximize the likelihood. Note that unlike predictive modeling (where likelihood is not necessarily an appropriate metric for evaluating model quality), likelihood maximization is exactly what we want to do here, since we seek a generative (descriptive) model for the data. We will assume throughout that the number of states per component is known a priori, i.e., that we are looking for K HMM components, each of which has m states, with m known. An obvious extension is to address the problem of learning K and m simultaneously, but this is not dealt with here.

2.1 Initialization using Clustering in "Log-Likelihood Space"

Since the EM algorithm is effectively hill-climbing the likelihood surface, the quality of the final solution can depend critically on the initial conditions. Thus, using as much prior information as possible about the problem to seed the initialization is potentially worthwhile. This motivates the following scheme for initializing the A matrix of the composite HMM:

1. Fit N m-state HMMs, one to each individual sequence S_i, 1 ≤ i ≤ N. These HMMs can be initialized in a "default" manner: set the transition matrices uniformly and set the means and covariances using the k-means algorithm, where here k = m, not to be confused with K, the number of HMM components. For discrete observation alphabets, modify accordingly.

2. For each fitted model M_i, evaluate the log-likelihood of each of the N sequences given model M_i, i.e., calculate L_ij = log L(S_j | M_i), 1 ≤ i, j ≤ N.

3. Use the log-likelihood distance matrix to cluster the sequences into K groups (details of the clustering are discussed below).

4. Having pooled the sequences into K groups, fit K HMMs, one to each group, using the default initialization described above. From the K HMMs we get K sets of parameters: initialize the composite HMM in the obvious way, i.e., the m × m block-diagonal component A_j of A (where A is mK × mK) is set to the estimated transition matrix from the jth group, and the means and covariances of the jth set of states are set accordingly. Initialize the p_j in Equation 1 to N_j / N, where N_j is the number of sequences which belong to cluster j.

After this initialization step is complete, learning proceeds directly on the composite HMM (with matrix A) in the usual Baum-Welch fashion using all of the sequences.

The intuition behind this initialization procedure is as follows. The hypothesis is that the data are being generated by K models. Thus, if we fit models to each individual sequence, we will get noisier estimates of the model parameters (than if we used all of the sequences from that cluster), but the parameters should be clustered in some manner into K groups about their true values (assuming the model is correct). Clustering directly in parameter space would be inappropriate (how does one define distance?); however, the log-likelihoods are a natural way to define pairwise distances.

Note that step 1 above requires training N models individually and step 2 requires the evaluation of N^2 distances. For large N this may be impractical. Suitable modifications which train only on a small random sample of the N sequences and randomly sample the distance matrix could help reduce the computational burden, but this is not pursued here.

A variety of possible clustering methods can be used in step 3 above. The "symmetrized distance" L_ij = (1/2)(L(S_i | M_j) + L(S_j | M_i)) can be shown to be an appropriate measure of dissimilarity between models M_i and M_j (Juang and Rabiner, 1985). For the results described in this paper, hierarchical clustering was used to generate K clusters from the symmetrized distance matrix. The "furthest-neighbor" merging heuristic was used to encourage compact clusters and worked well empirically, although there is no particular reason to use only this method. We will refer to the above clustering-based initialization followed by Baum-Welch training on the composite model as the "HMM-Clustering" algorithm in the rest of the paper.
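
The initialization scheme can be rendered compactly in code. The sketch below is one possible implementation, not the one used in the paper: it assumes Gaussian-emission HMMs fitted with the hmmlearn library, uses scipy's complete-linkage ("furthest-neighbor") hierarchical clustering, and negates the symmetrized log-likelihood so that it behaves as a dissimilarity for the clustering routine.

    import numpy as np
    from hmmlearn.hmm import GaussianHMM                    # assumed library
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform
    from scipy.linalg import block_diag

    def hmm_clustering_init(sequences, K, m):
        """Steps 1-4 of the initialization; sequences is a list of (L_i, d) arrays."""
        N = len(sequences)

        # Step 1: fit one m-state HMM per sequence (hmmlearn initializes the means
        # with k-means and the transition matrix uniformly by default).
        per_seq_models = [GaussianHMM(n_components=m, covariance_type="diag",
                                      n_iter=20).fit(S) for S in sequences]

        # Step 2: L[i, j] = log L(S_j | M_i), computed via the forward procedure.
        L = np.array([[model.score(S) for S in sequences] for model in per_seq_models])

        # Step 3: symmetrize, negate so that high mutual likelihood means small
        # distance, and cut a complete-linkage dendrogram into K groups.
        D = -0.5 * (L + L.T)
        np.fill_diagonal(D, 0.0)
        labels = fcluster(linkage(squareform(D, checks=False), method="complete"),
                          t=K, criterion="maxclust")

        # Step 4: refit one HMM per group, then assemble the composite block-diagonal
        # model of Equation 2, weighting initial state probabilities by p_j = N_j / N.
        A_blocks, start_blocks, weights = [], [], []
        for k in range(1, K + 1):
            group = [S for S, c in zip(sequences, labels) if c == k]
            hmm_k = GaussianHMM(n_components=m, covariance_type="diag", n_iter=20)
            hmm_k.fit(np.vstack(group), lengths=[len(S) for S in group])
            A_blocks.append(hmm_k.transmat_)
            start_blocks.append(hmm_k.startprob_)
            weights.append(len(group) / N)

        A = block_diag(*A_blocks)                            # mK x mK
        startprob = np.concatenate([w * s for w, s in zip(weights, start_blocks)])
        return A, startprob, labels

The composite model (A, the weighted initial state probabilities, and the per-group means and covariances) would then be refined by running Baum-Welch on all of the sequences, as described above.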

2.2 Experimental Results

Consider a deceptively simple "toy" problem. One-dimensional feature data are generated from a 2-component HMM mixture (K = 2), each component having 2 states. We have

    A_1 = [ 0.6  0.4 ]        A_2 = [ 0.4  0.6 ]
          [ 0.4  0.6 ]              [ 0.6  0.4 ]

and the observable feature data obey a Gaussian density in each state, with σ_1 = σ_2 = 1 for each state in each component and μ_1 = 0, μ_2 = 3 for the respective means of the states of each component. Four sample sequences are shown in Figure 1. The top, and third from top, sequences are from the "slower" component A_1 (which is more likely to stay in any given state than to switch). In total the training data contain 20 sample sequences from each component, each of length 200. The problem is non-trivial both because the data have exactly the same marginal statistics if the temporal sequence information is removed and because the Markov dynamics (as governed by A_1 and A_2) are relatively similar for each component, making identification somewhat difficult.

The HMM-Clustering algorithm was applied to these sequences. The symmetrized likelihood distance matrix is shown as a grey-scale image in Figure 2. The axes have been ordered so that the sequences from the same clusters are adjacent. The difference in distances between the two clusters is apparent, and the hierarchical clustering algorithm (with K = 2) easily separates the two groups. This initial clustering, followed by training the two component models separately on the sets of sequences assigned to each cluster, yielded estimated transition matrices

    A_1 ≈ [ 0.58   0.42  ]        A_2 ≈ [ 0.392  0.608 ]
          [ 0.402  0.598 ]              [ 0.611  0.389 ]

with estimated state means (2.892, 0.04) and (2.911, 0.138), and estimated standard deviations (1.353, 1.219) and (1.239, 1.339), for the two components respectively. Subsequent training of the composite model on all of the sequences produced more refined parameter estimates, although the basic cluster structure of the model remained the same (i.e., the initial clustering was robust).
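
For completeness, the toy data set described at the start of this section can be regenerated along the following lines. This is a sketch only: it assumes hmmlearn's sampling interface (not something the paper specifies), the uniform initial state distribution is an assumption, and the transition matrices, state means, unit variances, and sequence counts are those given above.

    import numpy as np
    from hmmlearn.hmm import GaussianHMM   # assumed library

    def make_toy_sequences(n_per_component=20, length=200, seed=0):
        """Sample 1-D sequences from the two 2-state HMMs of the toy problem."""
        rng = np.random.RandomState(seed)
        A1 = np.array([[0.6, 0.4], [0.4, 0.6]])    # "slower" dynamics
        A2 = np.array([[0.4, 0.6], [0.6, 0.4]])
        sequences, labels = [], []
        for label, A in enumerate([A1, A2]):
            model = GaussianHMM(n_components=2, covariance_type="diag")
            model.startprob_ = np.array([0.5, 0.5])    # uniform start (an assumption)
            model.transmat_ = A
            model.means_ = np.array([[0.0], [3.0]])    # state means 0 and 3
            model.covars_ = np.array([[1.0], [1.0]])   # unit variance in each state
            for _ in range(n_per_component):
                X, _states = model.sample(length, random_state=rng)
                sequences.append(X)
                labels.append(label)
        return sequences, labels

Feeding these sequences to the initialization sketch above (with K = 2 and m = 2) should reproduce the kind of two-block structure visible in the symmetrized distance matrix of Figure 2.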

[Figure 2: Symmetrized log-likelihood distance matrix.]

For comparative purposes, two alternative initialization procedures were used to initialize the training of the composite HMM. The "unstructured" method uniformly initializes the A matrix without any knowledge of the fact that the off-block-diagonal terms are zero (this is the "standard" way of fitting an HMM). The "block-uniform" method uniformly initializes the K block-diagonal matrices within A and sets the off-block-diagonal terms to zero. Random initialization gave poorer results overall compared to uniform initialization.

Table 1: Differences in log-likelihood for different initialization methods.

    Initialization      Maximum           Mean              Standard Deviation
    Method              Log-Likelihood    Log-Likelihood    of Log-Likelihoods
    Unstructured          7.6               0.0               1.3
    Block-Uniform        44.8               8.1              28.7
    HMM-Clustering       55.1              50.4               0.9

The three alternatives were each run 20 times on the data above, where for each run the seeds for the k-means component of the initialization were changed. The maximum, mean, and standard deviation of the log-likelihoods on test data are reported in Table 1 (the log-likelihoods were offset so that the mean unstructured log-likelihood is zero). The unstructured approach is substantially inferior to the others on this problem: this is not surprising, since it is not given the block-diagonal structure of the true model. The Block-Uniform method is closer in performance to HMM-Clustering but is still inferior. In particular, its log-likelihood is consistently lower than that of the HMM-Clustering solution and has much greater variability across different initial seeds. The same qualitative behavior was observed across a variety of simulated data sets (results are not presented here due to lack of space).
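
For reference, the two baseline initializations of the composite transition matrix can be written down directly; the short sketch below constructs them (a uniform row-stochastic matrix in the unstructured case, and uniform rows within each m × m block with zero off-block-diagonal terms in the block-uniform case). The emission parameters are seeded with the same k-means default in all three schemes.

    import numpy as np

    def unstructured_init(m, K):
        """Uniform row-stochastic initialization over all mK states (no block structure)."""
        n = m * K
        return np.full((n, n), 1.0 / n)

    def block_uniform_init(m, K):
        """Uniform rows inside each m x m block; off-block-diagonal terms fixed at zero."""
        A = np.zeros((m * K, m * K))
        for k in range(K):
            A[k * m:(k + 1) * m, k * m:(k + 1) * m] = 1.0 / m
        return A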

3 Learning K, the Number of Sequence Components

3.1 Background

Above we have assumed that K, the number of clusters, is known. The problem of learning the "best" value for K in a mixture model is a difficult one in practice, even for the simpler (non-dynamic) case of Gaussian mixtures. There has been considerable prior work on this problem. Penalized likelihood approaches are popular, where the log-likelihood on the training data is penalized by the subtraction of a complexity term. A more general approach is the full Bayesian solution, in which the posterior probability of each value of K is calculated given the data, priors on the mixture parameters, and priors on K itself. A potential difficulty here is the computational complexity of integrating over the parameter space to get the posterior probabilities on K; various analytic and sampling approximations are used in practice. In theory, the full Bayesian approach is fully optimal and probably the most useful. However, in practice the ideal Bayesian solution must be approximated, and it is not always obvious how the approximation affects the quality of the final answer. Thus, there is room to explore alternative methods for determining K.

3.2 A Monte-Carlo Cross-Validation Approach

Imagine that we had a large test data set D_test which is not used in fitting any of the models. Let L_K(D_test) be the log-likelihood, where the model with K components is fit to the training data D but the likelihood is evaluated on D_test. We can view this likelihood as a function of the "parameter" K, keeping all other parameters and D fixed. Intuitively, this "test likelihood" should be a much more useful estimator than the training-data likelihood for comparing mixture models with different numbers of components. In fact, the test likelihood can be shown to be an unbiased estimator of the Kullback-Leibler distance between the true (but unknown) density and the model. Thus, maximizing out-of-sample likelihood over K is a reasonable model selection strategy.

In practice, one does not usually want to reserve a large fraction of one's data for test purposes; thus, a cross-validated estimate of the log-likelihood can be used instead. In Smyth (1996) it was found that for standard multivariate Gaussian mixture modeling, the standard v-fold cross-validation techniques (with, say, v = 10) performed poorly in terms of selecting the correct model on simulated data. Instead, Monte-Carlo cross-validation (Shao, 1993) was found to be much more stable: a fraction of the data is set aside for testing and the remainder used for training, and this procedure is repeated M times, where the partitions are chosen randomly on each run (i.e., they need not be disjoint). In choosing the test fraction one must trade off the variability of the performance estimate on the test set against the variability in model fitting on the training set. In general, as the total amount of data increases relative to the model complexity, the optimal test fraction becomes larger. For the mixture clustering problem a test fraction of 0.5 was found empirically to work well (Smyth, 1996) and is used in the results reported here.
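
The Monte-Carlo cross-validation loop itself is straightforward to sketch. In the code below, fit_mixture(train_sequences, K) stands for any routine that fits a K-component mixture of HMMs to the training sequences (for example, the HMM-Clustering algorithm of Section 2) and returns a model exposing a total_log_likelihood(test_sequences) method; both names are hypothetical placeholders rather than an existing API. The averaged test log-likelihoods are converted into an approximate posterior over K by Bayes' rule, assuming the candidate values of K are equally likely a priori.

    import numpy as np
    from scipy.special import logsumexp

    def mc_cv_posterior_over_K(sequences, fit_mixture, K_values=range(1, 7),
                               n_splits=20, test_fraction=0.5, seed=0):
        """Monte-Carlo cross-validated log-likelihood for each K, turned into an
        approximate posterior p(K | D) under a uniform prior on K."""
        rng = np.random.RandomState(seed)
        N = len(sequences)
        n_test = int(round(test_fraction * N))
        mean_test_ll = []
        for K in K_values:
            run_lls = []
            for _ in range(n_splits):
                # Random train/test partition; partitions across runs need not be disjoint.
                perm = rng.permutation(N)
                test_idx, train_idx = perm[:n_test], perm[n_test:]
                model = fit_mixture([sequences[i] for i in train_idx], K)   # hypothetical routine
                run_lls.append(model.total_log_likelihood([sequences[i] for i in test_idx]))
            mean_test_ll.append(np.mean(run_lls))
        mean_test_ll = np.array(mean_test_ll)
        # Uniform prior on K: p(K | D) is proportional to the cross-validated likelihood.
        posterior = np.exp(mean_test_ll - logsumexp(mean_test_ll))
        return dict(zip(K_values, posterior))
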
3.3 Experimental Results

The same data set as described earlier was used, where now K is not known a priori. The 40 sequences were randomly partitioned 20 times into cross-validation training and test sets. For each train/test partition the value of K was varied between 1 and 6; for each value of K the HMM-Clustering algorithm was fit to the training data and the likelihood was evaluated on the test data. The mean cross-validated likelihood was computed as the average over the 20 runs. Assuming the models are equally likely a priori, one can generate an approximate posterior distribution p(K | D) by Bayes' rule; these posterior probabilities are shown in Figure 3.

[Figure 3: Posterior probability distribution on K as estimated by cross-validation.]

The cross-validation procedure produces a clear peak at K = 2, which is the true model size. In general, the cross-validation method has been tested on a variety of other simulated sequence clustering data sets and typically converges, as a function of the number of training samples, to the true value of K (from below).

As the number of data points grows, the posterior distribution on K narrows about the true value of K. If the data were not generated by the assumed form of the model, the posterior distribution on K will tend to be peaked about the model size which is closest (in Kullback-Leibler distance) to the true model. Results in the context of Gaussian mixture clustering (Smyth, 1996) have shown that the Monte-Carlo cross-validation technique performs as well as the better Bayesian approximation methods and is more robust than penalized likelihood methods such as BIC.

In conclusion, we have shown that model-based probabilistic clustering can be generalized from feature-space clustering to sequence clustering. Log-likelihood between sequence models and sequences was found useful for detecting cluster structure, and cross-validated likelihood was shown to be able to detect the true number of clusters.

References

Baldi, P. and Y. Chauvin, 'Hierarchical hybrid modeling, HMM/NN architectures, and protein applications,' Neural Computation, 8(6), 1541-1565, 1996.

Krogh, A., et al., 'Hidden Markov models in computational biology: applications to protein modeling,' J. Mol. Biol., 235, 1501-1531, 1994.

Juang, B. H., and L. R. Rabiner, 'A probabilistic distance measure for hidden Markov models,' AT&T Technical Journal, 64(2), February 1985.

Rabiner, L. R., C. H. Lee, B. H. Juang, and J. G. Wilpon, 'HMM clustering for connected word recognition,' Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, IEEE Press, 405-408, 1989.

Shao, J., 'Linear model selection by cross-validation,' Journal of the American Statistical Association, 88(422), 486-494, 1993.

Smyth, P., 'Clustering using Monte-Carlo cross validation,' in Proceedings of the Second International Conference on Knowledge Discovery and Data Mining, Menlo Park, CA: AAAI Press, 126-133, 1996.

Zeevi, A. J., R. Meir, and R. Adler, 'Time series prediction using mixtures of experts,' in this volume, 1997.