Classic HMM tutorial (see class website): L. R. Rabiner, "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition," Proc. of the IEEE, Vol. 77, No. 2, pp. 257-286, 1989.
Time series, HMMs, Kalman Filters
Machine Learning 10-701/15-781, Carlos Guestrin, Carnegie Mellon University, March 28th, 2005
Adventures of our BN hero
Compact representation for probability distributions
Fast inference
Fast learning
1. Naïve Bayes
But... who are the most popular kids?
2 and 3. Hidden Markov models (HMMs) and Kalman filters
Handwriting recognition
Character recognition, e.g., kernel SVMs
[Figure: example character images with predicted labels such as r, a, c, z, b]
Example of a hidden Markov model (HMM)
Understanding the HMM semantics
[Figure: HMM chain with hidden states X_1, ..., X_5, each taking values in {a, ..., z}, and corresponding observations O_1, ..., O_5]
HMM semantics: details
Just 3 distributions: the prior P(X_1), the transition model P(X_i | X_{i-1}) (shared across time steps), and the observation model P(O_i | X_i) (also shared).
HMM semantics: joint distribution
P(X_{1:n}, O_{1:n}) = P(X_1) P(O_1 | X_1) ∏_{i=2}^{n} P(X_i | X_{i-1}) P(O_i | X_i)
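A minimal sketch of this factorization in Python/numpy, assuming the three distributions are stored as arrays (the names pi, trans, emit and the toy numbers are illustrative, not from the slides):

```python
import numpy as np

#   pi[s]      = P(X_1 = s)                  (prior over the first hidden state)
#   trans[s,t] = P(X_i = t | X_{i-1} = s)    (transition model)
#   emit[s,k]  = P(O_i = k | X_i = s)        (observation model)

def hmm_joint_prob(states, obs, pi, trans, emit):
    """P(x_1..n, o_1..n) = P(x_1) P(o_1|x_1) * prod_i P(x_i|x_{i-1}) P(o_i|x_i)."""
    p = pi[states[0]] * emit[states[0], obs[0]]
    for i in range(1, len(states)):
        p *= trans[states[i - 1], states[i]] * emit[states[i], obs[i]]
    return p

# Toy example with 2 hidden states and 2 observation symbols
pi = np.array([0.6, 0.4])
trans = np.array([[0.7, 0.3],
                  [0.2, 0.8]])
emit = np.array([[0.9, 0.1],
                 [0.3, 0.7]])
print(hmm_joint_prob([0, 1, 1], [0, 1, 1], pi, trans, emit))
```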
Learning HMMs from fully observable data is easy
Learn 3 distributions: P(X_1), P(X_i | X_{i-1}), and P(O_i | X_i), each by maximum likelihood (normalized counts).
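A sketch of that counting procedure, under the same illustrative array names as above (the toy data is made up for demonstration):

```python
import numpy as np

def learn_hmm(sequences, n_states, n_obs):
    """Maximum-likelihood HMM parameters when both X and O are observed."""
    pi = np.zeros(n_states)
    trans = np.zeros((n_states, n_states))
    emit = np.zeros((n_states, n_obs))
    for states, obs in sequences:
        pi[states[0]] += 1                        # count first hidden states
        for i in range(1, len(states)):
            trans[states[i - 1], states[i]] += 1  # count transitions
        for s, o in zip(states, obs):
            emit[s, o] += 1                       # count emissions
    # Normalize counts into distributions (zero-count rows would need smoothing in practice)
    pi /= pi.sum()
    trans /= trans.sum(axis=1, keepdims=True)
    emit /= emit.sum(axis=1, keepdims=True)
    return pi, trans, emit

data = [([0, 0, 1], [0, 0, 1]), ([1, 1, 0], [1, 1, 0])]
pi, trans, emit = learn_hmm(data, n_states=2, n_obs=2)
```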
Possible inference tasks in an HMM
Marginal probability of a hidden variable: P(X_i | o_{1:n})
Viterbi decoding, the most likely trajectory for the hidden variables: argmax_{x_{1:n}} P(x_{1:n} | o_{1:n})
Using variable elimination to compute P(X_i | o_{1:n})
Compute: P(X_i | o_{1:n})
Variable elimination order? Example: eliminate X_1, X_2, ..., X_{i-1}, then X_n, X_{n-1}, ..., X_{i+1}.
What if I want to compute P(X_i | o_{1:n}) for each i?
Compute: P(X_i | o_{1:n}) for every i = 1, ..., n
Variable elimination for each i? Each run costs O(n), so repeating it for every i costs O(n^2) overall.
Reusing computation
Compute: P(X_i | o_{1:n}) for every i, sharing the intermediate factors across queries.
The forwards-backwards algorithm
Forward pass. Initialization: α_1(x_1) = P(x_1) P(o_1 | x_1). For i = 2 to n, generate a forwards factor by eliminating X_{i-1}: α_i(x_i) = P(o_i | x_i) Σ_{x_{i-1}} P(x_i | x_{i-1}) α_{i-1}(x_{i-1}).
Backward pass. Initialization: β_n(x_n) = 1. For i = n-1 down to 1, generate a backwards factor by eliminating X_{i+1}: β_i(x_i) = Σ_{x_{i+1}} P(x_{i+1} | x_i) P(o_{i+1} | x_{i+1}) β_{i+1}(x_{i+1}).
For each i, the probability is P(X_i = x_i | o_{1:n}) ∝ α_i(x_i) β_i(x_i).
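A sketch of forwards-backwards for a discrete HMM, using the same illustrative arrays as the earlier snippets. It returns P(X_i | o_{1:n}) for every i with one forward and one backward pass, i.e. O(n) total instead of O(n^2) for n separate elimination runs (a real implementation would also rescale the factors to avoid numerical underflow on long sequences):

```python
import numpy as np

def forward_backward(obs, pi, trans, emit):
    n, S = len(obs), len(pi)
    alpha = np.zeros((n, S))   # alpha[i, s] = P(o_1..i, X_i = s)
    beta = np.zeros((n, S))    # beta[i, s]  = P(o_{i+1}..n | X_i = s)
    alpha[0] = pi * emit[:, obs[0]]
    for i in range(1, n):                      # forward pass: eliminate X_{i-1}
        alpha[i] = (alpha[i - 1] @ trans) * emit[:, obs[i]]
    beta[n - 1] = 1.0
    for i in range(n - 2, -1, -1):             # backward pass: eliminate X_{i+1}
        beta[i] = trans @ (emit[:, obs[i + 1]] * beta[i + 1])
    posterior = alpha * beta                   # proportional to P(X_i, o_1..n)
    return posterior / posterior.sum(axis=1, keepdims=True)
```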
Most likely explanation
Compute: argmax_{x_{1:n}} P(x_{1:n} | o_{1:n})
Variable elimination order? Example: the same forward sweep works, with max in place of sum.
The Viterbi algorithm
Forward pass. Initialization: δ_1(x_1) = P(x_1) P(o_1 | x_1). For i = 2 to n, generate a forwards factor by eliminating X_{i-1}: δ_i(x_i) = P(o_i | x_i) max_{x_{i-1}} P(x_i | x_{i-1}) δ_{i-1}(x_{i-1}).
Computing the best explanation. For i = n-1 down to 1, use argmax to read off the explanation: start from x_n* = argmax_{x_n} δ_n(x_n) and follow the stored argmax back to x_1*.
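A Viterbi sketch with the same illustrative arrays: the forward pass keeps the max (instead of the sum) over the previous state plus a backpointer, then the best hidden trajectory is read off by following the argmaxes backwards:

```python
import numpy as np

def viterbi(obs, pi, trans, emit):
    n, S = len(obs), len(pi)
    delta = np.zeros((n, S))          # delta[i, s] = best score of any path ending in X_i = s
    back = np.zeros((n, S), dtype=int)
    delta[0] = pi * emit[:, obs[0]]
    for i in range(1, n):
        scores = delta[i - 1][:, None] * trans    # scores[s_prev, s]
        back[i] = scores.argmax(axis=0)            # best previous state for each s
        delta[i] = scores.max(axis=0) * emit[:, obs[i]]
    # Trace back the most likely trajectory
    path = [int(delta[n - 1].argmax())]
    for i in range(n - 1, 0, -1):
        path.append(int(back[i, path[-1]]))
    return path[::-1]
```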
What about continuous variables?
In general, very hard! Must represent complex distributions.
A special case is very doable: when everything is Gaussian.
Called a Kalman filter. One of the most used algorithms in the history of probabilities!
Time series data example: temperatures from a sensor network
[Figure: building floor plan with 54 numbered sensor locations across offices, conference room, storage, quiet phone room, copy/elec area, lab, kitchen, and server room]
Operations in the Kalman filter
[Figure: chain of continuous states X_1, ..., X_5 with observations O_1, ..., O_5]
Compute the belief state P(X_t | o_{1:t}).
Start with the prior P(X_1).
At each time step t: condition on the observation o_t, then roll up (marginalize out the previous time step).
Detour: understanding multivariate Gaussians
Observe a subset of the attributes.
Example: observe X_1 = 18, then compute P(X_2 | X_1 = 18).
Characterizing a multivariate Gaussian
Mean vector: μ = E[X], with μ_i = E[X_i].
Covariance matrix: Σ, with Σ_ij = E[(X_i - μ_i)(X_j - μ_j)].
Conditional Gaussians
Conditional probabilities P(Y | X) of a joint Gaussian are again Gaussian: the conditional mean is a linear function of the observed value x, and the conditional covariance does not depend on x.
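A sketch of Gaussian conditioning using the standard formulas μ_{Y|x} = μ_Y + Σ_YX Σ_XX^{-1} (x - μ_X) and Σ_{Y|x} = Σ_YY - Σ_YX Σ_XX^{-1} Σ_XY (function and variable names, and the 2-D example values, are illustrative):

```python
import numpy as np

def condition_gaussian(mu, Sigma, x_idx, y_idx, x_value):
    """Return mean and covariance of Y given X = x_value for a joint Gaussian."""
    mu_x, mu_y = mu[x_idx], mu[y_idx]
    S_xx = Sigma[np.ix_(x_idx, x_idx)]
    S_yx = Sigma[np.ix_(y_idx, x_idx)]
    S_yy = Sigma[np.ix_(y_idx, y_idx)]
    gain = S_yx @ np.linalg.inv(S_xx)
    mu_cond = mu_y + gain @ (x_value - mu_x)
    Sigma_cond = S_yy - gain @ S_yx.T
    return mu_cond, Sigma_cond

# e.g. P(X_2 | X_1 = 18) for a 2-D Gaussian with made-up parameters
mu = np.array([17.0, 20.0])
Sigma = np.array([[4.0, 3.0],
                  [3.0, 9.0]])
print(condition_gaussian(mu, Sigma, x_idx=[0], y_idx=[1], x_value=np.array([18.0])))
```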
Kalman filter with Gaussians
Equivalent to a linear system: the next state is a linear function of the current state plus Gaussian noise, and each observation is a linear function of the current state plus Gaussian noise.
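For reference, one time step of the filter in the standard textbook predict/update form for this linear-Gaussian system (the matrix names A, C, Q, R are conventional notation, not taken from the slides, which instead derive the same computation via canonical-form operations below):

```python
import numpy as np

# Model (conventional notation):
#   X_{t+1} = A X_t + w,   w ~ N(0, Q)     (transition)
#   O_t     = C X_t + v,   v ~ N(0, R)     (observation)

def kalman_step(mu, P, o, A, C, Q, R):
    """One step of the Kalman filter: belief N(mu, P) plus observation o."""
    # Roll-up / predict: push the belief through the transition model
    mu_pred = A @ mu
    P_pred = A @ P @ A.T + Q
    # Condition / update: incorporate the observation via the Kalman gain
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)
    mu_new = mu_pred + K @ (o - C @ mu_pred)
    P_new = (np.eye(len(mu)) - K @ C) @ P_pred
    return mu_new, P_new
```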
Detour 2: canonical form
Standard and canonical forms are related: the standard form uses the mean μ and covariance Σ; the canonical (information) form uses K = Σ^{-1} and h = Σ^{-1} μ, so that p(x) ∝ exp(-1/2 x^T K x + h^T x).
Conditioning is easy in canonical form; marginalization is easy in standard form.
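A two-line sketch of the conversion between the parameterizations (names are illustrative):

```python
import numpy as np

def to_canonical(mu, Sigma):
    """Standard (mu, Sigma) -> canonical (K, h) with K = Sigma^-1, h = Sigma^-1 mu."""
    K = np.linalg.inv(Sigma)
    return K, K @ mu

def to_standard(K, h):
    """Canonical (K, h) -> standard (mu, Sigma)."""
    Sigma = np.linalg.inv(K)
    return Sigma @ h, Sigma
```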
Conditioning in canonical form
First multiply the factors (in canonical form, multiplication just adds the K matrices and the h vectors).
Then condition on the value B = y.
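A sketch of the conditioning step, assuming a canonical factor over (A, B) with blocks K = [[K_AA, K_AB], [K_BA, K_BB]] and h = [h_A, h_B]: observing B = y leaves a factor over A with K' = K_AA and h' = h_A - K_AB y (index-list arguments are illustrative):

```python
import numpy as np

def condition_canonical(K, h, a_idx, b_idx, y):
    """Condition a canonical-form Gaussian factor on the block B = y."""
    K_aa = K[np.ix_(a_idx, a_idx)]
    K_ab = K[np.ix_(a_idx, b_idx)]
    return K_aa, h[a_idx] - K_ab @ y
```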
Operations in the Kalman filter
Compute the belief state P(X_t | o_{1:t}). Start with P(X_1). At each time step t: condition on the observation, then roll up (marginalize out the previous time step).
Roll-up in canonical form
First multiply the transition factor into the current belief.
Then marginalize out X_t.
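A sketch of that marginalization, which in canonical form uses the Schur complement of the eliminated block: K' = K_AA - K_AB K_BB^{-1} K_BA and h' = h_A - K_AB K_BB^{-1} h_B (names are illustrative):

```python
import numpy as np

def marginalize_canonical(K, h, a_idx, b_idx):
    """Marginalize the block B out of a canonical-form Gaussian factor over (A, B)."""
    K_aa = K[np.ix_(a_idx, a_idx)]
    K_ab = K[np.ix_(a_idx, b_idx)]
    K_ba = K[np.ix_(b_idx, a_idx)]
    K_bb_inv = np.linalg.inv(K[np.ix_(b_idx, b_idx)])
    return K_aa - K_ab @ K_bb_inv @ K_ba, h[a_idx] - K_ab @ K_bb_inv @ h[b_idx]
```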
Operations in the Kalman filter
Compute the belief state P(X_t | o_{1:t}). Start with P(X_1). At each time step t: condition on the observation, then roll up (marginalize out the previous time step).
Learning a Kalman filter
Must learn: P(X_1), the transition model P(X_{t+1} | X_t), and the observation model P(O_t | X_t).
Learn the joint Gaussian, and use the division rule: e.g. P(X_{t+1} | X_t) = P(X_t, X_{t+1}) / P(X_t).
Maximum likelihood learning of a multivariate Gaussian
Data: x^(1), ..., x^(m).
Means are just empirical means: μ̂ = (1/m) Σ_j x^(j).
Empirical covariances: Σ̂ = (1/m) Σ_j (x^(j) - μ̂)(x^(j) - μ̂)^T.
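A minimal numpy sketch of this fit (the data here is random, purely for illustration):

```python
import numpy as np

X = np.random.randn(100, 3)            # m = 100 samples, 3 dimensions
mu_hat = X.mean(axis=0)                # empirical mean: (1/m) sum_j x^(j)
diff = X - mu_hat
Sigma_hat = diff.T @ diff / len(X)     # empirical covariance: (1/m) sum_j (x^(j)-mu)(x^(j)-mu)^T
```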
What you need to know
Hidden Markov models (HMMs): very useful, very powerful! Speech, OCR, ...
Parameter sharing: only learn 3 distributions.
Trick reduces inference from O(n^2) to O(n).
Special case of a BN.
Kalman filter: continuous-variable version of HMMs.
Assumes Gaussian distributions. Equivalent to a linear system.
Simple matrix operations for computations.