Time series, HMMs, Kalman Filters


Classic HMM tutorial (see class website): L. R. Rabiner, "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition," Proc. of the IEEE, Vol. 77, No. 2, pp. 257-286, 1989.

Time series, HMMs, Kalman Filters
Machine Learning 10701/15781
Carlos Guestrin
Carnegie Mellon University
March 28th, 2005

Adventures of our BN hero
Bayesian networks give us a compact representation for probability distributions, fast inference, and fast learning. But who are the most popular kids?
1. Naïve Bayes
2 and 3. Hidden Markov models (HMMs) and Kalman filters

Handwriting recognition
Character recognition, e.g., with kernel SVMs. [Figure: images of handwritten characters with their labels, e.g., r, a, c, z, b.]

Example of a hidden Markov model (HMM)

Understanding the HMM semantics
[Figure: HMM chain with hidden states X_1, ..., X_5, each taking values in {a, ..., z}, and observations O_1, ..., O_5.]

HMMs semantics: details
[HMM chain figure as above.]
Just 3 distributions:
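The three distributions themselves are in the slide figure and not reproduced in this transcription; by parameter sharing across the chain they are the standard HMM parameters (notation mine):
\[ P(X_1), \qquad P(X_i \mid X_{i-1}), \qquad P(O_i \mid X_i). \]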

HMMs semantics: joint distribution
[HMM chain figure as above.]
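The joint distribution on this slide is also in the figure; for reference, the standard factorization implied by the chain structure is:
\[ P(X_{1:n}, O_{1:n}) = P(X_1)\, P(O_1 \mid X_1) \prod_{i=2}^{n} P(X_i \mid X_{i-1})\, P(O_i \mid X_i). \]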

Learning HMMs from fully observable data is easy
[HMM chain figure as above.]
Learn the 3 distributions:
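The slide's update equations are in the original images. As a hedged illustration of why fully observed learning is easy, here is a minimal Python sketch of the normalized-count (maximum likelihood) estimates; the function and variable names are my own, not the course's:

```python
import numpy as np

def fit_hmm_fully_observed(state_seqs, obs_seqs, num_states, num_obs):
    """Normalized-count estimates of P(X_1), P(X_i | X_{i-1}), P(O_i | X_i)."""
    prior = np.zeros(num_states)
    trans = np.zeros((num_states, num_states))
    emit = np.zeros((num_states, num_obs))
    for states, obs in zip(state_seqs, obs_seqs):
        prior[states[0]] += 1                      # count initial states
        for t in range(1, len(states)):
            trans[states[t - 1], states[t]] += 1   # count transitions
        for x, o in zip(states, obs):
            emit[x, o] += 1                        # count emissions
    # Normalize counts into distributions (assumes every state occurs in the
    # data; add smoothing, e.g. +1 to every count, for unseen states).
    prior /= prior.sum()
    trans /= trans.sum(axis=1, keepdims=True)
    emit /= emit.sum(axis=1, keepdims=True)
    return prior, trans, emit
```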

Possible inference tasks in an HMM
[HMM chain figure as above.]
Marginal probability of a hidden variable.
Viterbi decoding: most likely trajectory for the hidden variables.
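Written out (notation mine), the two tasks are the posterior marginal and the maximum a posteriori trajectory:
\[ P(X_i \mid o_{1:n}) \qquad \text{and} \qquad \arg\max_{x_{1:n}} P(x_{1:n} \mid o_{1:n}). \]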

Using variable elimination to compute P(X_i | o_1:n)
[HMM chain figure as above.]
Compute P(X_i | o_1:n). What variable elimination order should we use? Example: eliminate the hidden variables from both ends of the chain, working inwards towards X_i.

What if I want to compute P(X_i | o_1:n) for each i?
[HMM chain figure as above.]
We could run variable elimination separately for each i, but what's the complexity? Each run costs O(n), so repeating it for every i costs O(n^2) overall.

Reusing computation
[HMM chain figure as above.]
Compute P(X_i | o_1:n) for all i at once, by reusing the intermediate factors across queries.

The forwards-backwards algorithm
[HMM chain figure as above.]
Forwards pass. Initialization, then for i = 2 to n: generate a forwards factor by eliminating X_{i-1}.
Backwards pass. Initialization, then for i = n-1 down to 1: generate a backwards factor by eliminating X_{i+1}.
For each i, the posterior probability P(X_i | o_1:n) is proportional to the product of the forwards and backwards factors at i.
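A minimal Python sketch of this forwards-backwards recursion for a discrete HMM, assuming the prior/trans/emit parameterization from the counting sketch above (names are illustrative, not the lecture's notation):

```python
import numpy as np

def forward_backward(obs, prior, trans, emit):
    """Return P(X_i | o_1:n) for every position i (rows of the result)."""
    n, S = len(obs), len(prior)
    alpha = np.zeros((n, S))   # forwards factors
    beta = np.zeros((n, S))    # backwards factors
    alpha[0] = prior * emit[:, obs[0]]
    alpha[0] /= alpha[0].sum()
    for i in range(1, n):                      # eliminate X_{i-1}
        alpha[i] = (alpha[i - 1] @ trans) * emit[:, obs[i]]
        alpha[i] /= alpha[i].sum()             # rescale for numerical stability
    beta[n - 1] = 1.0
    for i in range(n - 2, -1, -1):             # eliminate X_{i+1}
        beta[i] = trans @ (emit[:, obs[i + 1]] * beta[i + 1])
        beta[i] /= beta[i].sum()
    post = alpha * beta                        # proportional to P(X_i | o_1:n)
    return post / post.sum(axis=1, keepdims=True)
```

Each pass touches every position once, which is the O(n) behaviour the slides contrast with O(n^2) for repeated variable elimination.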

Most likely explanation
[HMM chain figure as above.]
Compute the most likely assignment to all hidden variables given the observations. What variable elimination order should we use? Example: the same chain order as before, with max in place of sum.

The Viterbi algorithm
[HMM chain figure as above.]
Forwards pass. Initialization, then for i = 2 to n: generate a forwards factor by eliminating X_{i-1}.
Computing the best explanation. For i = n-1 down to 1: use argmax to read off the explanation.
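A minimal Python sketch of Viterbi decoding in log space, again assuming the prior/trans/emit parameterization above (not the lecture's code):

```python
import numpy as np

def viterbi(obs, prior, trans, emit):
    """Return the most likely hidden trajectory for the observation sequence."""
    n, S = len(obs), len(prior)
    logp = np.zeros((n, S))                 # best log-prob ending in each state
    back = np.zeros((n, S), dtype=int)      # argmax backpointers
    logp[0] = np.log(prior) + np.log(emit[:, obs[0]])
    for i in range(1, n):                   # forwards max-product pass
        scores = logp[i - 1][:, None] + np.log(trans)   # scores[j, k]: ..., X_{i-1}=j, X_i=k
        back[i] = scores.argmax(axis=0)
        logp[i] = scores.max(axis=0) + np.log(emit[:, obs[i]])
    path = [int(logp[n - 1].argmax())]
    for i in range(n - 1, 0, -1):           # backwards argmax pass
        path.append(back[i][path[-1]])
    return list(reversed(path))
```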

What about continuous variables?
In general, very hard: we must represent complex continuous distributions. A special case is very doable: when everything is Gaussian. This is called a Kalman filter, one of the most used algorithms in the history of probability!

Time series data example: temperatures from a sensor network.
[Figure: lab floor plan showing the locations of 54 numbered temperature sensors in offices, conference room, kitchen, server room, etc.]

Operations in Kalman filter
[Chain figure: hidden states X_1, ..., X_5 with observations O_1, ..., O_5.]
Compute the filtering distribution. Start with the prior over X_1. At each time step t: condition on the observation, then roll-up (marginalize the previous time step).

Detour: understanding multivariate Gaussians
Observe some attributes. Example: observe X_1 = 18; what is P(X_2 | X_1 = 18)?

Characterizing a multivariate Gaussian
Mean vector and covariance matrix:
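The slide's formulas are in the original images; for reference, a d-dimensional Gaussian with mean vector \(\mu\) and covariance matrix \(\Sigma\) has density
\[ p(x) = \frac{1}{(2\pi)^{d/2} |\Sigma|^{1/2}} \exp\!\left( -\tfrac{1}{2} (x-\mu)^\top \Sigma^{-1} (x-\mu) \right). \]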

Conditional Gaussians
Conditional probabilities P(Y | X):
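As a reference for the missing formulas (standard result, notation mine): if X and Y are jointly Gaussian, then P(Y | X = x) is Gaussian with
\[ \mu_{Y \mid x} = \mu_Y + \Sigma_{YX} \Sigma_{XX}^{-1} (x - \mu_X), \qquad \Sigma_{Y \mid X} = \Sigma_{YY} - \Sigma_{YX} \Sigma_{XX}^{-1} \Sigma_{XY}. \]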

Kalman filter with Gaussians
[Chain figure: hidden states X_1, ..., X_5 with observations O_1, ..., O_5.]
Equivalent to a linear system.
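One common parameterization of this linear system (my notation, not necessarily the lecture's):
\[ X_{t+1} = A X_t + \varepsilon_t, \quad \varepsilon_t \sim \mathcal{N}(0, Q); \qquad O_t = C X_t + \delta_t, \quad \delta_t \sim \mathcal{N}(0, R). \]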

Detour 2: canonical form
The standard and canonical forms are related. Conditioning is easy in canonical form; marginalization is easy in standard form.
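For reference (standard definition, notation mine): the canonical, or information, form rewrites a Gaussian in terms of the precision matrix \(K = \Sigma^{-1}\) and potential vector \(h = \Sigma^{-1} \mu\),
\[ p(x) \propto \exp\!\left( -\tfrac{1}{2} x^\top K x + h^\top x \right). \]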

Conditioning in canonical form
First multiply the factors; then condition on the value B = y.
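The standard conditioning rule in canonical form (notation mine): for a canonical form over blocks (A, B), setting B = y gives a canonical form over A with
\[ K' = K_{AA}, \qquad h' = h_A - K_{AB}\, y. \]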

Operations in Kalman filter (revisited)
[Chain figure as above.] Compute the filtering distribution: start with the prior, and at each time step t, condition on the observation, then roll-up (marginalize the previous time step).

Roll-up in canonical form
First multiply the factors; then marginalize out X_t.
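The standard roll-up (marginalization) rule in canonical form (notation mine): integrating out block B gives
\[ K' = K_{AA} - K_{AB} K_{BB}^{-1} K_{BA}, \qquad h' = h_A - K_{AB} K_{BB}^{-1} h_B. \]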

Operations in Kalman filter (revisited)
[Chain figure as above.] With both conditioning and roll-up defined, each filtering step is: condition on the observation, then roll-up (marginalize the previous time step).
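To make the condition/roll-up loop concrete, here is a minimal Python sketch of one Kalman filter step in moment (standard) form, under the linear-Gaussian parameterization assumed above; A, C, Q, R and the (mu, Sigma) prior are my assumptions, not the lecture's notation:

```python
import numpy as np

def kalman_step(mu, Sigma, o, A, C, Q, R):
    """One filtering step: condition on observation o, then roll-up to t+1."""
    # Condition on the observation (measurement update).
    S = C @ Sigma @ C.T + R
    K = Sigma @ C.T @ np.linalg.inv(S)      # Kalman gain
    mu = mu + K @ (o - C @ mu)
    Sigma = Sigma - K @ C @ Sigma
    # Roll-up: marginalize the previous time step (predict to t+1).
    mu = A @ mu
    Sigma = A @ Sigma @ A.T + Q
    return mu, Sigma
```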

Learning a Kalman filter
We must learn the model's conditional distributions. Learn the joint Gaussian, and use the division rule to obtain the conditionals.

Maximum likelihood learning of a multivariate Gaussian
Given the data, the ML means are just the empirical means, and the ML covariances are the empirical covariances:
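For reference, with data points \(x^{(1)}, \ldots, x^{(m)}\), the maximum likelihood estimates are
\[ \hat{\mu} = \frac{1}{m} \sum_{j=1}^{m} x^{(j)}, \qquad \hat{\Sigma} = \frac{1}{m} \sum_{j=1}^{m} (x^{(j)} - \hat{\mu})(x^{(j)} - \hat{\mu})^\top. \]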

What you need to know
Hidden Markov models (HMMs): very useful, very powerful! Applications include speech and OCR. Parameter sharing means we only learn 3 distributions. The forwards-backwards trick reduces inference from O(n^2) to O(n). HMMs are a special case of Bayesian networks.
Kalman filter: the continuous-variable version of HMMs. Assumes Gaussian distributions; equivalent to a linear system; simple matrix operations suffice for the computations.