
Group EEG (Electroencephalogram)
Anthony Hampton, Tony Nuth, Miral Patel
(Portions credited to Jack Shelley-Tremblay and E. Keogh)
05/09/2008

Outline
Introduction, Goal, Methodology, Results, Discussion, Conclusion

Goals
The goal of this project is to evaluate EEG data from 19 subjects using a variety of mathematical techniques.

EEG
A recording of the electrical waves that sweep over the brain's surface, measured by electrodes placed on the scalp.

Parts of the Brain
We are interested in the primary motor cortex and the premotor cortex.

Understanding the Waves
In neurophysiology, an action potential (also known as a nerve impulse or spike) is a pulse-like wave of voltage that travels along several types of cell membranes.

Placement
Electrodes are placed at each point of the international 10-20 system.

Event-Related Potentials
An event-related potential (ERP) is any stereotyped electrophysiological response to an internal or external stimulus. More simply, it is any measured brain response that is directly the result of a thought or perception. By collecting multiple trials of the same type of stimulus, we can enhance the signal and reduce the noise using simple averaging (the technique neuroscientists typically use).

Defining an Epoch
An epoch is a series of time points locked to a significant point in time. The button press is our significant point in time.
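A minimal sketch of epoch extraction around button-press events, assuming the continuous EEG is a channels-by-samples NumPy array and the event sample indices are already known (all names here are hypothetical, not the group's actual code):

```python
import numpy as np

def extract_epochs(eeg, event_samples, pre=100, post=400):
    """Cut fixed-length windows around each event (e.g., a button press).

    eeg           : (n_channels, n_samples) continuous recording
    event_samples : sample indices of the significant time points
    pre, post     : samples to keep before / after each event
    """
    epochs = []
    for s in event_samples:
        # Keep only events whose full window fits inside the recording.
        if s - pre >= 0 and s + post <= eeg.shape[1]:
            epochs.append(eeg[:, s - pre:s + post])
    return np.stack(epochs)  # shape: (n_epochs, n_channels, pre + post)
```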

A Subject's Data
Example of an epoch.

Same Data
The same data at a different epoch.

Time Series
A time series is a sequence of data points, typically measured at successive times, spaced at (often uniform) time intervals.

Time Series
What is a time series?

What EM Clustering Does
How do we classify points and estimate the parameters of the models in a mixture at the same time? Adaptive soft clustering: EM. Data points are assigned to each group with a probability equal to the likelihood of that point belonging to that group.

What is EM (Expectation-Maximization)?
A statistical method that makes use of finite Gaussian mixture models. A set of parameters is recomputed until a desired convergence criterion is reached. The initial variables are randomly initialized.

The Methods of EM
Initialization: pick start values for the parameters (for us, building random models and setting a sigma).
Iterate until the parameters converge:
Expectation (E) step: calculate weights for every data point; these weights feed into the next step.
Maximization (M) step: maximize the log-likelihood function, using the weights given by the E step, to update the parameters.
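As an illustration only (not the group's original code), here is a minimal EM sketch for a 1-D two-component Gaussian mixture; the names, the random initialization, and the convergence tolerance are assumptions:

```python
import numpy as np
from scipy.stats import norm

def em_gmm(x, n_iter=100, tol=1e-6):
    # Initialization: random means, a shared sigma, equal mixing weights
    rng = np.random.default_rng(0)
    mu = rng.choice(x, size=2, replace=False)
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    prev_ll = -np.inf
    for _ in range(n_iter):
        # E step: responsibility (weight) of each component for each point
        dens = np.stack([p * norm.pdf(x, m, s)
                         for p, m, s in zip(pi, mu, sigma)])
        resp = dens / dens.sum(axis=0)
        # M step: maximize the weighted log-likelihood to update parameters
        nk = resp.sum(axis=1)
        mu = (resp * x).sum(axis=1) / nk
        sigma = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk)
        pi = nk / len(x)
        ll = np.log(dens.sum(axis=0)).sum()
        if ll - prev_ll < tol:  # parameters have converged
            break
        prev_ll = ll
    return mu, sigma, pi, resp
```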

Evaluation
Initialized two models of the data, using the mean of the EEG over the first half of the data over time for the first model and over the second half for the second model. Compared each model and created a weight matrix. Normalized the data.

The OSB Algorithm
OSB: Optimal Subsequence Bijection. It is an algorithm that determines the optimal subsequence bijection between two sequences of real numbers. We were given code (tsdagjump4) that worked for only one channel, and we modified it to work with more than one channel (it works for 40 channels). Modification: created a difference matrix with each entry containing the differences of the corresponding elements.

Why the OSB Algorithm?
OSB is efficient because it uses a DAG (directed acyclic graph) and a cheapest-path search to find the solution. Using the DAG, OSB gives correct results on time-series datasets. The DAG lets us discard outlier elements and obtain a one-to-one correspondence (bijection) between subsequences. Compared with DTW using a warping window, OSB shows that skipping elements improves results.

Directed Acyclic Graph
A simple example of a DAG: by skipping over outlier elements we get a clean matching.

The OSB Algorithm
The program computes: ts, the time series; the DAG (directed acyclic graph); the OSB between two sequences of real numbers; d, the distance between two elements; C, the jump cost (the penalty for skipping an element); and w, the weights of the edges.

The OSB Algorithm, Step by Step
Find matching subsequences of the two sequences. Create a dissimilarity matrix. Run a shortest-path algorithm on the directed acyclic graph, whose nodes are index pairs of the matrix. Apply the jump cost. The main work of the algorithm is computing the edge weights of the DAG, as sketched below.
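A minimal sketch of this idea under stated assumptions (this is not the group's tsdagjump4 code, and real OSB implementations differ in details): nodes are index pairs (i, j), an edge from (k, l) to (i, j) with k < i and l < j costs the jump cost for every skipped element plus the dissimilarity of the matched pair, and dynamic programming finds the cheapest path:

```python
import numpy as np

def osb(a, b, jump_cost=1.0):
    """OSB-style matching of two real sequences via DAG shortest path."""
    n, m = len(a), len(b)
    diss = np.abs(np.subtract.outer(a, b))  # dissimilarity matrix
    cost = np.full((n, m), np.inf)          # cheapest path ending at (i, j)
    for i in range(n):
        for j in range(m):
            # Starting a match at (i, j) skips the prefixes of both sequences.
            best = jump_cost * (i + j)
            for k in range(i):              # all predecessor nodes (k, l)
                for l in range(j):
                    skipped = (i - k - 1) + (j - l - 1)
                    best = min(best, cost[k, l] + jump_cost * skipped)
            cost[i, j] = best + diss[i, j]  # edge weight includes the match
    # Skipping a suffix of either sequence also incurs the jump cost.
    tail = jump_cost * np.add.outer(np.arange(n - 1, -1, -1),
                                    np.arange(m - 1, -1, -1))
    return float((cost + tail).min())
```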

Wavelets
Wavelets are mathematical functions that cut up data into different frequency components, and then study each component with a resolution matched to its scale. (IEEE Computational Science and Engineering, Summer 1995, vol. 2, no. 2, published by the IEEE Computer Society)

Discussion of Results
We used OSB to obtain our results: given two sequences a and b, find subsequences a′ of a and b′ of b such that a′ matches b′ best. The results are divided into two parts:
Cluster precision: the percentage of the time an epoch is clustered correctly.
Cluster recall: the percentage of the time a known left or right button press is clustered correctly.

Example: Discussion of Results
ltypeout1: [75 76 27 21 22] = a
rtypeout1: [77 72 24 22 24] = b
ltypeout2: [25 24 73 79 78] = a′
rtypeout2: [23 28 76 78 76] = b′
Cluster precision: left-button cluster 76, right-button cluster 72 (from a and b), so 76 / (76 + 72) = 0.5135 = 51%.
Cluster recall: left-button cluster 76, right-button cluster 28 (from a and b′), so 76 / (76 + 28) = 0.7308 = 73%.
Note: apply the same formulas to find the right-button cluster precision and recall.
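The same arithmetic as a tiny sketch (the counts are copied from the example above; the function name is our own, not from the slides):

```python
def cluster_fraction(target, other):
    # Fraction of epochs assigned to the target cluster, per the slide's
    # precision/recall formulas: target / (target + other).
    return target / (target + other)

precision = cluster_fraction(76, 72)  # 76 / (76 + 72) = 0.5135 -> 51%
recall = cluster_fraction(76, 28)     # 76 / (76 + 28) = 0.7308 -> 73%
print(f"left-button precision {precision:.0%}, recall {recall:.0%}")
```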

Conclusion
We were given a total of 19 subjects (EEG datasets), but we derived correct results for only 10 subjects. The other 9 subjects gave us all zeros as a result:
ltypeout1: [0 0 0 0 0]
rtypeout1: [0 0 0 0 0]
ltypeout2: [0 0 0 0 0]
rtypeout2: [0 0 0 0 0] (error)

Extra References
Yang Ran, Expectation Maximization: An Approach to Parameter Estimation, www.umiacs.umd.edu/~shaohua/enee698a_f03/em.ppt
Andrew Blake, Bill Freeman, Learning and Vision: Generative Methods (slides), ICCV 2003, October 12, 2003

Thank You