Applied Neuroscience Columbia Science Honors Program Fall 2016 Machine Learning and Neural Networks

Machine Learning and Neural Networks
Objective: Introduction to Machine Learning
Agenda:
1. JavaScript Tutorial for Handwritten Digit Recognition
2. Fundamentals of Machine Learning: Principal Component Analysis
3. Tour of the Laboratory of Professor Paul Sajda

Three Layers of Web Design
1. HTML for Content
2. CSS for Presentation
3. JavaScript for Behavior

Advantages:
- Runs fast: JS does not need to contact the server
- Platform independent
- Easy to learn
- Increased interactivity: interfaces react to user interactions
- Richer interfaces: JS can be used for widgets such as sliders

Disadvantages:
- Runs on a single thread: JS has no multiprocessing capabilities
- Cannot read/write files: JS disables this for security reasons
- Limited networking applications
- Code is not hidden: JS code is always visible to the client

JavaScript for Handwritten Digit Recognition
Euclidean distance: the distance between a pair of points in Euclidean space.

d(p, q) = d(q, p) = sqrt((q_1 - p_1)^2 + (q_2 - p_2)^2 + ... + (q_n - p_n)^2) = sqrt(sum_{i=1}^{n} (q_i - p_i)^2)

The Euclidean distance provides a value that measures the similarity between two arrays of the same length. Consider the following three arrays:

Array 1: [1, 2, 3]
Array 2: [1, 3, 2]
Array 3: [3, 2, 1]

How can we determine which arrays are most similar to each other?
1. Compute the Euclidean distance between Array 1 and Array 2.
2. Compute the Euclidean distance between Array 1 and Array 3.
3. Compare: the shorter distance represents greater similarity.
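The three-array comparison above can be sketched directly in JavaScript (the function and variable names here are illustrative, not taken from the demo):

```javascript
// Minimal Euclidean distance between two same-length arrays.
function euclideanDistance(p, q) {
  var sum = 0;
  for (var i = 0; i < p.length; i += 1) {
    var diff = q[i] - p[i];
    sum += diff * diff;
  }
  return Math.sqrt(sum);
}

var array1 = [1, 2, 3];
var array2 = [1, 3, 2];
var array3 = [3, 2, 1];

console.log(euclideanDistance(array1, array2)); // sqrt(2) ≈ 1.4142
console.log(euclideanDistance(array1, array3)); // sqrt(8) ≈ 2.8284
// The smaller distance means Array 2 is more similar to Array 1 than Array 3 is.
```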

Implement Euclidean Distance in JavaScript

ENCOG.MathUtil.euclideanDistance = function (a1, a2, startIndex, len) {
    "use strict";
    var result = 0, i, diff;
    for (i = startIndex; i < (startIndex + len); i += 1) {
        diff = a1[i] - a2[i];
        result += diff * diff;
    }
    return Math.sqrt(result);
};

Euclidean distance can be used to create a simple optical character recognition system. (JavaScript Handwritten Recognition Demo)

JavaScript for Handwritten Digit Recognition
1. Train the program to recognize digits. (What do we use to train algorithms?)
2. Draw a digit.
3. Down-sample the high-resolution character onto a 5x8 grid. (What is down-sampling?) At this step, the drawn character becomes a matrix array.
4. Find the stored array most similar to the digit drawn in Step 2. (What criterion is used to measure similarity between arrays?)
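Step 3 above can be illustrated with a small sketch. The input representation (a 2-D binary pixel array) and the "any pixel on" rule are assumptions for illustration, not the demo's exact implementation:

```javascript
// Hypothetical down-sampling of a drawn character onto a smaller grid.
// `pixels` is a 2-D array of 0/1 values; the result is a flattened
// targetH x targetW array, as used for the 5x8 grid in step 3.
function downSample(pixels, targetW, targetH) {
  var srcH = pixels.length, srcW = pixels[0].length;
  var result = [];
  for (var ty = 0; ty < targetH; ty += 1) {
    for (var tx = 0; tx < targetW; tx += 1) {
      // Source block of pixels covered by this target cell.
      var x0 = Math.floor(tx * srcW / targetW);
      var x1 = Math.floor((tx + 1) * srcW / targetW);
      var y0 = Math.floor(ty * srcH / targetH);
      var y1 = Math.floor((ty + 1) * srcH / targetH);
      var on = 0;
      for (var y = y0; y < y1; y += 1) {
        for (var x = x0; x < x1; x += 1) {
          if (pixels[y][x]) { on += 1; }
        }
      }
      // Cell is "on" if any source pixel in its block is on.
      result.push(on > 0 ? 1 : 0);
    }
  }
  return result; // e.g. 5x8 grid -> 40-element array
}
```

The resulting 40-element array is what gets compared, via Euclidean distance, against the stored training arrays in step 4.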

Implement Euclidean Distance in JavaScript

var c, data, sum, i, delta;
for (c in charData) {
    data = charData[c];
    // Recognize the drawn letter using Euclidean distance
    sum = 0;
    for (i = 0; i < data.length; i += 1) {
        delta = data[i] - downSampleData[i];
        sum += delta * delta;
    }
    sum = Math.sqrt(sum);
    // The smallest Euclidean distance identifies the character
    if (sum < bestScore || bestChar === '??') {
        bestScore = sum;
        bestChar = c;
    }
}

Principal Component Analysis
Objective: Identify patterns in data.

By projecting a d-dimensional dataset onto a k-dimensional subspace (where k < d), we increase computational efficiency while retaining most of the information. What size of k represents the data well?

Analysis: Find the directions of maximum variance in high-dimensional data and project the data onto a smaller-dimensional subspace while retaining most of the information.

Principal Component Analysis
1. Standardize the data.
2. Obtain the eigenvectors and eigenvalues from either:
   1. the covariance matrix (How do we determine the similarity of arrays?), or
   2. Singular Value Decomposition (SVD).
3. Sort the eigenvalues in descending order and choose the k eigenvectors that correspond to the k largest eigenvalues, where k is the dimensionality of the new feature subspace (k ≤ d).
4. Construct the projection matrix W from the selected k eigenvectors.
5. Transform the original dataset X via W to obtain the k-dimensional feature subspace Y.
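The steps above can be sketched for 2-D data. This is a hand-rolled illustration (the function name and data are invented): it uses the closed-form eigen-decomposition of a symmetric 2x2 matrix, and for simplicity it only centers the data in step 1 rather than fully standardizing it.

```javascript
// Minimal PCA sketch for 2-D points, keeping k = 1 component.
function pca2d(data) {
  var n = data.length;
  // 1. Center the data (a simplified stand-in for standardization).
  var mx = 0, my = 0;
  data.forEach(function (p) { mx += p[0] / n; my += p[1] / n; });
  var centered = data.map(function (p) { return [p[0] - mx, p[1] - my]; });
  // 2. Covariance matrix [[a, b], [b, c]].
  var a = 0, b = 0, c = 0;
  centered.forEach(function (p) {
    a += p[0] * p[0] / (n - 1);
    b += p[0] * p[1] / (n - 1);
    c += p[1] * p[1] / (n - 1);
  });
  // 3. Largest eigenvalue of the symmetric 2x2 covariance matrix.
  var tr = a + c, det = a * c - b * b;
  var l1 = tr / 2 + Math.sqrt(tr * tr / 4 - det);
  // 4. Projection "matrix" W: the unit eigenvector for l1
  //    (falls back to an axis when the matrix is already diagonal).
  var v = Math.abs(b) > 1e-12 ? [l1 - c, b] : (a >= c ? [1, 0] : [0, 1]);
  var norm = Math.hypot(v[0], v[1]);
  v = [v[0] / norm, v[1] / norm];
  // 5. Transform X via W: project onto the first principal component.
  var projected = centered.map(function (p) { return p[0] * v[0] + p[1] * v[1]; });
  return { component: v, projected: projected };
}
```

For points lying along the line y = x, the first principal component comes out as approximately [0.707, 0.707], i.e. the direction of maximum variance.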

Principal Component Analysis Principal Component Analysis uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components.