Visuelle Perzeption für Mensch-Maschine-Schnittstellen


1 Visuelle Perzeption für Mensch-Maschine-Schnittstellen. Lecture, WS 2008. Dr. Rainer Stiefelhagen, Dr. Edgar Seemann. Interactive Systems Laboratories, Universität Karlsruhe (TH)

2 Basics: Pattern Recognition WS 2008/09 Dr. Edgar Seemann

3 Organizational matters. Group 1: Christian Johner, Mike Morante, Patrick Mehl. Group 2: Thomas Stephan (Java), Steffen Braun (Java). Group 3: Martin Wagner, Hilke Kieritz, Jan Hendrik Hammer. Group 4: Wenlei Wu, Chengchao Qu. Group 5: Michael Weber, Tomas Semela, Dennis Kopcan. Group 6: Johann Korndoerfer, Daniel Koester, Daniel Putsch. Group 7: Benjamin Bartosch, Thomas Lichtenstein. Unassigned: Igor Plotkin, Adrian Genaid. Group 8: Florian Krupicka, Mathias Luedtke.

4 Question from last lecture: multiplication of the two projection matrices:
$$\begin{pmatrix} u \\ v \\ 1 \end{pmatrix} \sim \begin{pmatrix} k_u & 0 & u_0 \\ 0 & k_v & v_0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix} = \begin{pmatrix} k_u f & 0 & u_0 & 0 \\ 0 & k_v f & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}$$

5 The separate projections were defined as the perspective projection onto the image plane,
$$\begin{pmatrix} u \\ v \end{pmatrix} = \frac{1}{z} \begin{pmatrix} f x \\ f y \end{pmatrix}, \qquad \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} \sim \begin{pmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix},$$
and the conversion to pixel coordinates,
$$\begin{pmatrix} u' \\ v' \\ 1 \end{pmatrix} = \begin{pmatrix} k_u & 0 & u_0 \\ 0 & k_v & v_0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} u \\ v \\ 1 \end{pmatrix}.$$

6 Schedule (1). Topics: Introduction, Applications; Basics: Image Processing; Basics: Image Transformations, 2D Structure; Basics: Pattern Recognition; Computer Vision: Tasks, Challenges, Learning, Performance Measures; Face Detection I: Color, Edges (Birchfield); Project 1: Intro + Programming Tips; Project 1: Questions; Face Detection II: ANNs, SVM, Viola & Jones; Face Recognition I: Traditional Approaches, Eigenfaces, Fisherfaces, EBGM; Face Recognition II; Head Pose Estimation: Model-based, NN, Texture Mapping, Focus of Attention; Project 1: Student Presentations, Project 2: Intro; People Detection I; People Detection II; People Detection III (Part-Based Models); Scene Context and Geometry I (Ground-Plane, Hoiem, Leibe)

7 Today: Why pattern recognition, and what is it? Dimensionality reduction: Principal Component Analysis. Classification: Bayes decision theory, k-NN, kd-trees, ball trees.

8 Overview. Learn common patterns based either on a priori knowledge or on statistical information. Why statistics? Manual definition of patterns is often tedious or not obvious, and a statistical approach is important for adaptability to different tasks/domains. Finally: try to mimic human learning and to better understand human learning.

9 Pattern Recognition. Pipeline: sensor → representation pattern → feature selection/extraction → feature pattern → classifier → decision. Supervised learning: data samples with associated class labels. Unsupervised learning: data samples WITHOUT any labels. Semi-supervised / weakly-supervised learning: not a topic in this lecture.

10 Pattern Recognition Stages. 1. Feature extraction: which part of the data is most important? 2. Classification/learning: how can we map the extracted features x to the desired output y ∈ {a, b, c, ...} (supervised learning)?

11 Examples: speech recognition, computer vision. You will see many examples in the course of this semester.

12 What's in the box? Parametric/non-parametric distributions, Support Vector Machines, Neural Networks, Decision Trees, ...

13 Problems and Considerations. Features: How do I encode domain knowledge? Allow invariance (e.g. rotation in the case of images), see the previous lecture. Which part of the data can be discarded because it represents redundant information? How can we reduce the dimensionality, i.e. how can we make the problem as simple as possible, but as complex as necessary?

14 Curse of Dimensionality. Generally: the more dimensions, the more difficult it is for a learning algorithm to extract patterns. More dimensions = more degrees of freedom. (Figure: decision boundary between Class A and Class B.) Defining a linear decision boundary: 2-dim: 3 degrees of freedom; 3-dim: 5 degrees of freedom; 4-dim: 7 degrees of freedom.

15 Dimensionality Reduction. Reduce the dimensionality of the data while retaining the relevant information. Popular techniques: Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Multidimensional Scaling (MDS). Many of these techniques are readily available in MATLAB or Octave.

16 PCA

17 First Steps: Sample Variance. Given a set of samples $(a_1, \ldots, a_n)$ with mean $a'$, an unbiased variance estimator is defined as
$$\mathrm{var}(a) = \frac{1}{n-1} \sum_i (a_i - a')^2.$$
With the zero-mean sequence $(z_1, \ldots, z_n)$, $z_i = a_i - a'$, written as a (row) vector $z$, this becomes
$$\mathrm{var}(a) = \frac{1}{n-1} \sum_i z_i^2 = \frac{1}{n-1} z z^T.$$

18 First Steps: Covariance. Given two sets of samples $(a_1, \ldots, a_n)$ and $(b_1, \ldots, b_n)$ with means $a'$, $b'$, a covariance estimator is defined as
$$\mathrm{cov}(a, b) = \frac{1}{n-1} \sum_i (a_i - a')(b_i - b').$$
With the zero-mean sequences $(c_1, \ldots, c_n)$, $(d_1, \ldots, d_n)$ written as (row) vectors $c$, $d$, this becomes
$$\mathrm{cov}(a, b) = \frac{1}{n-1} \sum_i c_i d_i = \frac{1}{n-1} c\,d^T.$$

19 First Steps: Covariance Matrix. Given n samples $v_i = (a_1, \ldots, a_d)^T$ with means $v_i'$, and d being the dimensionality, a multi-dimensional covariance estimator is defined as
$$\mathrm{cov}(V)_{ij} = \frac{1}{n-1} \sum (v_i - v_i')(v_j - v_j').$$
Written in vectors, with the zero-mean samples collected in the matrix $V$, this is
$$\mathrm{cov}(V) = \frac{1}{n-1} V V^T.$$
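A minimal numerical sketch of these estimators (illustrative Python/NumPy, not part of the slides): center the data per dimension and form $\frac{1}{n-1} V V^T$.
```python
import numpy as np

# Toy data: n = 200 samples of dimensionality d = 3 (rows = dimensions, columns = samples),
# matching the slide's convention cov(V) = 1/(n-1) * V V^T.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 200))

V = X - X.mean(axis=1, keepdims=True)        # subtract the per-dimension mean
cov = (V @ V.T) / (V.shape[1] - 1)           # unbiased covariance estimate, d x d

# Sanity check against NumPy's built-in estimator (np.cov also normalizes by n-1).
assert np.allclose(cov, np.cov(X))
```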

20 Variance Properties. var(a) = cov(a, a); cov(a, b) = 0 iff a and b are completely uncorrelated; cov(V) is a square, symmetric matrix containing the variances on the diagonal and the covariances on the off-diagonals:
$$\mathrm{cov}(V) = \begin{pmatrix} \mathrm{var}(v_1) & \cdots & \mathrm{cov}(v_1, v_n) \\ \vdots & \ddots & \vdots \\ \mathrm{cov}(v_n, v_1) & \cdots & \mathrm{var}(v_n) \end{pmatrix}$$

21 PCA: Toy Example. Try to understand the motion of a spring. Data from three camera sensors (an x,y-position for each camera, i.e. 6-dim. data).

22 Noise and Redundancy. The data contains noise and the data is redundant: 1-dimensional motion vs. 6-dimensional sample data, and the sensor data from the three cameras are highly correlated.

23 Change of Basis. By changing the basis, we may represent the data almost perfectly with a lower number of dimensions. Remember that the data has a mean of zero. (Figure: new basis vectors r_1, r_2.)

24 Change of Basis. A basis change can be written as a matrix multiplication: given the new basis vectors $p_1, \ldots, p_n$, we transform a data sample $x_i$ as $y_i = P x_i$, where the rows of $P$ are $p_1^T, \ldots, p_n^T$; i.e. we are projecting $x_i$ onto the new basis vectors.
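A small sketch of such a basis change (illustrative Python/NumPy, not part of the slides); the rows of P are the new basis vectors.
```python
import numpy as np

# Two orthonormal basis vectors in 2D (a 45-degree rotation), stacked as rows of P.
p1 = np.array([1.0, 1.0]) / np.sqrt(2)
p2 = np.array([-1.0, 1.0]) / np.sqrt(2)
P = np.vstack([p1, p2])

x = np.array([2.0, 1.0])    # a (zero-mean) data sample
y = P @ x                   # coordinates of x in the new basis
print(y)                    # [ 2.12... -0.70...]
```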

25 Reducing the Covariances. Remember the covariance matrix:
$$\mathrm{cov}(V) = \begin{pmatrix} \mathrm{var}(v_1) & \cdots & \mathrm{cov}(v_1, v_n) \\ \vdots & \ddots & \vdots \\ \mathrm{cov}(v_n, v_1) & \cdots & \mathrm{var}(v_n) \end{pmatrix}$$
We have seen that a basis change may reduce the correlation between the different dimensions. Goal of PCA: make the covariance matrix as diagonal as possible.

26 PCA Assumptions. The basis (the principal components) is orthogonal. Linearity: the change of basis is a linear operation; for non-linear problems there is kernel PCA. Mean and variance are sufficient statistics: this holds if the data is distributed according to a Gaussian distribution. Large variances have important dynamics.

27 Finally Solving It. Theorem: the covariance matrix is diagonalized by an orthogonal matrix of its eigenvectors. That is, the principal components of the underlying data are the eigenvectors of the covariance matrix. The higher the eigenvalue, the more variance is captured along that dimension.

28 Eigenspectrum and Energy

29 PCA in Practice. Eigenvectors can be computed by solving the eigenvalue problem Ax = λx. In practice, we most often use a singular value decomposition (SVD). If the data samples are high-dimensional, the covariance matrix can get extremely large: if the samples x_i are images with a resolution of 1600x1200, the covariance matrix would be a 1,920,000 x 1,920,000 matrix, and computing its eigenvectors would be computationally very costly.
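A minimal PCA-via-SVD sketch (illustrative Python/NumPy, not the lecture's code): the left singular vectors of the centered data matrix are the eigenvectors of the covariance matrix, and the squared singular values divided by n-1 are its eigenvalues.
```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 500))                    # e.g. 6-dim spring data, columns = samples

V = X - X.mean(axis=1, keepdims=True)            # center the data
U, s, _ = np.linalg.svd(V, full_matrices=False)  # SVD of the centered data matrix

eigvals = s**2 / (V.shape[1] - 1)    # eigenvalues of cov(V) = V V^T / (n-1)
components = U                       # columns of U are the principal components

k = 2                                # keep the top-k components ...
X_reduced = components[:, :k].T @ V  # ... and project the data onto them (k x n)
```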

30 Algebra to the Rescue. Fortunately, it isn't that bad: with at most n data samples we can obtain at most n eigenvectors with non-zero eigenvalue (one per sample point), i.e. the number of useful eigenvectors is restricted by both the sample dimension and the sample number. Let's look at the following equations:
$$A^T A x = \lambda x \qquad (\lambda \text{ is an eigenvalue of } A^T A)$$
$$A A^T (A x) = A \lambda x = \lambda (A x) \qquad (\lambda \text{ is an eigenvalue of } A A^T)$$
We can therefore compute the eigenvectors of the small matrix $A^T A$ and multiply them by $A$.
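An illustrative check of this trick (Python/NumPy sketch, not part of the slides): for a tall data matrix A with d >> n, compute the eigenvectors of the small n x n matrix A^T A and map them back with A.
```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 10000, 5                        # dimension much larger than the number of samples
A = rng.normal(size=(d, n))            # centered data, one sample per column

small = A.T @ A                        # n x n instead of d x d
eigvals, W = np.linalg.eigh(small)     # columns of W are eigenvectors of A^T A

U = A @ W                              # A w is an eigenvector of A A^T with the same eigenvalue
U /= np.linalg.norm(U, axis=0)         # normalize the mapped eigenvectors

# Verify (A A^T) u = lambda * u without ever forming the d x d matrix explicitly.
for lam, u in zip(eigvals, U.T):
    assert np.allclose(A @ (A.T @ u), lam * u)
```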

31 Summary. PCA can be used to reduce the number of dimensions: find the principal dimensions (which are assumed to have high variance) and discard the non-informative dimensions (redundancy and noise). Extensions: robust PCA (deals with occlusion), incremental PCA, kernel PCA (non-linear).

32 Pattern Classification

33 Bayes Decision Theory. Example: character recognition. Classify characters into the classes $C_1 = a$ and $C_2 = b$, minimizing the probability of an incorrect classification.

34 Priors. Prior probabilities represent the probability of a class occurring. They are often used to encode domain knowledge.

35 Conditional Probability. Given a feature vector x, measuring e.g. the width or height of the character, the conditional probability $p(x \mid C_k)$ measures the probability of observing x when the character is of class $C_k$.

36 Example. Say x = 15. What decision should be made?

37 Example. Say x = 25. What decision should be made?

38 Example. Say x = 20. What decision should be made?

39 Bayes' Theorem. The a-posteriori probability
$$p(C_k \mid x) = \frac{p(x \mid C_k)\, p(C_k)}{p(x)}$$
defines the probability of a class given a specific feature vector x.

40 Decision boundary (figure).

41 Decision Rule. Choose $C_1$ iff
$$p(C_1 \mid x) > p(C_2 \mid x);$$
this is equivalent to
$$p(x \mid C_1)\, p(C_1) > p(x \mid C_2)\, p(C_2),$$
which is equivalent to
$$\frac{p(x \mid C_1)}{p(x \mid C_2)} > \frac{p(C_2)}{p(C_1)}.$$
Special cases: $p(C_1) = p(C_2)$ and $p(x \mid C_1) = p(x \mid C_2)$.
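A tiny numerical sketch of this decision rule (illustrative Python, not part of the slides), with hypothetical Gaussian class-conditional densities for the width feature x; all parameter values are made up.
```python
import math

# Hypothetical class-conditional densities p(x | C_k): Gaussians over the width feature x.
def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def classify(x, prior_a=0.5, prior_b=0.5):
    """Choose C1 = 'a' iff p(x|C1) p(C1) > p(x|C2) p(C2)."""
    p_x_a = gauss(x, mu=15.0, sigma=3.0)   # assumed parameters for class 'a'
    p_x_b = gauss(x, mu=25.0, sigma=3.0)   # assumed parameters for class 'b'
    return "a" if p_x_a * prior_a > p_x_b * prior_b else "b"

print(classify(15))  # -> 'a'
print(classify(25))  # -> 'b'
print(classify(20))  # exactly on the boundary here; the strict '>' falls back to 'b'
```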

42 More Classes. We can generalize this rule to multi-class problems (e.g. all letters of the alphabet): choose the class with the highest a-posteriori probability.

43 Example

44 Generalization with a Loss Function. In some applications we may want to consider the loss of a misclassification, e.g. in medical applications. Loss if classified as healthy despite a disease: $\lambda(\text{healthy} \mid \text{ill})$. Loss if classified as ill despite being healthy: $\lambda(\text{ill} \mid \text{healthy})$. We assume: $\lambda(\text{healthy} \mid \text{ill}) \gg \lambda(\text{ill} \mid \text{healthy})$.

45 Expected Loss. We have an observation x, but we do not know its true class, so we cannot evaluate the loss of a decision $\alpha_i$ directly. Define $\lambda(\alpha_i \mid C_k) = \lambda_{ik}$, with $\alpha_i$ the possible decisions and $C_k$ the possible classes. The expected loss can then be computed in the following manner:
$$R(\alpha_i \mid x) = \sum_k \lambda(\alpha_i \mid C_k)\, p(C_k \mid x).$$
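A small sketch of the resulting minimum-risk decision (illustrative Python/NumPy, not part of the slides), using a hypothetical loss matrix for the medical example.
```python
import numpy as np

# Hypothetical loss matrix lambda_ik: rows = decisions alpha_i, columns = true classes C_k.
# Index 0 = "healthy", 1 = "ill".
loss = np.array([
    [0.0, 100.0],   # decide "healthy": no loss if healthy, large loss if actually ill
    [1.0,   0.0],   # decide "ill": small loss if actually healthy, no loss if ill
])

def decide(posterior):
    """posterior[k] = p(C_k | x); choose the decision with minimum expected loss."""
    risk = loss @ posterior              # R(alpha_i | x) = sum_k lambda_ik * p(C_k | x)
    return int(np.argmin(risk)), risk

decision, risk = decide(np.array([0.9, 0.1]))
print(decision, risk)   # -> 1: even with p(ill|x) = 0.1 the asymmetric loss favors deciding "ill"
```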

46 Example. Let's look at a two-class example: we want to minimize the expected loss, i.e. we choose $\alpha_1$ when $R(\alpha_2 \mid x) > R(\alpha_1 \mid x)$, i.e. when
$$\lambda_{21}\, p(C_1 \mid x) + \lambda_{22}\, p(C_2 \mid x) > \lambda_{11}\, p(C_1 \mid x) + \lambda_{12}\, p(C_2 \mid x).$$

47 Some simple math

48 Linear Discriminant Functions Perceptron Algorithm

49 Linear Discriminant Functions. Separate two classes with a linear hyperplane:
$$y(x) = w^T x + w_0,$$
with w the normal vector of the hyperplane and $w_0$ the offset. Examples: Perceptron, linear SVM.

50 Perceptron Algorithm [Rosenblatt 58]. Precondition: the data set is linearly separable. Goal: find a separating hyperplane. Idea: iteratively improve the solution; only update the solution if the data sample of interest is classified incorrectly.

51 Perceptron Algorithm. 1. Initialize w = 0. 2. Classify a new data sample with the inner product y(x) = sign(w^T x). 3. If correct, then go to step 2; else update w = w - y(x) x. 4. If no errors are left, then done; else go to step 2.
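A compact sketch of this algorithm (illustrative Python/NumPy, not the exercise code), with labels in {-1, +1}; for a misclassified sample the update w + label*x equals the slide's w - y(x) x.
```python
import numpy as np

def perceptron(X, t, max_epochs=100):
    """X: (n, d) samples, t: (n,) labels in {-1, +1}. Converges only for separable data."""
    w = np.zeros(X.shape[1])                 # step 1: initialize w = 0
    for _ in range(max_epochs):
        errors = 0
        for x, label in zip(X, t):
            if label * (w @ x) <= 0:         # misclassified: sign(w^T x) != label
                w = w + label * x            # equals w - y(x) x for a wrong prediction
                errors += 1
        if errors == 0:                      # step 4: no errors left -> done
            break
    return w

# Tiny linearly separable toy set (separable by a line through the origin).
X = np.array([[2.0, 3.0], [1.0, 2.5], [3.0, 1.0], [2.5, 0.5]])
t = np.array([1, 1, -1, -1])
w = perceptron(X, t)
print(w, np.sign(X @ w))   # the predictions should match t
```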

52 Algorithm Visualized

53 Why does this work? Given a misclassified sample x with prediction y(x) = 1, the update is w = w - x. If we now classify the sample again, the new activation is $(w - x)^T x = w^T x - x^T x$. Since $x^T x$ is positive, the next prediction for this data sample will be closer to the correct value. Convergence: the algorithm does not converge if the data is not separable.

54 Wait: we did not consider $w_0$. What happens if the separating hyperplane does not pass through (0,0)? Small trick: choose x to be (n+1)-dimensional with the (n+1)-th dimension always being 1; then $y(x) = w^T x + w_0$ becomes $y(x) = w^T x$, where w now contains $w_0$ as its last component.
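In code, this trick is just appending a constant 1 to every sample (illustrative Python/NumPy, not part of the slides); the perceptron sketch above then learns the offset as well.
```python
import numpy as np

X = np.array([[2.0, 3.0], [1.0, 2.5], [3.0, 1.0], [2.5, 0.5]])
X_aug = np.hstack([X, np.ones((X.shape[0], 1))])  # append the constant-1 feature
# The last component of the weight vector learned on X_aug plays the role of w_0.
```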

55 Instance-Based Learning

56 Instance-based Learning. Learning = storing all training instances. Classification = assigning the target function value to a new instance. Referred to as lazy learning. Examples: template matching, k-nearest neighbor.

57 1-Nearest Neighbor. All instances correspond to points in an n-dimensional Euclidean space. Classification is done by comparing the feature vectors of the different points.

58 3-Nearest Neighbor. Search for the 3 nearest neighbors; typically a majority vote is taken when not all neighbors are from the same class.
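A brute-force k-NN classifier sketch (illustrative Python/NumPy, not part of the slides); for large training sets the neighbor search would be accelerated with a kd-tree or ball tree, as discussed later in the lecture.
```python
import numpy as np
from collections import Counter

def knn_classify(X_train, y_train, x, k=3):
    """Majority vote among the k nearest training samples (Euclidean distance)."""
    dists = np.linalg.norm(X_train - x, axis=1)   # distance to every training point
    nearest = np.argsort(dists)[:k]               # indices of the k closest samples
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]

X_train = np.array([[1.0, 1.0], [1.2, 0.8], [4.0, 4.0], [4.2, 3.9]])
y_train = np.array(["a", "a", "b", "b"])
print(knn_classify(X_train, y_train, np.array([1.1, 1.0]), k=3))  # -> 'a'
```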

59 3-Nearest Neighbor. Decision surface: described by the Voronoi diagram; the Voronoi diagram is the dual of the Delaunay triangulation (computable in O(n log n)).

60 When to Consider Nearest Neighbor? Lots of training data; fewer than 20 attributes (dimensions) per example. Advantages: training is very fast; complex target functions can be learned; no information is lost. Disadvantages: slow at query time; easily fooled by irrelevant attributes.

61 K-Nearest Neighbors. Typically the decision surface is not computed explicitly; instead, for each new test sample we compute the nearest neighbors on the fly. Probabilistic interpretation: estimate the density in a local neighborhood. There are efficient memory-indexing techniques for retrieving the stored training examples: kd-trees [Friedman et al. 1977] and ball trees.

62 KD-Tree for NN Search. Each node contains information about its children and the tightest box that bounds all the data points within the node.

63-69 NN Search by KD Tree (step-by-step figure sequence).

70 Standard kd-tree Construction. Choose the splitting planes by cycling through the dimensions. Example: root node: split in the x-direction; next level: split in the y-direction; next level: split in the z-direction. The position of the splitting plane is chosen as the median of the points (with respect to their coordinates along the axis being used). This typically generates a quite balanced tree.
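A minimal construction sketch following this scheme, i.e. cycling through the dimensions and splitting at the median (illustrative Python, not the reference implementation from the exercise).
```python
class KDNode:
    def __init__(self, point, axis, left=None, right=None):
        self.point, self.axis, self.left, self.right = point, axis, left, right

def build_kdtree(points, depth=0):
    """points: list of equal-length tuples. The splitting axis cycles with the depth."""
    if not points:
        return None
    axis = depth % len(points[0])                   # cycle through the dimensions
    points = sorted(points, key=lambda p: p[axis])  # simple O(n log n) sort; a median search would be O(n)
    median = len(points) // 2                       # split at the median point
    return KDNode(points[median], axis,
                  left=build_kdtree(points[:median], depth + 1),
                  right=build_kdtree(points[median + 1:], depth + 1))

tree = build_kdtree([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
print(tree.point, tree.axis)   # (7, 2) 0: the median in x at the root level
```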

71 Kd-tree Construction. There exist many (sometimes application-specific) heuristics that try to speed up kd-tree creation, e.g. splitting in the dimension of highest variance. Kd-trees can be built incrementally, which is useful for incremental learning (for example when exploring new territory with a robot). Existing libraries: C++: libkdtree++.
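As a ready-made alternative to the C++ library named on the slide, SciPy also ships a kd-tree; using it here is my assumption, it is not referenced in the lecture.
```python
import numpy as np
from scipy.spatial import cKDTree

points = np.random.default_rng(3).normal(size=(1000, 2))
tree = cKDTree(points)                     # build the kd-tree once
dist, idx = tree.query([0.0, 0.0], k=3)    # 3 nearest neighbors of the query point
print(dist, points[idx])
```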

72-77 A kd-tree: levels 1-6 (figure sequence).

78 Complexity. Building a static kd-tree from n points: O(n log n) (O(log n) tree levels, O(n) median search per level). Insertion into a balanced kd-tree: O(log n). Removal from a balanced kd-tree: O(log n). Query of an axis-parallel range in a balanced kd-tree: $O(n^{1-1/d} + k)$, with k the number of reported points and d the dimension of the kd-tree.

79 Problem of Dimensionality. Imagine instances described by 20 attributes, of which only 2 are relevant to the target function. Curse of dimensionality: the nearest neighbor is easily misled when the feature space X is high-dimensional.

80 Ball Trees. Can be constructed in a fashion similar to kd-trees. Ball trees have been shown to be superior to kd-trees in many applications (though there is high variance and dataset dependence); see The Proximity Project [Gray, Lee, Rotella, Moore 2005].

81-85 A ball-tree: levels 1-5 (figure sequence).

86 kNN Discussion. A highly effective inductive inference method for noisy training data and complex target functions. The target function for the whole space may be described as a combination of less complex local approximations. Learning is very simple; classification can be time-consuming if the training data set is large (roughly O(log n) per query with a kd-tree, versus O(n) for a linear scan).

87 End of Lecture
