Traditional clustering fails if:
1 Traditional clustering fails if: -- in the input space the clusters are not linearly separable; -- the distance measure is not adequate; -- the assumptions limit the shape or the number of the clusters. Our solution: 1. map to a high-dimensional space where clusters are linearly separable; 2. minimize a convex objective function to learn an appropriate distance; 3. do mean shift clustering in the learned space.
2 Pipeline: nonlinear input with a few must-link and cannot-link pairs (cluster labels not specified) -> linearly separable kernel space, via the Bregman LogDet divergence -> kernel mean shift. All parameters are determined during processing.
3 Bregman divergence
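The slide's equation did not survive the transcription; for reference, the Bregman divergence generated by a strictly convex, differentiable function φ is

```latex
D_\varphi(x, y) = \varphi(x) - \varphi(y) - \nabla\varphi(y)^\top (x - y)
```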
4 Kullback-Leibler divergence
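This slide's equation is also missing from the transcript; the Kullback-Leibler divergence is the Bregman divergence generated by the negative entropy φ(p) = Σ_i p_i log p_i, which for probability distributions gives

```latex
D_{\mathrm{KL}}(p \,\|\, q) = \sum_i p_i \log \frac{p_i}{q_i}
```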
7 For positive semidefinite matrices, like our kernel matrix with rank r, the optimization is convex and has a global optimum.
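The LogDet matrix divergence used by the method is the Bregman divergence generated by φ(X) = -log det X; for d × d positive definite X and Y it takes the standard form (not transcribed from the slide)

```latex
D_{\ell d}(X, Y) = \operatorname{tr}(X Y^{-1}) - \log\det(X Y^{-1}) - d
```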
8 Kernel Matrix Representation (the matrix of mapped points is d × n).
9 Gaussian kernel from pairwise distances. The σ estimation already uses the Bregman LogDet divergence over the distances of pairs of kernel points; all databases in our experiments have a minimum in σ. The kernel matrix has rank at most n, the number of points. A reduced-rank kernel K with r < n, retaining 99 percent of the original kernel's energy, is used in the rest of the computation.
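A minimal numpy sketch of these two steps, assuming a Gaussian kernel and measuring "energy" as the cumulative eigenvalue mass; the paper's exact σ selection rule is not reproduced here.

```python
import numpy as np

def gaussian_kernel_matrix(X, sigma):
    """K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2)) for the rows of X (n x d)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma ** 2))

def low_rank_kernel(K, energy=0.99):
    """Truncate the spectrum of K to the smallest rank r that keeps
    the requested fraction of the total eigenvalue energy."""
    w, V = np.linalg.eigh(K)           # eigenvalues in ascending order
    w, V = w[::-1], V[:, ::-1]         # reorder to descending
    w = np.clip(w, 0.0, None)          # round-off can produce tiny negatives
    r = int(np.searchsorted(np.cumsum(w) / np.sum(w), energy)) + 1
    K_r = (V[:, :r] * w[:r]) @ V[:, :r].T
    return K_r, r
```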
10 Two examples. Olympic circles [5 classes]: n = 1500, r = 58; ×300 faster (reduced rank vs. full rank). USPS digits [10 classes]: n = 1000, r = 499; ×2 faster (reduced rank vs. full rank).
11 Linear Transformation in the Kernel Space. M and C are the sets of must-link and cannot-link constraint pairs. A symmetric d×d matrix is formed from one constraint pair alone. The value of s is computed from the constraint pair, and the updated kernel matrix is obtained; s = 1 gives the projection onto the null space. Some problems appear; see the paper below. O. Tuzel, F. Porikli, P. Meer: Kernel methods for weakly supervised mean shift clustering. ICCV, 48-55, 2009.
12 Projecting into the null space. The dimensionality of the kernel space reduces by one after each constraint is applied. Null space projection cannot handle cannot-link constraints and is sensitive to labeling errors.
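In kernel coordinates, the null-space projection for one must-link pair (i, j) is the rank-one update K ← K − (Kz)(Kz)ᵀ / (zᵀKz) with z = e_i − e_j; the following short sketch is reconstructed from this description, not code from the paper.

```python
import numpy as np

def null_space_projection(K, i, j, eps=1e-12):
    """Project the kernel space so points i and j coincide (one must-link
    constraint); the rank of K drops by one per applied constraint."""
    z = np.zeros(K.shape[0])
    z[i], z[j] = 1.0, -1.0
    Kz = K @ z
    denom = z @ Kz                     # squared distance of the pair
    if denom < eps:                    # the points already coincide
        return K
    return K - np.outer(Kz, Kz) / denom
```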
13 An example of null space projection. 5×150 points. 4 labeled points per class give 6 pairwise must-link constraints, 6 × 5 = 30 in total, plus one mislabeled constraint.
14 Kernel Learning by Minimizing the LogDet Divergence. The kernel learning takes one constraint pair at a time, must-link or cannot-link, with a soft margin. At each iteration the slack variable is updated. For each constraint, the optimization is solved by Bregman projection based updates. The updates are repeated until convergence to the global minimum. P. Jain et al.: Metric and kernel learning using a linear transformation. JMLR, 13:519-547, 2012. B. Kulis et al.: Low-rank kernel learning with Bregman matrix divergences. JMLR, 10:341-376, 2009.
15 The initial kernel matrix K has rank r < n; the update can then be rewritten in terms of a rank-r factor of K. The scalar variable is computed in each iteration using the kernel matrix and the constraint pair, and the factor is updated using a Cholesky decomposition. P. Jain et al.: Metric and kernel learning using a linear transformation. JMLR, 13:519-547, 2012.
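As an illustration of one soft-margin Bregman projection in kernel form, here is a sketch following the ITML-style update of Jain et al.; the slack bookkeeping (gamma, xi, lam) follows my reading of that line of work and is not transcribed from the slides.

```python
import numpy as np

def bregman_projection(K, i, j, xi, lam, is_must_link, gamma=1.0):
    """One Bregman projection for constraint pair (i, j).

    xi  : slack for this constraint; start at the distance bound
          (upper bound for must-link, lower bound for cannot-link)
    lam : dual variable for this constraint; start at 0
    Returns the updated (K, xi, lam)."""
    z = np.zeros(K.shape[0])
    z[i], z[j] = 1.0, -1.0
    Kz = K @ z
    p = z @ Kz                          # squared distance of the pair in kernel space
    delta = 1.0 if is_must_link else -1.0
    alpha = min(lam, 0.5 * delta * (1.0 / p - gamma / xi))
    beta = delta * alpha / (1.0 - delta * alpha * p)
    xi = gamma * xi / (gamma + delta * alpha * xi)
    lam = lam - alpha
    return K + beta * np.outer(Kz, Kz), xi, lam
```

The full algorithm cycles through the constraints until convergence; the low-rank variant of slide 15 applies the same rank-one update to an r × n factor of K via a Cholesky decomposition instead of updating K directly.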
16 Low-rank kernel learning algorithm.
17 Kernel Learning Using the LogDet Divergence. For very large datasets it is infeasible to learn the entire kernel matrix and to store it in memory. The learned kernel generalizes to out-of-sample points, where x or y or both are out of sample; distances can be computed using the learned kernel function.
18 Kernel Mean Shift Clustering. Nonparametric kernel mean shift iteratively finds the closest mode (maximum) of the p.d.f. estimate. With kernel profile k(x), the weight function is g(u) = -k'(u); each step moves to the new kernel-weighted mean, which is not an input point. The dimension of the mean shift space is at most the rank of the kernel matrix, so mean shift can run in a lower dimension when the learned kernel has low rank.
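As a concrete illustration of the iteration, a generic Gaussian mean shift sketch (not the paper's exact implementation); Y holds the mapped points as rows and h is the bandwidth.

```python
import numpy as np

def mean_shift_mode(Y, y0, h, tol=1e-6, max_iter=500):
    """Iterate the kernel-weighted mean from y0 until it reaches the
    closest mode of the Gaussian k.d.e. over the rows of Y."""
    y = y0.copy()
    for _ in range(max_iter):
        w = np.exp(-np.sum((Y - y) ** 2, axis=1) / (2.0 * h ** 2))  # g(.) weights
        y_new = (w @ Y) / np.sum(w)     # new kernel mean, not an input point
        if np.linalg.norm(y_new - y) < tol:
            break
        y = y_new
    return y
```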
19 Variable Bandwidth Parameter Selection. Only the must-link constraint pairs participate. The whole algorithm is run with different values of k to verify the clustering rule; for this selection alone the entire algorithm has to be rerun. Must-link constraints from 5 labeled points per class (10 pairs per class): Olympic circles 5×10 = 50; USPS digits 10×10 = 100.
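The slide's exact bandwidth rule is only partially recoverable from the transcript; as an illustration of a variable bandwidth, a common choice is the distance to the k-th nearest neighbor, into which the selected k would be plugged.

```python
import numpy as np

def knn_bandwidths(Y, k):
    """Variable bandwidth per point: distance to its k-th nearest neighbor.
    Illustrative only; the slide's exact rule is not fully recoverable."""
    d2 = np.sum((Y[:, None, :] - Y[None, :, :]) ** 2, axis=2)
    d2.sort(axis=1)                    # column 0 is the point itself
    return np.sqrt(d2[:, k])
```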
20 CONTINGENCY TABLE. Abbreviations: True T, False F, Negative N, Positive P.

   Result \ Ground-truth | Not true | True
   Not true              |    TN    |  FN
   True                  |    FP    |  TP

DEFINITIONS
   precision = TP / (TP + FP)    [fraction of returned results that are correct]
   recall    = TP / (TP + FN)    [fraction of the true samples that are returned]
   quality   = TP / (TP + FP + FN)
   F measure = 2 * precision * recall / (precision + recall)
   ROC curve: ordinate, the true positive rate (recall) TP / (TP + FN); abscissa, the false alarm rate FP / (FP + TN).

A threshold decides whether each result is labeled true or false. Changing the threshold rearranges all four true/not-true boxes as a function of the ground truth. Thresholding starts at one extreme where just one sample is returned as true, TP = 1: the precision is one and the recall is close to zero. The other extreme is when the threshold labels all the returned samples as true: TP is then the number of true samples, the precision drops to the fraction of true samples among all returns (close to zero when true samples are rare), and the recall is one.
21 A simple example: the ground truth has 15 items of A, labeled not true, and 5 items of B, labeled true. A threshold returning a single true value gives the contingency table

   Result \ Ground-truth | Not true | True
   Not true              |    15    |   4
   True                  |     0    |   1

and precision = 1 while recall = 0.2. A threshold returning an intermediate result gives the contingency table

   Result \ Ground-truth | Not true | True
   Not true              |     9    |   2
   True                  |     6    |   3

and precision = 0.33 while recall = 0.6. A threshold returning all the results as true gives the contingency table

   Result \ Ground-truth | Not true | True
   Not true              |     0    |   0
   True                  |    15    |   5

and precision = 0.25 while recall = 1.
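These numbers can be checked mechanically; a small Python helper reproducing the three thresholds above from their TP/FP/FN counts:

```python
def precision_recall(tp, fp, fn):
    """precision = TP/(TP+FP), recall = TP/(TP+FN)."""
    return tp / (tp + fp), tp / (tp + fn)

# single true return:  TP=1,  FP=0,  FN=4
print(precision_recall(1, 0, 4))     # (1.0, 0.2)
# intermediate result: TP=3,  FP=6,  FN=2
print(precision_recall(3, 6, 2))     # (0.333..., 0.6)
# everything true:     TP=5,  FP=15, FN=0
print(precision_recall(5, 15, 0))    # (0.25, 1.0)
```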
22 Adjusted Rand Index. A scalar measure to evaluate clustering performance from the clustering output; it is negative if the expected index is larger than the obtained one. TP true positive; TN true negative; FP false positive; FN false negative. Compensates for chance: randomly assigned cluster labels get a low score.
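The slide's formula is not in the transcript; the standard Hubert-Arabie form over a contingency table with cell counts n_ij, row sums a_i, and column sums b_j is

```latex
\mathrm{ARI} =
\frac{\sum_{ij}\binom{n_{ij}}{2} - \Big[\sum_i \binom{a_i}{2}\sum_j \binom{b_j}{2}\Big] \Big/ \binom{n}{2}}
     {\frac{1}{2}\Big[\sum_i \binom{a_i}{2} + \sum_j \binom{b_j}{2}\Big] - \Big[\sum_i \binom{a_i}{2}\sum_j \binom{b_j}{2}\Big] \Big/ \binom{n}{2}}
```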
24 Trade-off Parameter. Measured with the Adjusted Rand (AR) index for the entire algorithm. Pairwise constraints from 15 labeled points per class (105 pairs per class). Learning with half the data, testing with the other half: two-fold cross-validation. Must-link constraints: Olympic circles 5×105 = 525; USPS digits 10×105 = 1050.
25 A few examples. The algorithm generalizes to out-of-sample points not taken into consideration in the kernel matrix optimization.
26 Comparison with the Semi-Supervised Kernel Mean Shift Clustering (SKMS). 1. Constrained spectral clustering (E2CP); 2. semi-supervised kernel k-means (SSKK) -- σ is given as input in these two. After the LogDet Bregman divergence learning: 3. kernel k-means (Kkm); 4. kernel spectral clustering (KSC). All four methods need the number of clusters as input. Zhiwu Lu, Horace H.S. Ip: Constrained spectral clustering via exhaustive and efficient constraint propagation. ECCV, 1-14, 2010. B. Kulis, S. Basu, I.S. Dhillon, R.J. Mooney: Semi-supervised graph clustering: A kernel approach. Machine Learning, 74:1-22, 2009.
27 Experimental Evaluation. Two synthetic data sets: Olympic circles (5 classes) and concentric circles (10 classes) -- nonlinearly separable, can have intersecting boundaries. Four real data sets: small number of classes -- USPS (10 classes), MIT Scene (8 classes); large number of classes -- PIE faces (68 classes), Caltech objects (50 classes). 50 independent runs; average parameter values.
28 50 runs, most frequent parameters. Synthetic Example 1: Olympic Circles. 5×300 points along the five circles; 25 points per class selected at random. (Figure panels: original, sample, result.) Experiment 1: varied the number of labeled points [5, 7, 10, 12, 15, 17, 20, 25] from each class to generate pairwise constraints. Max. number of labeled points per class: 8.33%.
30 50 runs, most frequent parameters. Synthetic Example 2: Concentric Circles. 100 points along each of the ten concentric circles. Experiment 1: varied the number of labeled points [5, 7, 10, 12, 15, 17, 20, 25] from each class to generate pairwise constraints. Experiment 2: 25 labeled points per class; introduce labeling errors by swapping similarity pairs with dissimilarity pairs. Max. number of labeled points per class: 25%.
31 50 runs, most frequent parameters. Real Example 1: USPS Digits. Ten classes with 1100 points per class, a total of 11,000 points. 100 points per class give a 1000×1000 kernel matrix K. Varied the number of labeled points [5, 7, 10, 12, 15, 17, 20, 25] from each class to generate pairwise constraints. Cluster all 11,000 data points by generalizing to the remaining 10,000 points. Max. number of labeled points per class: 2.27%.
32 50 runs, most frequent parameters. Real Example 2: MIT Scene. Eight classes with 2688 points in total; the number of samples per class ranges between 260 and 410. 100 points per class give an 800×800 kernel matrix K. Varied the number of labeled points [5, 7, 10, 12, 15, 17, 20] from each class to generate pairwise constraints. Cluster all 2688 data points by generalizing to the remaining 1888 points. Max. number of labeled points per class: 7.44%.
33 Is it really worth doing it per image? See the three images misclassified from one class.
34 50 runs, most frequent parameters. Real Example 3: PIE Faces. 68 subjects with 21 samples per subject; 1428×1428 full initial kernel matrix K. Varied the number of labeled points [3, 4, 5, 6, 7] from each class to generate pairwise constraints. Obtained perfect clustering for more than 5 labeled points per class. Max. number of labeled points per class: 33%.
35 50 runs, most frequent parameters. Real Example 4: Caltech 101 (subset). 50 categories with the number of samples ranging between 31 and 40 points per class; 1959×1959 full kernel matrix K. Varied the number of labeled points [5, 7, 10, 12, 15] from each class to generate pairwise constraints. Max. number of labeled points per class: 38%.
36 Conclusion. Recovers arbitrarily shaped nonparametric clusters; performs well on databases with a large number of clusters... within the convention of clustering: -- generalizes to out-of-sample points; -- the number of clusters is not an input parameter.