CS 188: Artificial Intelligence Fall 2008

Similar documents
Case-Based Reasoning. CS 188: Artificial Intelligence Fall Nearest-Neighbor Classification. Parametric / Non-parametric.

Feature Extractors. CS 188: Artificial Intelligence Fall Nearest-Neighbor Classification. The Perceptron Update Rule.

CS 343: Artificial Intelligence

Kernels and Clustering

Classification: Feature Vectors

CSE 573: Artificial Intelligence Autumn 2010

Announcements. CS 188: Artificial Intelligence Spring Classification: Feature Vectors. Classification: Weights. Learning: Binary Perceptron

Feature Extractors. CS 188: Artificial Intelligence Fall Some (Vague) Biology. The Binary Perceptron. Binary Decision Rule.

CSE 40171: Artificial Intelligence. Learning from Data: Unsupervised Learning

CS 343H: Honors AI. Lecture 23: Kernels and clustering 4/15/2014. Kristen Grauman UT Austin

Announcements. CS 188: Artificial Intelligence Spring Generative vs. Discriminative. Classification: Feature Vectors. Project 4: due Friday.

Unsupervised Learning: Clustering

Hierarchical Clustering 4/5/17

1 Case study of SVM (Rob)

Clustering Lecture 14

BBS654 Data Mining. Pinar Duygulu. Slides are adapted from Nazli Ikizler

ECG782: Multidimensional Digital Signal Processing

Gene Clustering & Classification

Introduction to Machine Learning. Xiaojin Zhu

Today's lecture. Clustering and unsupervised learning. Hierarchical clustering. K-means, K-medoids, VQ

K Nearest Neighbor Wrap Up. K-Means Clustering. Slides adapted from Prof. Carpuat

Clustering Lecture 8. David Sontag New York University. Slides adapted from Luke Zettlemoyer, Vibhav Gogate, Carlos Guestrin, Andrew Moore, Dan Klein

Midterm Examination CS540-2: Introduction to Artificial Intelligence

Clustering. CE-717: Machine Learning Sharif University of Technology Spring Soleymani

CS 340 Lec. 4: K-Nearest Neighbors

Machine Learning. Unsupervised Learning. Manfred Huber

Unsupervised Learning

Network Traffic Measurements and Analysis

CS 229 Midterm Review

Distribution-free Predictive Approaches

COMP 551 Applied Machine Learning Lecture 13: Unsupervised learning

Cluster Analysis: Agglomerate Hierarchical Clustering

Idea. Found boundaries between regions (edges). Didn't return the actual region

CS 2750: Machine Learning. Clustering. Prof. Adriana Kovashka University of Pittsburgh January 17, 2017

CSE 158. Web Mining and Recommender Systems. Midterm recap

CS5670: Computer Vision

Slides for Data Mining by I. H. Witten and E. Frank

Based on Raymond J. Mooney's slides

What's to come. There are a few more topics we will cover on supervised learning

Machine Learning using MapReduce

Lecture-17: Clustering with K-Means (Contd: DT + Random Forest)

Mathematics of Data. INFO-4604, Applied Machine Learning University of Colorado Boulder. September 5, 2017 Prof. Michael Paul

Classification and K-Nearest Neighbors

CSE 5243 INTRO. TO DATA MINING

K-Nearest Neighbors. Jia-Bin Huang. Virginia Tech Spring 2019 ECE-5424G / CS-5824

Clustering CS 550: Machine Learning

Introduction to Machine Learning Prof. Anirban Santara Department of Computer Science and Engineering Indian Institute of Technology, Kharagpur

Clustering: K-means and Kernel K-means

Clustering. CS294 Practical Machine Learning Junming Yin 10/09/06

Supervised vs. Unsupervised Learning

Classification. Vladimir Curic. Centre for Image Analysis Swedish University of Agricultural Sciences Uppsala University

Working with Unlabeled Data Clustering Analysis. Hsiao-Lung Chan Dept Electrical Engineering Chang Gung University, Taiwan

Clustering & Classification (chapter 15)

Nearest Neighbor Classification. Machine Learning Fall 2017

Clustering. So far in the course. Clustering. Clustering. Subhransu Maji. CMPSCI 689: Machine Learning. dist(x, y) = ||x − y||₂²

Supervised and Unsupervised Learning (II)

CSE 5243 INTRO. TO DATA MINING

CS6716 Pattern Recognition

Clustering Color/Intensity. Group together pixels of similar color/intensity.

K-Means Clustering 3/3/17

Clustering Results. Result List Example. Clustering Results. Information Retrieval

Unsupervised Learning. Presenter: Anil Sharma, PhD Scholar, IIIT-Delhi

Clustering and Dimensionality Reduction. Stony Brook University CSE545, Fall 2017

Unsupervised Learning and Clustering

DS504/CS586: Big Data Analytics Big Data Clustering Prof. Yanhua Li

Clustering. Subhransu Maji. CMPSCI 689: Machine Learning. 2 April April 2015

Search Engines. Information Retrieval in Practice

Linear Regression and K-Nearest Neighbors 3/28/18

Machine Learning. Classification

Midterm Examination CS 540-2: Introduction to Artificial Intelligence

INF4820, Algorithms for AI and NLP: Evaluating Classifiers Clustering

Introduction to Artificial Intelligence

5/15/16. Computational Methods for Data Analysis. Massimo Poesio UNSUPERVISED LEARNING. Clustering. Unsupervised learning introduction

Supervised Learning: Nearest Neighbors

Clustering Lecture 3: Hierarchical Methods

Object Recognition. Lecture 11, April 21 st, Lexing Xie. EE4830 Digital Image Processing

Data Mining. 3.5 Lazy Learners (Instance-Based Learners) Fall Instructor: Dr. Masoud Yaghini. Lazy Learners

Lecture 9: Support Vector Machines

Machine Learning and Data Mining. Clustering. (adapted from) Prof. Alexander Ihler

CS 188: Artificial Intelligence Fall 2008

INF4820 Algorithms for AI and NLP. Evaluating Classifiers Clustering

Exploratory Analysis: Clustering

Supervised Learning: K-Nearest Neighbors and Decision Trees

CS 8520: Artificial Intelligence. Machine Learning 2. Paula Matuszek Fall, CSC 8520 Fall Paula Matuszek

Clustering COMS 4771

Today's topic CS347. Results list clustering example. Why cluster documents. Clustering documents. Lecture 8 May 7, 2001 Prabhakar Raghavan

COMS 4771 Clustering. Nakul Verma

Machine Learning in Biology

Computational Statistics The basics of maximum likelihood estimation, Bayesian estimation, object recognitions

CSSE463: Image Recognition Day 21

INF 4300 Classification III Anne Solberg The agenda today:

Administrative. Machine learning code. Supervised learning (e.g. classification). Machine learning: Unsupervised learning. BANANAS APPLES

Image classification Computer Vision Spring 2018, Lecture 18

Topics in Machine Learning

Machine Learning Techniques for Data Mining

CS178: Machine Learning and Data Mining. Complexity & Nearest Neighbor Methods

International Journal of Innovative Research in Computer and Communication Engineering

Semi-supervised Learning

Segmentation Computer Vision Spring 2018, Lecture 27

Unsupervised Data Mining: Clustering. Izabela Moise, Evangelos Pournaras, Dirk Helbing

Transcription:

CS 188: Artificial Intelligence, Fall 2008
Lecture 25: Kernels and Clustering
12/2/2008
Dan Klein, UC Berkeley

Case-Based Reasoning

Similarity for classification:
- Case-based reasoning: predict an instance's label using similar instances
- Nearest-neighbor classification:
  - 1-NN: copy the label of the most similar data point
  - K-NN: let the k nearest neighbors vote (have to devise a weighting scheme)
- Key issue: how to define similarity
- Trade-off: small k gives relevant neighbors; large k gives smoother functions
- Sound familiar?

[DEMO] http://www.cs.cmu.edu/~zhuxj/courseproject/knndemo/knn.html
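As a concrete illustration of the voting rule above, here is a minimal k-NN sketch in plain Python. The helper names and the unweighted majority vote are our own choices; the slide only notes that some weighting scheme is needed.

    from collections import Counter
    import math

    def euclidean(x, y):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

    def knn_predict(train, query, k=1):
        # train: list of (feature_vector, label) pairs
        neighbors = sorted(train, key=lambda ex: euclidean(ex[0], query))[:k]
        votes = Counter(label for _, label in neighbors)
        return votes.most_common(1)[0][0]

    data = [((0, 0), 'a'), ((0, 1), 'a'), ((5, 5), 'b'), ((6, 5), 'b')]
    print(knn_predict(data, (1, 0), k=1))  # 'a': copies the single nearest label
    print(knn_predict(data, (4, 4), k=3))  # 'b': two of the three neighbors vote 'b'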

Parametric / Non-parametric

Parametric models:
- Fixed set of parameters
- More data means better settings

Non-parametric models:
- Complexity of the classifier increases with data
- Better in the limit, often worse in the non-limit

(K)NN is non-parametric.

[Figure: "Truth" vs. nearest-neighbor fits with 2, 10, 100, and 10,000 examples]

Nearest-Neighbor Classification

Nearest neighbor for digits:
- Take a new image
- Compare to all training images
- Assign based on the closest example

Encoding: an image is a vector of pixel intensities.

What's the similarity function?
- Dot product of two image vectors?
- Usually normalize the vectors so that ||x|| = 1
- min = 0 (when?), max = 1 (when?)

Basic Similarity

Many similarities are based on feature dot products:

    sim(x, y) = f(x) · f(y) = Σ_i f_i(x) f_i(y)

If the features are just the pixels:

    sim(x, y) = x · y = Σ_i x_i y_i

Note: not all similarities are of this form.
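A sketch of the pixel dot-product similarity with the normalization from the previous slide (the helper names are our own). Normalizing both vectors to unit length puts the similarity in [0, 1] for non-negative intensities: 0 when the images share no "on" pixels, 1 when one image is a positive multiple of the other.

    import math

    def normalize(x):
        norm = math.sqrt(sum(v * v for v in x))
        return [v / norm for v in x] if norm > 0 else list(x)

    def similarity(x, y):
        x, y = normalize(x), normalize(y)
        return sum(a * b for a, b in zip(x, y))

    img_a = [0.0, 0.9, 0.8, 0.1]  # toy 4-pixel "images"
    img_b = [0.0, 0.8, 0.9, 0.2]
    print(similarity(img_a, img_b))  # close to 1: similar intensity patterns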

Invariant Metrics

Better distances use knowledge about vision.
Invariant metrics: similarities are invariant under certain transformations (rotation, scaling, translation, stroke thickness).
E.g. a digit image is 16 x 16 = 256 pixels: a point in 256-dimensional space.
Small similarity in R^256 (why?)
How do we incorporate invariance into similarities?

This and the next few slides adapted from Xiao Hu, UIUC

Rotation Invariant Metrics

Each example is now a curve in R^256: the set of points traced out as the image is rotated.
Rotation-invariant similarity:

    s'(x, y) = max_r s(r(x), r(y))

i.e. the highest similarity between points on the two images' rotation curves.

Template Deformation

Deformable templates:
- An ideal version of each category
- Best fit to the image using min variance
- Cost for high distortion of the template
- Cost for image points being far from the distorted template

Used in many commercial digit recognizers.
Examples from [Hastie 94]

Recap: Classification

Classification systems:
- Supervised learning
- Make a rational prediction given evidence
- We've seen several methods for this
- Useful when you have labeled data (or can get it)

Clustering

Clustering systems:
- Unsupervised learning
- Detect patterns in unlabeled data
  - E.g. group emails or search results
  - E.g. find categories of customers
  - E.g. detect anomalous program executions
- Useful when you don't know what you're looking for
- Requires data, but no labels
- Often get gibberish

Clustering

Basic idea: group together similar instances.
Example: 2D point patterns.
What could "similar" mean? One option: small (squared) Euclidean distance:

    dist(x, y) = ||x − y||²

K-Means

An iterative clustering algorithm:
- Pick K random points as cluster centers (means)
- Alternate:
  - Assign each data instance to the closest mean
  - Move each mean to the average of its assigned points
- Stop when no point's assignment changes
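A minimal sketch of this loop in plain Python (the function and variable names are our own). Initialization here is "pick K random points", as on the slide; real systems use smarter seeding, which we don't show.

    import random

    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    def kmeans(points, k, seed=0):
        rng = random.Random(seed)
        means = rng.sample(points, k)          # pick K random points as means
        assignments = None
        while True:
            # Phase I: assign each point to its closest mean.
            new_assign = [min(range(k), key=lambda j: dist2(p, means[j]))
                          for p in points]
            if new_assign == assignments:      # no assignment changed: stop
                return means, assignments
            assignments = new_assign
            # Phase II: move each mean to the average of its assigned points.
            for j in range(k):
                cluster = [p for p, a in zip(points, assignments) if a == j]
                if cluster:                    # leave an empty cluster's mean alone
                    means[j] = tuple(sum(c) / len(cluster) for c in zip(*cluster))

    pts = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 8), (8, 9)]
    means, assign = kmeans(pts, k=2)
    print(means, assign)  # with well-separated blobs like these, one mean per blob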

K-Means Example

[Figure: K-means iterations on 2D example points]

K-Means as Optimization

Consider the total distance to the means:

    φ(a, c) = Σ_i dist(x_i, c_{a_i})

where the x_i are the points, the a_i are the assignments, and the c_j are the means.

Each iteration reduces φ. There are two stages in each iteration:
- Update assignments: fix the means c, change the assignments a
- Update means: fix the assignments a, change the means c

Phase I: Update Assignments

For each point, re-assign it to the closest mean:

    a_i = argmin_k dist(x_i, c_k)

This can only decrease the total distance φ!

Phase II: Update Means

Move each mean to the average of its assigned points:

    c_k = (1 / |{i : a_i = k}|) Σ_{i : a_i = k} x_i

This also can only decrease the total distance. (Why?)
Fun fact: the point y with minimum squared Euclidean distance to a set of points {x} is their mean.
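Why the fun fact holds (a standard one-line calculus argument, not transcribed from the slide): the total squared distance is smooth and convex in y, so set its gradient to zero.

\[
\nabla_y \sum_{i=1}^{n} \lVert x_i - y \rVert^2 \;=\; \sum_{i=1}^{n} 2\,(y - x_i) \;=\; 0
\quad\Longrightarrow\quad
y \;=\; \frac{1}{n}\sum_{i=1}^{n} x_i .
\]

This is also exactly why the Phase II update can only decrease φ.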

Initialization

K-means is non-deterministic:
- It requires initial means
- It does matter what you pick!
- What can go wrong?
- Various schemes exist for preventing this kind of thing: variance-based split/merge, initialization heuristics

K-Means Getting Stuck

A local optimum:

[Figure: a clustering where K-means has converged to a poor local optimum]

Why doesn't this work out like the earlier example, with the purple taking over half the blue?

K-Means Questions

- Will K-means converge? To a global optimum?
- Will it always find the true patterns in the data? What if the patterns are very, very clear?
- Will it find something interesting?
- Do people ever use it?
- How many clusters to pick?

Clustering for Segmentation

A quick taste of a simple vision algorithm.
Idea: break images into manageable regions for visual processing (object recognition, activity detection, etc.)

http://www.cs.washington.edu/research/imagedatabase/demo/kmcluster/

Representing Pixels

Basic representation of pixels: a 3-dimensional color vector <r, g, b>, with r, g, b in [0, 1].
What will happen if we cluster the pixels in an image using this representation?

Improved representation for segmentation: a 5-dimensional vector <r, g, b, x, y>, with x in [0, M] and y in [0, N].
- Bigger M, N makes position more important
- How does this change the similarities?

Note: real vision systems use more sophisticated encodings which can capture intensity, texture, shape, and so on.
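A sketch of building the 5-D pixel features above (the helper is our own; the image is assumed to be a nested list of (r, g, b) tuples with channels already in [0, 1]):

    def pixel_features(image, position_scale=1.0):
        # position_scale plays the role of M and N on the slide: scaling the
        # coordinates up makes position matter more under Euclidean distance.
        features = []
        for y, row in enumerate(image):
            for x, (r, g, b) in enumerate(row):
                features.append((r, g, b, x * position_scale, y * position_scale))
        return features

    tiny = [[(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
            [(0.0, 0.0, 1.0), (1.0, 1.0, 1.0)]]
    print(pixel_features(tiny, position_scale=0.5))

These vectors can be fed directly to the K-means sketch above to get a crude segmentation.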

K-Means Segmentation

Results depend on initialization! Why?
Note: the best systems use graph segmentation algorithms.

Other Uses of K-Means

- Speech recognition: can be used to quantize wave slices into a small number of types (SOTA: work with multivariate continuous features)
- Document clustering: detect similar documents on the basis of shared words (SOTA: use probabilistic models which operate on topics rather than words)

Agglomerative Clustering

Agglomerative clustering:
- First merge very similar instances
- Incrementally build larger clusters out of smaller clusters

Algorithm:
- Maintain a set of clusters
- Initially, each instance is in its own cluster
- Repeat:
  - Pick the two closest clusters
  - Merge them into a new cluster
- Stop when there's only one cluster left

Produces not one clustering, but a family of clusterings represented by a dendrogram.
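A minimal sketch of this loop in plain Python (our own names throughout). "Closest" here is the closest pair (single-link), one of the options on the next slide; the merges are recorded as a simple dendrogram.

    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    def single_link(c1, c2):
        # Cluster distance = distance of the closest pair of members.
        return min(dist2(p, q) for p in c1 for q in c2)

    def agglomerate(points):
        clusters = [[p] for p in points]       # each instance in its own cluster
        dendrogram = []
        while len(clusters) > 1:
            # Pick the two closest clusters...
            i, j = min(((a, b) for a in range(len(clusters))
                        for b in range(a + 1, len(clusters))),
                       key=lambda ab: single_link(clusters[ab[0]], clusters[ab[1]]))
            # ...merge them into a new cluster, and record the merge.
            merged = clusters[i] + clusters[j]
            dendrogram.append((clusters[i], clusters[j]))
            clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
            clusters.append(merged)
        return dendrogram                      # the family of merges

    print(agglomerate([(0, 0), (0, 1), (5, 5), (5, 6)]))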

Agglomerative Clustering

How should we define "closest" for clusters with multiple elements? Many options (the first three are written out below):
- Closest pair (single-link clustering)
- Farthest pair (complete-link clustering)
- Average of all pairs
- Distance between centroids (broken)
- Ward's method (my pick; like k-means)

Different choices create different clustering behaviors.
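For reference, the first three options for clusters A and B with base distance d (standard definitions, not transcribed from the slide):

\[
d_{\text{single}}(A,B) = \min_{a\in A,\,b\in B} d(a,b),\qquad
d_{\text{complete}}(A,B) = \max_{a\in A,\,b\in B} d(a,b),\qquad
d_{\text{avg}}(A,B) = \frac{1}{\lvert A\rvert\,\lvert B\rvert}\sum_{a\in A}\sum_{b\in B} d(a,b).
\]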

Collaborative Filtering

Ever wonder how online merchants decide what products to recommend to you?
- Simplest idea: recommend the most popular items to everyone. Not entirely crazy! (Why?)
- Can do better if you know something about the customer (e.g. what they've bought)
- Better idea: recommend items that similar customers bought

A popular technique: collaborative filtering
- Define a similarity function over customers (how?)
- Look at purchases made by people with high similarity
- Trade-off: relevance of the comparison set vs. confidence in the predictions
- How can this go wrong?

[Figure: customers plotted by similarity, labeled "You are here"]
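A toy user-based collaborative-filtering sketch (our own design, not from the lecture): similarity between customers is the overlap of their purchase sets, and we recommend what the most similar customer bought.

    def customer_similarity(bought_a, bought_b):
        # Jaccard overlap of purchase sets -- one possible answer to "how?".
        union = bought_a | bought_b
        return len(bought_a & bought_b) / len(union) if union else 0.0

    def recommend(purchases, target):
        others = [u for u in purchases if u != target]
        best = max(others,
                   key=lambda u: customer_similarity(purchases[target], purchases[u]))
        return purchases[best] - purchases[target]  # their items you don't have yet

    purchases = {
        'you':   {'book', 'lamp'},
        'alice': {'book', 'lamp', 'desk'},
        'bob':   {'game', 'tv'},
    }
    print(recommend(purchases, 'you'))  # {'desk'}: alice is the most similar customer

Copying from a single nearest neighbor keeps the comparison set maximally relevant but gives low-confidence predictions; averaging over more neighbors trades relevance for confidence, exactly the trade-off on the slide.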