Recognition

The Margaret Thatcher Illusion, by Peter Thompson

Readings:
C. Bishop, Neural Networks for Pattern Recognition, Oxford University Press, 1998, Chapter 1.
Szeliski, Chapter 14.2.1 (eigenfaces).

What do we mean by recognition?
Verification: is that a lamp?
(The next 15 slides are adapted from Li, Fergus, & Torralba's excellent short course on category and object recognition.)

Detection: are there people?
Identification: is that Potala Palace?
Object categorization.
Scene and context categorization.
(Figure: a street scene labeled with tree, banner, mountain, building, outdoor, city, street lamp, people, vendor.)

Applications: Computational photography.

Applications: Assisted driving.
Pedestrian and car detection, lane detection (figure: detected pedestrians and cars annotated with their distances in meters).
Collision warning systems with adaptive cruise control, lane departure warning systems, rear object detection systems.

Applications: Image search.

Challenges: viewpoint variation (Michelangelo, 1475-1564).

Challenges: illumination variation.
Challenges: occlusion (slide credit: S. Ullman; Magritte, 1957).
Challenges: scale.
Challenges: deformation (Xu Beihong, 1943).

Challenges: background clutter.
Challenges: intra-class variation (Klimt, 1913).

Face detection
How do we tell if a face is present? One simple method: skin detection.
Skin pixels have a distinctive range of colors, corresponding to region(s) in RGB color space (for visualization, only the R and G components are shown on the slide).

Skin classifier
A pixel X = (R,G,B) is skin if it is in the skin region. But how do we find this region?

Skin detection
Learn the skin region from examples:
- Manually label pixels in one or more training images as skin or not skin.
- Plot the training data in RGB space (skin pixels shown in orange, non-skin pixels in blue). Some skin pixels may fall outside the region and some non-skin pixels inside it. Why?

Skin classifier
Given X = (R,G,B), how do we determine whether it is skin or not?
- Nearest neighbor: find the labeled pixel closest to X (a small code sketch of this option follows after this slide).
- Find a plane or curve that separates the two classes; a popular approach is the Support Vector Machine (SVM).
- Data modeling: fit a model (curve, surface, or volume) to each class. The probabilistic version fits a probability density/distribution model to each class.

Probabilistic skin classification
Basic probability: X is a random variable, and P(X) is the probability that X takes a certain value. P is called a PDF (probability distribution/density function); a 2D PDF is a surface, a 3D PDF is a volume, and X may be continuous or discrete.
Conditional probability P(X | Y): the probability of X given that we already know Y.
Now we can model uncertainty: each pixel has a probability of being skin or not skin.
Skin classifier: given X = (R,G,B), choose the interpretation of highest probability, i.e. set X to be a skin pixel if and only if P(skin | X) > P(~skin | X). Where do we get P(skin | X) and P(~skin | X)?
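As a concrete illustration of the nearest-neighbor option above, here is a minimal sketch in Python/NumPy. The tiny table of labeled training pixels is an assumption made up for illustration; it is not data from the lecture.

```python
import numpy as np

# Hypothetical labeled training pixels in RGB space:
# rows are (R, G, B) values in [0, 255]; labels are 1 = skin, 0 = not skin.
train_pixels = np.array([[220, 170, 140],   # skin
                         [200, 150, 120],   # skin
                         [ 60, 120,  60],   # foliage (not skin)
                         [ 30,  40, 200]],  # sky (not skin)
                        dtype=float)
train_labels = np.array([1, 1, 0, 0])

def classify_pixel_nn(x):
    """Nearest-neighbor skin classifier: give x = (R, G, B) the label of the
    closest training pixel in RGB space."""
    x = np.asarray(x, dtype=float)
    dists = np.linalg.norm(train_pixels - x, axis=1)   # Euclidean distance to each labeled pixel
    return train_labels[np.argmin(dists)]              # label of the nearest neighbor

print(classify_pixel_nn((210, 160, 130)))  # -> 1 (skin)
print(classify_pixel_nn((50, 100, 70)))    # -> 0 (not skin)
```

Brute-force search over every labeled pixel gets expensive for large training sets, which is one motivation for the model-based (probabilistic) approaches that follow.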

Learning conditional PDFs
We can calculate P(R | skin) from a set of training images. It is simply a histogram over the pixels in the training images: each bin R_i contains the proportion of skin pixels with color R_i.
This doesn't work as well in higher-dimensional spaces. Why not? Approach: fit parametric PDF functions; a common choice is a rotated Gaussian, with a center (mean) and a covariance whose orientation and size are defined by its eigenvectors and eigenvalues.
But P(R | skin) isn't quite what we want. Why not? To determine whether a pixel is skin, we want P(skin | R), not P(R | skin). How can we get it?

Bayes rule
In terms of our problem:
P(skin | R) = P(R | skin) P(skin) / P(R)
P(R | skin) is what we measure (the likelihood), P(skin) is domain knowledge (the prior), P(skin | R) is what we want (the posterior), and P(R) is the normalization term.

Bayesian estimation
What could we use for the prior P(skin)?
- Domain knowledge: P(skin) may be larger if we know the image contains a person; for a portrait, P(skin) may be higher for pixels in the center.
- Learn the prior from the training set. How? P(skin) may be the proportion of skin pixels in the training set.
Bayesian estimation = minimize the probability of misclassification. The goal is to choose the label (skin or ~skin) that maximizes the posterior; this is called Maximum A Posteriori (MAP) estimation.
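A minimal sketch of the histogram-plus-Bayes-rule pipeline described above, using only the R channel as the slides do. The synthetic training values, the 32-bin histogram, and the learned prior are illustrative assumptions, not values from the lecture.

```python
import numpy as np

BINS = 32  # number of histogram bins over R in [0, 255] (an arbitrary choice)

# Hypothetical training data: R values of skin and non-skin pixels.
rng = np.random.default_rng(0)
skin_R    = rng.normal(210, 15, size=5000).clip(0, 255)   # skin pixels: mostly bright in R
nonskin_R = rng.uniform(0, 255, size=20000)               # everything else: spread over R

# Likelihoods P(R | skin) and P(R | ~skin): normalized histograms over the training pixels.
p_R_given_skin, edges = np.histogram(skin_R, bins=BINS, range=(0, 255), density=True)
p_R_given_nonskin, _  = np.histogram(nonskin_R, bins=BINS, range=(0, 255), density=True)

# Prior P(skin): the proportion of skin pixels in the training set, as suggested on the slide.
p_skin = len(skin_R) / (len(skin_R) + len(nonskin_R))

def map_is_skin(R):
    """MAP decision: skin iff P(R|skin) P(skin) > P(R|~skin) P(~skin).
    The normalization term P(R) is common to both sides and can be dropped."""
    b = int(np.clip(np.searchsorted(edges, R, side="right") - 1, 0, BINS - 1))  # histogram bin of R
    return p_R_given_skin[b] * p_skin > p_R_given_nonskin[b] * (1.0 - p_skin)

print(map_is_skin(210))  # likely True: high likelihood under the skin model
print(map_is_skin(80))   # likely False
```

With a uniform prior the decision reduces to comparing the two likelihoods, which is the Maximum Likelihood case discussed next.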

Bayesian estimation (continued)
Suppose the prior is uniform: P(skin) = P(~skin) = 0.5. In this case, maximizing the posterior is equivalent to maximizing the likelihood: choose skin if and only if P(R | skin) > P(R | ~skin). This is called Maximum Likelihood (ML) estimation.

Skin detection results
(Figure: example images with the corresponding likelihood and unnormalized posterior maps.)

General classification
The same procedure applies in more general circumstances: more than two classes, more than one dimension.
Example: face detection. Here, X is an image region, and the dimension is the number of pixels; each face can be thought of as a point in a high-dimensional space. Classification can be expensive: a big search problem (e.g., nearest neighbors) or storing large PDFs.
H. Schneiderman, T. Kanade. "A Statistical Method for 3D Object Detection Applied to Faces and Cars." IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2000). http://www-2.cs.cmu.edu/afs/cs.cmu.edu/user/hws/www/cvpr00.pdf

Linear subspaces
Suppose the data points are 2D points lying roughly along a line. Idea: fit a line; the classifier measures distance to the line. Convert x into v1, v2 coordinates.
What does the v2 coordinate measure? Distance to the line; use it for classification (it is near 0 for the orange points).
What does the v1 coordinate measure? Position along the line; use it to specify which orange point it is.
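A small numerical sketch of the 2D linear-subspace idea above: the toy point cloud is an assumption, and v1 and v2 are taken as eigenvectors of the points' covariance matrix, anticipating the derivation on the next slides.

```python
import numpy as np

# Hypothetical 2D data points lying roughly along a line (the "orange points").
rng = np.random.default_rng(1)
t = rng.uniform(-5, 5, size=200)
pts = np.stack([t, 0.5 * t], axis=1) + rng.normal(0, 0.05, size=(200, 2))

mean = pts.mean(axis=0)
A = np.cov(pts - mean, rowvar=False)      # 2x2 covariance (scatter) matrix of the points
evals, evecs = np.linalg.eigh(A)          # eigenvalues in ascending order
v2 = evecs[:, 0]                          # smallest eigenvalue: direction across the line
v1 = evecs[:, 1]                          # largest eigenvalue: direction along the line

def to_subspace_coords(x):
    """Convert a 2D point x into (v1, v2) coordinates relative to the mean."""
    d = np.asarray(x, dtype=float) - mean
    return float(d @ v1), float(d @ v2)

# The v2 coordinate measures distance to the line: near 0 for points on it.
print(to_subspace_coords(pts[0]))       # v2 component close to 0
print(to_subspace_coords([0.0, 3.0]))   # v2 component clearly nonzero
```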

Dimensionality reduction
Consider the variation along a direction v among all of the orange points: var(v) = the sum over the points x of ((x - mean) . v)^2. What unit vector v minimizes var? What unit vector v maximizes var?
Solution: v1 is the eigenvector of the covariance (scatter) matrix A of the points with the largest eigenvalue; v2 is the eigenvector of A with the smallest eigenvalue.
We can represent the orange points with only their v1 coordinates, since the v2 coordinates are all essentially 0. This makes it much cheaper to store and compare points, and it is a bigger deal for higher-dimensional problems.

Principal component analysis (PCA)
Suppose each data point is N-dimensional. The same procedure applies: the eigenvectors of A define a new coordinate system. The eigenvector with the largest eigenvalue captures the most variation among the training vectors x; the eigenvector with the smallest eigenvalue has the least variation. We can compress the data by using only the top few eigenvectors; this corresponds to choosing a linear subspace (representing the points on a line, plane, or hyperplane). These eigenvectors are known as the principal components.

The space of faces
An image is a point in a high-dimensional space: an N x M image is a point in R^(NM). We can define vectors in this space as we did in the 2D case.
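A compact sketch of this recipe in NumPy: subtract the mean, eigendecompose the covariance matrix A, and keep the top components. The synthetic 10-dimensional data and the choice of two components are assumptions for illustration.

```python
import numpy as np

def pca(X, k):
    """Return the mean, the top-k principal components (eigenvectors of the
    covariance matrix A with the largest eigenvalues), and their eigenvalues.
    X holds one N-dimensional data point per row."""
    mean = X.mean(axis=0)
    A = np.cov(X - mean, rowvar=False)            # N x N covariance matrix
    evals, evecs = np.linalg.eigh(A)              # ascending eigenvalues
    order = np.argsort(evals)[::-1][:k]           # indices of the k largest eigenvalues
    return mean, evecs[:, order], evals[order]

# Toy data: 500 points in R^10 that mostly vary along 2 hidden directions.
rng = np.random.default_rng(2)
hidden = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 10))
X = hidden + 0.01 * rng.normal(size=(500, 10))

mean, components, variances = pca(X, k=2)
coeffs = (X - mean) @ components                  # compressed representation: 2 numbers per point
X_hat  = mean + coeffs @ components.T             # reconstruction from the 2D linear subspace
print(np.abs(X - X_hat).max())                    # small: the top 2 components capture almost everything
```

For images, where the dimension N x M is large, the same components are usually obtained from an SVD of the centered data matrix rather than by forming the full NM x NM covariance.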

Dimensionality reduction
The set of faces is a subspace of the set of images. Suppose it is K-dimensional. We can find the best subspace using PCA; this is like fitting a hyperplane to the set of faces, spanned by the vectors v1, v2, ..., vK. Any face can then be written approximately as the mean face plus a weighted sum of these vectors.

Eigenfaces
PCA extracts the eigenvectors of A, giving a set of vectors v1, v2, v3, ... Each one of these vectors is a direction in face space. What do they look like?

Projecting onto the eigenfaces
The eigenfaces v1, ..., vK span the space of faces. A face x is converted to eigenface coordinates by projecting it onto each eigenface: a_i = v_i . (x - mean face), so that x maps to (a_1, ..., a_K).

Recognition with eigenfaces
Algorithm (a code sketch follows below):
1. Process the image database (a set of images with labels): run PCA to compute the eigenfaces, then calculate the K coefficients for each image.
2. Given a new image x to be recognized, calculate its K coefficients.
3. Detect whether x is a face.
4. If it is a face, who is it? Find the closest labeled face in the database (nearest neighbor in the K-dimensional coefficient space).
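A minimal, self-contained sketch of steps 1-4 under toy assumptions: the random 16x16 "faces", the number of eigenfaces K, the identity labels, and the face-detection threshold are all placeholders rather than values from the lecture. The eigenfaces come from an SVD of the centered data, which yields the same directions as eigendecomposing the covariance matrix A.

```python
import numpy as np

# Hypothetical face database: 40 grayscale "faces" of size 16x16, flattened to
# 256-vectors, with one identity label per face (10 people, 4 images each).
rng = np.random.default_rng(3)
faces  = rng.random((40, 256))
labels = np.repeat(np.arange(10), 4)
K = 8                                     # number of eigenfaces to keep (placeholder choice)

# Step 1: process the database -- compute the mean face, the eigenfaces, and
# the K coefficients of every database image.
mean_face  = faces.mean(axis=0)
_, _, Vt   = np.linalg.svd(faces - mean_face, full_matrices=False)
eigenfaces = Vt[:K]                                   # each row is one eigenface v_i
db_coeffs  = (faces - mean_face) @ eigenfaces.T       # K coefficients per database face

def recognize(x, face_threshold=10.0):
    """Steps 2-4: project x onto the eigenfaces, test whether it looks like a
    face, and if so return the identity of the nearest database face.
    face_threshold is an arbitrary placeholder value."""
    a = (x - mean_face) @ eigenfaces.T                # step 2: the K coefficients of x
    reconstruction = mean_face + a @ eigenfaces       # x approximated within face space
    if np.linalg.norm(x - reconstruction) > face_threshold:
        return None                                   # step 3: too far from face space
    nearest = np.argmin(np.linalg.norm(db_coeffs - a, axis=1))
    return int(labels[nearest])                       # step 4: nearest neighbor's identity

print(recognize(faces[5]))   # -> 1, the label of the person shown in faces 4-7
```

Step 3 here uses the distance between x and its reconstruction from the eigenfaces as the face test; a region far from the span of the eigenfaces is treated as not a face. This is one common criterion, chosen here for illustration.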

Choosing the dimension K
How many eigenfaces should we use? Look at the decay of the eigenvalues (the slide plots the eigenvalues against their index i, from 1 up to NM, with the cutoff at K). Each eigenvalue tells you the amount of variance in the direction of that eigenface; ignore eigenfaces with low variance. (A short code sketch of this cutoff rule follows at the end of the section.)

Object recognition
This is just the tip of the iceberg.
- Better features: edges (e.g., SIFT), motion, depth/3D info, ...
- Better classifiers: e.g., support vector machines (SVMs).
- Speed (e.g., real-time face detection).
- Scale (e.g., Internet image search).
Recognition is a very active research area right now.
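Returning to the choice of K above, here is a minimal sketch of the eigenvalue-decay rule. The example spectrum and the 90% variance cutoff are illustrative assumptions; the lecture only says to ignore eigenfaces with low variance.

```python
import numpy as np

def choose_k(eigenvalues, variance_fraction=0.90):
    """Pick the smallest K whose top eigenvalues account for the requested
    fraction of the total variance; eigenvalues must be sorted descending."""
    ratios = np.cumsum(eigenvalues) / np.sum(eigenvalues)
    return int(np.searchsorted(ratios, variance_fraction) + 1)

# Hypothetical, rapidly decaying eigenvalue spectrum, as in the slide's plot.
eigenvalues = np.array([50.0, 20.0, 10.0, 4.0, 2.0, 1.0, 0.5, 0.3, 0.1, 0.1])
print(choose_k(eigenvalues))   # -> 3: the first three eigenvalues cover just over 90% of the variance
```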