Computer vision: models, learning and inference. Chapter 13 Image preprocessing and feature extraction


Preprocessing
The goal of preprocessing is to reduce unwanted variation in the image (due to lighting, scale, deformation, etc.), to reduce the data to a manageable size, and to give the subsequent model a chance. Preprocessing is a deterministic transformation of the pixels p to create a data vector x, usually based on heuristics and experience.

Structure: per-pixel transformations; edges, corners, and interest points; descriptors; dimensionality reduction.

Normalization
Fix the first and second moments of the image to standard values. This removes contrast variation and constant additive luminance variation. [Before/after figure]
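A minimal sketch of this per-image normalization, assuming a grayscale NumPy image (the function name and target moments are illustrative, not from the slides):

```python
import numpy as np

def normalize(image, target_mean=0.0, target_std=1.0):
    """Fix the first and second moments of a grayscale image.

    Subtracting the mean removes constant additive luminance changes;
    dividing by the standard deviation removes contrast variation.
    """
    x = image.astype(np.float64)
    x = x - x.mean()                 # zero first moment
    std = x.std()
    if std > 0:                      # avoid dividing by zero on flat images
        x = x / std                  # unit second moment
    return target_mean + target_std * x
```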

Histogram Equalization
Make all of the moments the same by forcing the histogram of intensities to be the same for every image. [Figure: before / normalized / histogram equalized]

Histogram Equalization
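A sketch of histogram equalization for an 8-bit grayscale image, using the standard cumulative-histogram mapping (the helper name is mine):

```python
import numpy as np

def equalize_histogram(image, levels=256):
    """Map intensities through their normalized cumulative histogram so
    that the output intensities are approximately uniformly distributed."""
    hist, _ = np.histogram(image.ravel(), bins=levels, range=(0, levels))
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1.0)  # to [0, 1]
    lookup = np.round(cdf * (levels - 1)).astype(np.uint8)
    return lookup[image]   # apply per pixel (image must be integer-typed)
```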

Convolution
Takes a pixel image P and applies a filter F: each output pixel is a weighted sum of the surrounding pixel values, where the weights are given by the filter. Easiest to see with a concrete example, as in the sketch below.
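A direct, unoptimized sketch of 2D convolution with zero padding, assuming a grayscale image and a small odd-sized kernel:

```python
import numpy as np

def convolve2d(image, kernel):
    """Direct 2D convolution: each output pixel is the weighted sum of the
    neighborhood under the (flipped) kernel. Zero padding at the borders."""
    k = np.flipud(np.fliplr(kernel))     # flip kernel (convolution, not correlation)
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image.astype(np.float64), ((ph, ph), (pw, pw)))
    out = np.zeros(image.shape, dtype=np.float64)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * k)
    return out
```

The later sketches in this section reuse this convolve2d helper rather than repeating it.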

Blurring (convolve with a Gaussian)
Convolving an image with a Gaussian kernel blurs it; the standard deviation of the Gaussian controls the amount of smoothing.
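For example, blurring is just convolution with a normalized Gaussian kernel (the kernel size and sigma below are illustrative):

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Build a normalized 2D Gaussian kernel of the given odd size."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return g / g.sum()

# blurred = convolve2d(image, gaussian_kernel(5, 1.0))  # using the sketch above
```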

Gradient Filters
Rule of thumb: a filter gives a big response where the image matches the filter.
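One common concrete choice of gradient filter is the pair of Sobel kernels; this is an assumption carried through the later sketches, since the slides do not commit to a particular filter:

```python
import numpy as np

# Sobel kernels: estimates of the horizontal and vertical derivatives.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.float64)
SOBEL_Y = SOBEL_X.T

# h = convolve2d(image, SOBEL_X)   # horizontal gradient image
# v = convolve2d(image, SOBEL_Y)   # vertical gradient image
```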

Gabor Filters
The product of a sinusoid and a Gaussian envelope: each filter responds to image structure at a particular orientation and spatial frequency.

Haar Filters
Rectangular filters with regions of +1 and -1; their responses can be computed very efficiently using integral images.

Local Binary Patterns
Encode each pixel by thresholding its neighbors against the center value, giving a binary code that describes the local texture.
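A sketch of the basic 8-neighbor LBP code (border pixels are skipped for brevity):

```python
import numpy as np

def local_binary_pattern(image):
    """Basic 3x3 LBP: threshold the 8 neighbors against the center pixel
    and pack the comparison results into an 8-bit code."""
    img = image.astype(np.int32)
    h, w = img.shape
    codes = np.zeros((h, w), dtype=np.uint8)
    # neighbor offsets in clockwise order around the center pixel
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            code = 0
            for bit, (di, dj) in enumerate(offsets):
                if img[i + di, j + dj] >= img[i, j]:
                    code |= 1 << bit
            codes[i, j] = code
    return codes
```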

Textons
An attempt to characterize texture: replace each pixel with an integer representing its texture type.

Computing Textons
Take a bank of filters and apply it to many training images. Cluster the responses in filter-response space. For a new pixel, filter the surrounding region with the same bank and assign the pixel to the nearest cluster (a sketch follows).
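A sketch of texton assignment under these assumptions: a generic filter bank is supplied by the caller, and the cluster centers come from k-means run offline on training responses (see the k-means slide at the end of this chapter):

```python
import numpy as np

def filter_responses(image, filter_bank):
    """Stack the response of each filter into an (H, W, F) array."""
    return np.stack([convolve2d(image, f) for f in filter_bank], axis=-1)

def assign_textons(image, filter_bank, cluster_centers):
    """Replace each pixel with the index of its nearest cluster center
    in filter-response space (cluster_centers: (K, F) from k-means)."""
    resp = filter_responses(image, filter_bank)
    flat = resp.reshape(-1, resp.shape[-1])                       # (H*W, F)
    d2 = ((flat[:, None, :] - cluster_centers[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1).reshape(image.shape)                 # texton map
```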

Structure: per-pixel transformations; edges, corners, and interest points; descriptors; dimensionality reduction.

Edges [figure from Elder and Goldberg, 2000]

Canny Edge Detector

Canny Edge Detector
Step 1: compute horizontal and vertical gradient images h and v.

Canny Edge Detector
Step 2: quantize the gradient orientation at each pixel to 4 directions.

Canny Edge Detector
Step 3: non-maximal suppression. Keep a pixel only if its gradient magnitude is a local maximum along the gradient direction.

Canny Edge Detector
Step 4: hysteresis thresholding. Accept weak edge pixels only where they connect to strong edge pixels. A sketch of the full pipeline follows.
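A compact sketch of the whole pipeline, assuming Sobel gradients, four-direction quantization, and a simplified single-pass hysteresis (thresholds are illustrative; it reuses convolve2d and the Sobel kernels from the earlier sketches):

```python
import numpy as np

def canny(image, low=0.1, high=0.3):
    """Simplified Canny: gradients -> orientation quantization ->
    non-maximal suppression -> hysteresis thresholding."""
    h = convolve2d(image, SOBEL_X)                    # step 1: gradients
    v = convolve2d(image, SOBEL_Y)
    mag = np.hypot(h, v)
    mag = mag / (mag.max() + 1e-12)
    angle = np.rad2deg(np.arctan2(v, h)) % 180.0      # step 2: orientation in [0, 180)

    H, W = mag.shape
    nms = np.zeros_like(mag)                          # step 3: non-maximal suppression
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            a = angle[i, j]                           # two neighbors along the
            if a < 22.5 or a >= 157.5:                # quantized gradient direction
                n1, n2 = mag[i, j - 1], mag[i, j + 1]
            elif a < 67.5:
                n1, n2 = mag[i - 1, j + 1], mag[i + 1, j - 1]
            elif a < 112.5:
                n1, n2 = mag[i - 1, j], mag[i + 1, j]
            else:
                n1, n2 = mag[i - 1, j - 1], mag[i + 1, j + 1]
            if mag[i, j] >= n1 and mag[i, j] >= n2:
                nms[i, j] = mag[i, j]

    strong = nms >= high                              # step 4: hysteresis (single
    weak = (nms >= low) & ~strong                     # pass; a full version iterates)
    edges = strong.copy()
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            if weak[i, j] and strong[i - 1:i + 2, j - 1:j + 2].any():
                edges[i, j] = True
    return edges
```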

Harris Corner Detector
Make the corner decision based on the image structure tensor: the 2x2 matrix of locally averaged products of the horizontal and vertical gradients. Corners are points where this matrix has two large eigenvalues.
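A sketch of the Harris response, assuming Sobel gradients, a Gaussian averaging window, and the common constant k = 0.04 (the slides give only the structure-tensor idea):

```python
import numpy as np

def harris_response(image, sigma=1.0, k=0.04):
    """Harris corner response R = det(A) - k * trace(A)^2, where A is the
    structure tensor: locally smoothed outer products of the gradients."""
    h = convolve2d(image, SOBEL_X)
    v = convolve2d(image, SOBEL_Y)
    g = gaussian_kernel(7, sigma)            # local averaging window
    a11 = convolve2d(h * h, g)               # structure tensor entries
    a12 = convolve2d(h * v, g)
    a22 = convolve2d(v * v, g)
    det = a11 * a22 - a12 * a12
    trace = a11 + a22
    return det - k * trace ** 2              # large positive R => corner
```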

SIFT Detector
Filter with difference-of-Gaussian filters at increasing scales to build an image stack (a scale space), then find extrema in this 3D (x, y, scale) volume.

SIFT Detector
From the identified interest points, remove those that lie on edges and those where the contrast is low. A sketch of the detector follows.
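A sketch of the difference-of-Gaussian extremum search. The sigma schedule, kernel size, and 26-neighbor test are standard choices rather than values from the slides, and edge rejection is reduced to a simple contrast threshold here:

```python
import numpy as np

def dog_keypoints(image, sigmas=(1.0, 1.6, 2.6, 4.1, 6.6), contrast=0.03):
    """Find extrema of a difference-of-Gaussian stack over (x, y, scale)."""
    blurred = [convolve2d(image, gaussian_kernel(9, s)) for s in sigmas]
    dog = np.stack([blurred[i + 1] - blurred[i]
                    for i in range(len(blurred) - 1)])       # (S, H, W) stack
    keypoints = []
    S, H, W = dog.shape
    for s in range(1, S - 1):
        for i in range(1, H - 1):
            for j in range(1, W - 1):
                val = dog[s, i, j]
                if abs(val) < contrast:                      # low-contrast rejection
                    continue
                cube = dog[s - 1:s + 2, i - 1:i + 2, j - 1:j + 2]
                if val >= cube.max() or val <= cube.min():   # extremum in 3D volume
                    keypoints.append((i, j, s))
    return keypoints
```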

Assign Orientation
The orientation is assigned by looking at the intensity gradients in a region around the point: form a histogram of the gradient orientations by binning, and set the keypoint orientation to the peak of the histogram.

Structure: per-pixel transformations; edges, corners, and interest points; descriptors; dimensionality reduction.

SIFT Descriptor
Goal: produce a vector that describes the region around the interest point. All calculations are relative to the orientation and scale of the keypoint, which makes the descriptor invariant to rotation and scale.

SIFT Descriptor
1. Compute the image gradients.
2. Pool the gradients into local orientation histograms.
3. Concatenate the histograms.
4. Normalize the result.
A sketch of these steps follows.
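A sketch of these four steps on a patch already rotated and scaled to the keypoint frame; the 4x4 grid of cells and 8 orientation bins follow the standard SIFT layout:

```python
import numpy as np

def sift_descriptor(patch, cells=4, bins=8):
    """Descriptor from an aligned square patch: gradients -> pooled
    orientation histograms -> concatenate -> normalize."""
    gy, gx = np.gradient(patch.astype(np.float64))     # step 1: gradients
    mag = np.hypot(gx, gy)
    ori = np.arctan2(gy, gx) % (2 * np.pi)
    cell = patch.shape[0] // cells
    hists = []
    for ci in range(cells):                            # step 2: pool per cell
        for cj in range(cells):
            sl = (slice(ci * cell, (ci + 1) * cell),
                  slice(cj * cell, (cj + 1) * cell))
            h, _ = np.histogram(ori[sl], bins=bins, range=(0, 2 * np.pi),
                                weights=mag[sl])
            hists.append(h)
    desc = np.concatenate(hists)                       # step 3: concatenate
    norm = np.linalg.norm(desc)
    return desc / norm if norm > 0 else desc           # step 4: normalize
```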

HoG Descriptor
Histograms of oriented gradients pooled over a dense grid of cells covering the whole detection window, rather than around sparse interest points.

Bag of Words Descriptor
Detect visual features in the image and compute a descriptor around each one. For each descriptor, find the closest match in a dictionary (library) of descriptors and assign its index; the region is then described by the histogram of these indices. The dictionary itself is computed by applying K-means to descriptors from training images. A sketch follows.
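A sketch, assuming the local descriptors have already been computed (e.g., with the SIFT descriptor sketch above) and the dictionary of cluster centers comes from k-means:

```python
import numpy as np

def bag_of_words(descriptors, dictionary):
    """Normalized histogram of nearest-word indices over a region.

    descriptors: (N, D) local descriptors computed in the region
    dictionary:  (K, D) cluster centers from k-means on training descriptors
    """
    d2 = ((descriptors[:, None, :] - dictionary[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)                     # index of the closest word
    hist = np.bincount(words, minlength=len(dictionary)).astype(np.float64)
    return hist / max(hist.sum(), 1.0)            # normalized histogram
```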

Shape Context Descriptor
For a point on a contour, a log-polar histogram of the positions of the other contour points relative to it.

Structure: per-pixel transformations; edges, corners, and interest points; descriptors; dimensionality reduction.

Dimensionality Reduction
Dimensionality reduction attempts to find a low-dimensional (or hidden) representation h which can approximately explain the data x, so that x ≈ f[h, θ], where f[·,·] is a function that takes the hidden variable and a set of parameters θ. Typically, we choose the function family and then learn h and θ from training data.

Least Squares Criterion
Choose the parameters θ and the hidden variables h so that they minimize the least squares approximation error (a measure of how well they can reconstruct the data x).
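Written out in the book's notation (the equation on the slide did not survive extraction, so this is a reconstruction), the criterion is

$$\hat{\theta}, \hat{h}_{1\ldots I} = \underset{\theta,\, h_{1\ldots I}}{\operatorname{argmin}} \left[ \sum_{i=1}^{I} \big(x_i - f[h_i, \theta]\big)^{T} \big(x_i - f[h_i, \theta]\big) \right].$$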

Simple Example
Approximate each data example x with a scalar value h. The data is reconstructed by multiplying h by a parameter vector φ and adding the mean vector μ. Or, even better, subtract μ from each data example first to get zero-mean data.

Simple Example
With zero-mean data, approximate each example x with a scalar value h; the data is reconstructed by multiplying h by the vector φ. Criterion:
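For this linear model the criterion becomes (again reconstructing the slide's equation in the book's notation)

$$E = \sum_{i=1}^{I} (x_i - \phi h_i)^{T} (x_i - \phi h_i).$$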

Criterion
Problem: the solution is non-unique. If we multiply φ by any constant α and divide each of the hidden variables h_1...h_I by the same constant, we get the same cost (i.e., (φα)(h_i/α) = φh_i). Solution: we make the solution unique by constraining the length of φ to be 1, using a Lagrange multiplier.

Criterion
Now we have the new cost function E = Σ_i (x_i - φh_i)^T (x_i - φh_i) + λ(φ^T φ - 1). To optimize it, we take derivatives with respect to φ and h_i, equate the resulting expressions to zero, and rearrange.

Solution
To compute the hidden value, take the dot product of the data with the vector φ: h_i = φ^T x_i.

Solution
To compute the hidden value, take the dot product with the vector φ, h_i = φ^T x_i (equivalently, all values at once from the data matrix X = [x_1, ..., x_I]). To compute the vector φ, compute the first eigenvector of the scatter matrix XX^T.

Computing h
To compute the hidden value, take the dot product with the vector φ: h_i = φ^T x_i.

Reconstruction
To reconstruct the data, multiply the hidden variable by the vector φ: x̃_i = φh_i.

Principal Components Analysis
The same idea, but now the hidden variable h is multi-dimensional. Each component of h weights one column of a matrix Φ, so that the data is approximated as x_i ≈ Φh_i. This leads to the cost function E = Σ_i (x_i - Φh_i)^T (x_i - Φh_i). This has a non-unique optimum, so we enforce the constraint that Φ should be a (truncated) rotation matrix: Φ^T Φ = I.

PCA Solution
To compute the hidden vector, take the dot product of the data with each column of Φ: h_i = Φ^T x_i. To compute the matrix Φ, take the first eigenvectors of the scatter matrix XX^T (as many as there are hidden dimensions). The basis functions in the columns of Φ are called principal components, and the entries of h are called loadings.
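A sketch of this recipe, assuming zero-mean data stored as the columns of X and a chosen number of components K:

```python
import numpy as np

def pca(X, K):
    """PCA on zero-mean data X of shape (D, I), examples as columns.

    Returns Phi (D, K): the first K eigenvectors of the scatter matrix,
    and H (K, I): the loadings h_i = Phi^T x_i.
    """
    scatter = X @ X.T                          # (D, D) scatter matrix
    evals, evecs = np.linalg.eigh(scatter)     # eigenvalues in ascending order
    Phi = evecs[:, ::-1][:, :K]                # top-K principal components
    H = Phi.T @ X                              # loadings
    return Phi, H

# Reconstruction: X_approx = Phi @ H
```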

Dual PCA
Problem: PCA as described has a major drawback: we need to compute the eigenvectors of the scatter matrix XX^T, which has size D_x x D_x. Visual data tends to be very high dimensional, so this matrix may be extremely large. Solution: reparameterize the principal components as weighted sums of the data, Φ = XΨ, and solve for the new variables Ψ.

Geometric Interpretation
Each column of Φ is a weighted sum of the original datapoints, with the weights given in the corresponding column of the new variable Ψ.

Motivation
Why reparameterize the principal components as Φ = XΨ and solve for the new variables Ψ? If the number of datapoints I is less than the number of observed dimensions D_x, then Ψ is smaller than Φ and the resulting optimization becomes easier. Intuition: we are not interested in principal components outside the subspace spanned by the data anyway.

Cost Functions
Principal components analysis: minimize E = Σ_i (x_i - Φh_i)^T (x_i - Φh_i), subject to Φ^T Φ = I. Dual principal components analysis: substitute Φ = XΨ into the same criterion, subject to Φ^T Φ = I, i.e., Ψ^T X^T XΨ = I.

Solution
To compute the hidden vector, take the dot product with each column of Φ = XΨ: h_i = Φ^T x_i = Ψ^T X^T x_i.

Solution
To compute the hidden vector, take the dot product with each column of Φ = XΨ. To compute the matrix Ψ, take the first eigenvectors of the inner product matrix X^T X. The inner product matrix has size I x I; if the number of examples I is less than the dimensionality of the data D_x, then this is a smaller eigenproblem.
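A sketch of the dual computation (zero-mean columns again; scaling the eigenvectors by the inverse square roots of their eigenvalues is one standard way to enforce Φ^T Φ = I, and it assumes K is at most the rank of the data):

```python
import numpy as np

def dual_pca(X, K):
    """Dual PCA on zero-mean data X of shape (D, I), examples as columns.

    Works with the I x I inner product matrix X^T X instead of the
    D x D scatter matrix, which is cheaper when I < D.
    """
    inner = X.T @ X                              # (I, I) inner product matrix
    evals, evecs = np.linalg.eigh(inner)         # ascending order
    evals, evecs = evals[::-1][:K], evecs[:, ::-1][:, :K]
    Psi = evecs / np.sqrt(evals)                 # scale so that Phi^T Phi = I
    Phi = X @ Psi                                # principal components
    H = Phi.T @ X                                # loadings, as before
    return Phi, H
```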

K-Means Algorithm
Approximate the data with a set of K means. The least squares criterion is minimized by alternating between assigning each datapoint to its closest mean and recomputing each mean as the average of the points assigned to it; a sketch follows.
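A sketch of this alternating minimization (random initialization from the datapoints; the convergence test and empty-cluster handling are kept minimal):

```python
import numpy as np

def kmeans(X, K, iters=100, seed=0):
    """K-means on data X of shape (N, D). Alternates between assigning
    points to the nearest mean and recomputing the means, which
    monotonically decreases the least squares criterion."""
    rng = np.random.default_rng(seed)
    means = X[rng.choice(len(X), size=K, replace=False)]
    for _ in range(iters):
        d2 = ((X[:, None, :] - means[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)               # assignment step
        new_means = np.array([X[labels == k].mean(axis=0)
                              if np.any(labels == k) else means[k]
                              for k in range(K)])
        if np.allclose(new_means, means):        # converged
            break
        means = new_means
    return means, labels
```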
