Instance recognition

Instance recognition. Thurs Oct 29

Last time: depth from stereo. The main idea is to triangulate from corresponding image points. Epipolar geometry is defined by the two cameras; we've assumed known extrinsic parameters relating their poses. The epipolar constraint limits where points from one view can be imaged in the other, which makes the search for correspondences quicker. To estimate depth: limit the search using the epipolar constraint, then compute correspondences, incorporating matching preferences.

Virtual viewpoint video. C. Larry Zitnick et al., High-quality video view interpolation using a layered representation, SIGGRAPH 2004. http://research.microsoft.com/ivm/vvv/

Review questions: What stereo rig yielded these epipolar lines? The epipole e has the same coordinates in both images, and points move along lines radiating from e: the focus of expansion. (Figure from Hartley & Zisserman.)

Review questions: When solving for stereo, when is it necessary to break the soft disparity gradient constraint? What can cause a disparity value to be undefined? What parameters relating the two cameras in the stereo rig must be known (or inferred) to compute depth?

Today: instance recognition. Indexing local features efficiently; spatial verification.

Recognizing or retrieving specific objects. Example I: visual search in feature films, with a visually defined query. Groundhog Day [Ramis, 1993]: find this clock, find this place. Slide credit: J. Sivic

Recognizing or retrieving specific objects. Example II: search photos on the web for particular places: find these landmarks ...in these images and 1M more. Slide credit: J. Sivic

Recall: matching local features. To generate candidate matches between Image 1 and Image 2, find patches that have the most similar appearance (e.g., lowest SSD). Simplest approach: compare them all and take the closest (or the closest k, or all matches within a thresholded distance). Multi-view matching vs. recognition: matching two given views for depth, versus searching for a matching view for recognition.

Indexing local features. Each patch / region has a descriptor, which is a point in some high-dimensional feature space (e.g., SIFT).

Indexing local features. When we see close points in the descriptor's feature space, we have similar descriptors, which indicates similar local content (for a query image vs. the database images). With potentially thousands of features per image, and hundreds to millions of images to search, how do we efficiently find those that are relevant to a new image? Possible solutions: an inverted file, or nearest neighbor data structures (kd-trees, hashing).

Indexing local features: inverted file index. For text documents, an efficient way to find all pages on which a word occurs is to use an index. We want to find all images in which a feature occurs; to use this idea, we'll need to map our features to visual words.

Visual words: map high-dimensional descriptors to tokens/words by quantizing the feature space. Quantize via clustering, and let the cluster centers be the prototype words. Determine which word to assign to each new image region by finding the closest cluster center.
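As a concrete illustration, here is a minimal sketch of that quantization step, assuming local descriptors (e.g., SIFT) have already been extracted and pooled into an array; the vocabulary size and the use of scikit-learn's KMeans are illustrative choices, not part of the lecture.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(descriptors, n_words=1000):
    """descriptors: (N, 128) array of local descriptors pooled from many training images."""
    km = KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(descriptors)
    return km.cluster_centers_            # (n_words, 128) prototype "visual words"

def assign_words(descriptors, vocabulary):
    """Map each descriptor to the index of its nearest cluster center (its visual word)."""
    # Squared Euclidean distance from every descriptor to every word center
    # (fine for a sketch; large collections would use an approximate search).
    d2 = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)              # (N,) integer visual-word labels
```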

Visual words: main idea. Extract some local features from a number of images, e.g., in the SIFT descriptor space, where each point is 128-dimensional. Slide credit: D. Nister, CVPR 2006

Each point is a local descriptor, e.g., a SIFT vector.

Visual words. Example: each group of patches belongs to the same visual word. Figure from Sivic & Zisserman, ICCV 2003. Visual words are also used for describing scenes and object categories for the sake of indexing or classification: Sivic & Zisserman 2003; Csurka, Bray, Dance, & Fan 2004; and many others.

Visual words and textons. The idea was first explored for texture and material representations: a texton is a cluster center of filter responses over a collection of images, and textures and materials are described by their distribution of prototypical texture elements (Leung & Malik 1999; Varma & Zisserman 2002).

Recall: texture representation example. Summarize the patterns in small windows by statistics such as the mean d/dx and mean d/dy values; windows with primarily horizontal edges, primarily vertical edges, or small gradient in both directions then occupy different regions of this 2D feature space (e.g., Win #1: 4, 10; Win #2: 18, 7; Win #9: 20, 20).

Visual vocabulary formation. Issues: sampling strategy (where to extract features?); the clustering / quantization algorithm; unsupervised vs. supervised; what corpus provides the features (a universal vocabulary?); vocabulary size / number of words.

Inverted file index: database images are loaded into the index, which maps words to image numbers.
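A minimal sketch of such an inverted file index follows, assuming visual-word ids have already been assigned to each image's features; the class and method names are made up for illustration.

```python
from collections import defaultdict

class InvertedIndex:
    """Maps each visual word to the set of database images containing it."""

    def __init__(self):
        self.posting = defaultdict(set)      # word id -> set of image ids

    def add_image(self, image_id, word_ids):
        for w in set(word_ids):              # each distinct word in the image
            self.posting[w].add(image_id)

    def candidates(self, query_word_ids):
        """Database images sharing at least one visual word with the query."""
        hits = set()
        for w in set(query_word_ids):
            hits |= self.posting.get(w, set())
        return hits
```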

Inverted file index. A new query image is mapped to the indices of the database images that share a word with it. When will this give us a significant gain in efficiency?

Instance recognition: remaining issues. How to summarize the content of an entire image, and gauge overall similarity? How large should the vocabulary be? How to perform quantization efficiently? Is having the same set of visual words enough to identify the object/scene? How to verify spatial agreement? How to score the retrieval results? Kristen Grauman

Analogy to documents. [Figure: two example documents reduced to their salient terms. A passage on visual perception (Hubel and Wiesel's discoveries about the visual cortex) reduces to the words sensory, brain, visual, perception, retinal, cerebral cortex, eye, cell, optical, nerve, image, Hubel, Wiesel; a news passage on China's trade surplus reduces to China, trade, surplus, commerce, exports, imports, US, yuan, bank, domestic, foreign, increase, value.] ICCV 2005 short course, L. Fei-Fei

Object as a bag of (visual) words. ICCV 2005 short course, L. Fei-Fei

Bags of visual words. Summarize an entire image by its distribution (histogram) of visual word occurrences, analogous to the bag-of-words representation commonly used for documents.

Comparing bags of words. Rank frames by the normalized scalar product between their (possibly weighted) occurrence counts, i.e., a nearest neighbor search for similar images (e.g., comparing count vectors [1 8 1 4] and [5 1 1 0]). For a vocabulary of V words:

sim(d_j, q) = \frac{\langle d_j, q \rangle}{\|d_j\| \, \|q\|} = \frac{\sum_{i=1}^{V} d_j(i)\, q(i)}{\sqrt{\sum_{i=1}^{V} d_j(i)^2}\; \sqrt{\sum_{i=1}^{V} q(i)^2}}

tf-idf weighting (term frequency times inverse document frequency): describe a frame by the frequency of each word within it, and downweight words that appear often in the database; this is the standard weighting for text retrieval. The weight for word i in document d is (n_{id} / n_d) \log(N / n_i), where n_{id} is the number of occurrences of word i in document d, n_d is the number of words in document d, N is the total number of documents in the database, and n_i is the number of documents in which word i occurs.
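The comparison above can be written down directly; the following sketch computes the tf-idf weighting and the normalized scalar product for raw word-count histograms (array shapes and variable names are illustrative, not from the lecture).

```python
import numpy as np

def tfidf_weights(counts):
    """counts: (num_images, V) matrix of raw visual-word counts for the database."""
    N = counts.shape[0]
    n_i = (counts > 0).sum(axis=0)                        # documents containing word i
    idf = np.log(N / np.maximum(n_i, 1))                  # inverse document frequency
    tf = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)   # term frequency
    return tf * idf, idf                                  # weighted histograms, idf vector

def cosine_sim(d, q):
    """Normalized scalar product between two (possibly weighted) histograms."""
    return float(d @ q) / (np.linalg.norm(d) * np.linalg.norm(q) + 1e-12)
```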

Inverted file index and bag-of-words similarity: 1. Extract the words in the query. 2. Use the inverted file index to find relevant frames. 3. Compare word counts. Kristen Grauman

Bags of words for content-based image retrieval. Slide from Andrew Zisserman; Sivic & Zisserman, ICCV 2003

Video Google System: 1. Collect all words within the query region. 2. Use the inverted file index to find relevant frames. 3. Compare word counts. 4. Spatial verification. Sivic & Zisserman, ICCV 2003. Demo online at http://www.robots.ox.ac.uk/~vgg/research/vgoogle/index.html (query region and retrieved frames). Slide from Andrew Zisserman; K. Grauman, B. Leibe

Vocabulary trees: hierarchical clustering for large vocabularies. Tree construction, then training: filling the tree with the database features. [Nister & Stewenius, CVPR 2006] Slide credit: David Nister; K. Grauman, B. Leibe

What is the computational advantage of the hierarchical representation of the bag of words, vs. a flat vocabulary? [Figure: results for a recognition task with 6347 images, showing the influence of vocabulary size and branching factor on performance and sparsity. Nister & Stewenius, CVPR 2006]
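To make the advantage concrete, here is a minimal sketch of quantizing a descriptor with a vocabulary tree, assuming the hierarchical k-means tree has already been built; the node structure is invented for illustration. Descending the tree costs about k distance computations per level, so a vocabulary with k^L leaves costs roughly k*L comparisons per descriptor rather than k^L with a flat codebook.

```python
import numpy as np

class TreeNode:
    """One node of a vocabulary tree built by hierarchical k-means (illustrative)."""
    def __init__(self, center, children=None, word_id=None):
        self.center = center            # cluster center at this node
        self.children = children or []  # k child nodes, empty at the leaves
        self.word_id = word_id          # leaves carry the visual-word id

def quantize(descriptor, root):
    """Descend the tree, at each level comparing only against that node's children."""
    node = root
    while node.children:
        dists = [np.linalg.norm(descriptor - c.center) for c in node.children]
        node = node.children[int(np.argmin(dists))]
    return node.word_id
```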

Bags of words: pros and cons. Pros: flexible to geometry / deformations / viewpoint; compact summary of image content; provides a vector representation for sets; very good results in practice. Cons: the basic model ignores geometry, so we must verify afterwards or encode it via the features; background and foreground are mixed when the bag covers the whole image; optimal vocabulary formation remains unclear.

Instance recognition: remaining issues. How to summarize the content of an entire image, and gauge overall similarity? How large should the vocabulary be? How to perform quantization efficiently? Is having the same set of visual words enough to identify the object/scene? How to verify spatial agreement? How to score the retrieval results? Kristen Grauman

Which matches better? (Derek Hoiem)

Spatial verification. Both image pairs (a query and a database image with high BoW similarity) have many visual words in common. Slide credit: Ondrej Chum

Spatial verification. Only some of the matches between the query and the database image with high BoW similarity are mutually consistent. Slide credit: Ondrej Chum

Spatial verification: two basic strategies. RANSAC: typically sort by BoW similarity as an initial filter, then verify by checking the support (inliers) for possible transformations; e.g., declare success if a transformation with more than N inlier correspondences is found. Generalized Hough Transform: let each matched feature cast a vote on the location, scale, and orientation of the model object, then verify the parameters that receive enough votes.

RANSAC verification. Recall: fitting an affine transformation. Each correspondence (x_i, y_i) -> (x_i', y_i') satisfies

\begin{bmatrix} x_i' \\ y_i' \end{bmatrix} = \begin{bmatrix} m_1 & m_2 \\ m_3 & m_4 \end{bmatrix} \begin{bmatrix} x_i \\ y_i \end{bmatrix} + \begin{bmatrix} t_1 \\ t_2 \end{bmatrix},

which stacks into the linear system

\begin{bmatrix} x_i & y_i & 0 & 0 & 1 & 0 \\ 0 & 0 & x_i & y_i & 0 & 1 \end{bmatrix} \begin{bmatrix} m_1 \\ m_2 \\ m_3 \\ m_4 \\ t_1 \\ t_2 \end{bmatrix} = \begin{bmatrix} x_i' \\ y_i' \end{bmatrix}.

An affine transformation approximates viewpoint changes for roughly planar objects and roughly orthographic cameras.
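A minimal sketch of the RANSAC verification loop with this affine model, assuming putative correspondences are given as (n, 2) arrays of matched keypoint locations; the thresholds and iteration counts are illustrative, not from the lecture.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src (n, 2) to dst (n, 2), n >= 3."""
    n = src.shape[0]
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = src; A[0::2, 4] = 1    # rows [x_i, y_i, 0, 0, 1, 0]
    A[1::2, 2:4] = src; A[1::2, 5] = 1    # rows [0, 0, x_i, y_i, 0, 1]
    b = dst.reshape(-1)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    M = np.array([[p[0], p[1]], [p[2], p[3]]])
    t = np.array([p[4], p[5]])
    return M, t

def ransac_affine(src, dst, n_iter=500, thresh=3.0, min_inliers=10):
    """Accept the match if some affine hypothesis explains enough correspondences."""
    rng = np.random.default_rng(0)
    best = None
    for _ in range(n_iter):
        idx = rng.choice(len(src), size=3, replace=False)   # minimal sample
        M, t = fit_affine(src[idx], dst[idx])
        residuals = np.linalg.norm(src @ M.T + t - dst, axis=1)
        inliers = residuals < thresh
        if best is None or inliers.sum() > best[0]:
            best = (int(inliers.sum()), M, t, inliers)
    return best if best[0] >= min_inliers else None
```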

RANSAC verification (continued). Recap of the two basic spatial verification strategies: RANSAC (sort by BoW similarity as an initial filter, then check the inlier support for candidate transformations) and the Generalized Hough Transform (let each matched feature vote on the location, scale, and orientation of the model object, then verify parameters with enough votes).

Voting: Generalized Hough Transform. If we use scale-, rotation-, and translation-invariant local features, then each feature match gives an alignment hypothesis (for the scale, translation, and orientation of the model in the novel image). Adapted from Lana Lazebnik. A hypothesis generated by a single match may be unreliable, so let each match vote for a hypothesis in Hough space.

Generalized Hough Transform details (Lowe's system). Training phase: for each model feature, record the 2D location, scale, and orientation of the model (relative to the normalized feature frame). Test phase: let each match between a test SIFT feature and a model feature vote in a 4D Hough space; use broad bin sizes of 30 degrees for orientation, a factor of 2 for scale, and 0.25 times the image size for location; vote for the two closest bins in each dimension. Find all bins with at least three votes and perform geometric verification: estimate a least-squares affine transformation and search for additional features that agree with the alignment. David G. Lowe, "Distinctive image features from scale-invariant keypoints," IJCV 60(2), pp. 91-110, 2004. Slide credit: Lana Lazebnik

Example result: background subtraction for model boundaries; objects recognized in spite of occlusion. [Lowe]
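A minimal sketch of the voting step, assuming each match has already been converted into a pose hypothesis (location, scale, orientation) for the model; for brevity it votes only for the single nearest bin per dimension, whereas Lowe votes for the two closest bins in each.

```python
import math
from collections import defaultdict

def hough_votes(match_poses, image_size, min_votes=3):
    """match_poses: list of (x, y, scale, orientation_deg) hypotheses, one per match.

    Returns the bins with at least min_votes votes, mapped to the indices of the
    matches that voted for them (candidates for geometric verification).
    """
    loc_bin = 0.25 * image_size                 # broad location bins
    votes = defaultdict(list)
    for i, (x, y, s, ori) in enumerate(match_poses):
        key = (int(x // loc_bin),
               int(y // loc_bin),
               int(round(math.log2(s))),        # factor-of-2 scale bins
               int(ori // 30) % 12)             # 30-degree orientation bins
        votes[key].append(i)
    return {k: idx for k, idx in votes.items() if len(idx) >= min_votes}
```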

Recall: difficulties of voting. Noise and clutter can lead to as many votes as the true target, and the bin size for the accumulator array must be chosen carefully. In practice it is a good idea to make broad bins and spread votes to nearby bins, since the verification stage can prune bad vote peaks.

Example applications: mobile tourist guide with self-localization, object/building recognition, and photo/video augmentation. B. Leibe [Quack, Leibe, Van Gool, CIVR 08]

Application: large-scale retrieval. Query and results from 5k Flickr images (demo available for a 100k set). [Philbin CVPR 07]

Web demo: movie poster recognition. 50,000 movie posters indexed; query-by-image from a mobile phone, available in Switzerland. http://www.kooaba.com/en/products_engine.html#

Instance recognition: remaining issues. How to summarize the content of an entire image, and gauge overall similarity? How large should the vocabulary be? How to perform quantization efficiently? Is having the same set of visual words enough to identify the object/scene? How to verify spatial agreement? How to score the retrieval results? Kristen Grauman

Scoring retrieval quality. Example: a query against a database of 10 images, 5 of which are relevant, returning an ordered list of results. precision = #relevant returned / #returned; recall = #relevant returned / #total relevant. [Figure: precision (vertical axis) vs. recall (horizontal axis) curve, both from 0 to 1.] Slide credit: Ondrej Chum
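The two scores can be computed at every cut-off of the ranked list to trace out the precision-recall curve; a minimal sketch follows (function and variable names are illustrative).

```python
def precision_recall(ranked_ids, relevant_ids):
    """Precision and recall at every cut-off k of a ranked result list."""
    relevant = set(relevant_ids)
    total = max(len(relevant), 1)                 # guard against an empty relevant set
    curve, hits = [], 0
    for k, img_id in enumerate(ranked_ids, start=1):
        hits += img_id in relevant                # relevant results seen so far
        curve.append((hits / k, hits / total))    # (precision, recall) at cut-off k
    return curve
```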

Recognition via alignment. Pros: effective when we are able to find reliable features within clutter; great results for matching specific instances. Cons: scaling with the number of models; spatial verification as post-processing is not seamless and is expensive for large-scale problems; not suited for category recognition.

What else can we borrow from text retrieval? [Figure: the China trade-surplus document example again, with its bag-of-words terms highlighted.]

Query expansion. Query: "golf green". Results: "How can the grass on the greens at a golf course be so perfect?"; "For example, a skilled golfer expects to reach the green on a par-four hole in..."; "Manufactures and sells synthetic golf putting greens and mats." An irrelevant result can cause topic drift: "Volkswagen Golf, 1999, Green, 2000cc, petrol, manual, hatchback, 94000 miles, 2.0 GTi, 2 Registered Keepers, HPI Checked, Air-Conditioning, Front and Rear Parking Sensors, ABS, Alarm, Alloy..." Slide credit: Ondrej Chum

Query expansion results: query image, spatial verification, new query, new results. Chum, Philbin, Sivic, Isard, Zisserman: Total Recall, ICCV 2007. Slide credit: Ondrej Chum
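A minimal sketch of one round of query expansion in this spirit, assuming bag-of-words retrieval and spatial verification routines exist elsewhere; the function names are placeholders, and averaging verified histograms is a simplification of the authors' generative model.

```python
import numpy as np

def expand_query(query_hist, db_hists, retrieve, spatially_verified, top_k=20):
    """Build an expanded query from spatially verified results of the original query."""
    ranked = retrieve(query_hist, db_hists)[:top_k]           # ids of the top-ranked images
    verified = [i for i in ranked if spatially_verified(query_hist, i)]
    if not verified:
        return query_hist                                     # nothing reliable to expand with
    expanded = np.mean([db_hists[i] for i in verified] + [query_hist], axis=0)
    return expanded / (np.linalg.norm(expanded) + 1e-12)      # re-issue this as the new query
```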

Query expansion step by step. The query image retrieves an image, which in turn brings in images that were originally not retrieved. Slide credit: Ondrej Chum

Query expansion step by step (continued). Query expansion results: original results (good) for the query image; expanded results (better). Slide credit: Ondrej Chum

Summary. Matching local invariant features is useful not only to provide matches for multi-view geometry, but also to find objects and scenes. Bag of words representation: quantize the feature space to make a discrete set of visual words, summarize an image by its distribution of words, and index individual words. Inverted index: pre-compute an index to enable faster search at query time. Recognition of instances via alignment: matching local features followed by spatial verification; robust fitting via RANSAC or the GHT. Kristen Grauman