Learning-based Methods in Vision (16-824): Sparsity and Deep Learning
Motivation Multitude of hand-designed features currently in use in vision - SIFT, HoG, LBP, MSER, etc. Even the best approaches just capture low-level edge gradients [ Felzenszwalb, Girshick, McAllester and Ramanan, PAMI 2007 ] [ Yan & Huang ] (winner of the PASCAL 2010 classification competition) Slide adopted from Rob Fergus
Motivation Multitude of hand-designed features currently in use in vision - SIFT, HoG, LBP, MSER, etc. Even the best approaches just capture low-level edge gradients [ Felzenszwalb, Girshick, McAllester and Ramanan, PAMI 2007 ] [ Yan & Huang ] (winner of the PASCAL 2010 classification competition) Can we learn the features? Slide adopted from Rob Fergus
Visual cortex bottom-up/top-down V1: primary visual cortex simple cells complex cells [ Scientific American, 1999 ] Slide adopted from Ying Nian Wu
Simple V1 cells [ Daugman, 1985 ] Gabor wavelets: localized sine and cosine waves. Image pixels → local sum → V1 simple cells (respond to edges). Slide adopted from Ying Nian Wu
Complex V1 cells [ Riesenhuber and Poggio, 1999 ] Image pixels → local sum → V1 simple cells (respond to edges) → local max → V1 complex cells. Slide adopted from Ying Nian Wu
Single Layer Architecture Input: Image Pixels / Features Slide from Rob Fergus
Single Layer Architecture Input: Image Pixels / Features Filter Slide from Rob Fergus
Single Layer Architecture Input: Image Pixels / Features Filter Normalize Slide from Rob Fergus
Single Layer Architecture Input: Image Pixels / Features Filter Normalize Pool Slide from Rob Fergus
Single Layer Architecture Input: Image Pixels / Features Filter Normalize Pool Output: Features / Classifier Slide from Rob Fergus
Single Layer Architecture Input: Image Pixels / Features Filter Pool Normalize Output: Features / Classifier Slide from Rob Fergus
SIFT Descriptor Image Pixels Apply Gabor filters Slide from Rob Fergus
SIFT Descriptor Image Pixels Apply Gabor filters Spatial pool (Sum) Slide from Rob Fergus
SIFT Descriptor Image Pixels Apply Gabor filters Spatial pool (Sum) Normalize to unit length Feature Vector Slide from Rob Fergus
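To make the filter → pool → normalize recipe above concrete, here is a minimal NumPy/SciPy sketch. It is not the actual SIFT implementation: simple oriented derivative filters stand in for the Gabor bank, the 4x4 cell grid and 8 orientations are illustrative values, and `toy_descriptor` is a name invented for this sketch.

```python
# Minimal sketch of the filter -> spatial pool -> normalize recipe above.
# NOT the real SIFT implementation; oriented derivative filters stand in
# for the Gabor bank, and a 4x4 grid of cells is sum-pooled.
import numpy as np
from scipy.signal import convolve2d

def toy_descriptor(patch, n_orient=8, grid=4):
    """patch: 2-D float array (e.g., 16x16). Returns a unit-length vector."""
    responses = []
    for k in range(n_orient):
        theta = np.pi * k / n_orient
        # Oriented derivative filter (a crude stand-in for a Gabor filter).
        dx = np.array([[-1.0, 0.0, 1.0]])
        dy = dx.T
        resp = np.cos(theta) * convolve2d(patch, dx, mode='same') \
             + np.sin(theta) * convolve2d(patch, dy, mode='same')
        responses.append(np.maximum(resp, 0))          # half-wave rectification
    feat = []
    h, w = patch.shape
    ys = np.array_split(np.arange(h), grid)
    xs = np.array_split(np.arange(w), grid)
    for resp in responses:
        for yb in ys:
            for xb in xs:
                feat.append(resp[np.ix_(yb, xb)].sum())  # spatial pool (sum)
    feat = np.asarray(feat)
    return feat / (np.linalg.norm(feat) + 1e-8)          # normalize to unit length

desc = toy_descriptor(np.random.rand(16, 16))
print(desc.shape)   # (n_orient * grid * grid,) = (128,)
```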
Feature Learning Architecture Pixels / Features Filter with Dictionary (patch/tiled/ convolutional) + Non-linearity Slide from Rob Fergus
Feature Learning Architecture Pixels / Features Filter with Dictionary (patch/tiled/ convolutional) + Non-linearity Normalization between feature responses (Group) Sparsity Max / Softmax Local Contrast Normalization (Subtractive / Divisive) Slide from Rob Fergus
Feature Learning Architecture Pixels / Features Filter with Dictionary (patch/tiled/ convolutional) + Non-linearity Normalization between feature responses (Group) Sparsity Max / Softmax Local Contrast Normalization (Subtractive / Divisive) Spatial/Feature (Sum or Max) Features Slide from Rob Fergus
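As an illustration of one of the normalization choices listed above, here is a minimal sketch of subtractive/divisive local contrast normalization on a single feature map. The Gaussian window size and the epsilon floor are illustrative values, not taken from any particular paper.

```python
# Minimal sketch of local contrast normalization: subtract a Gaussian-weighted
# local mean, then divide by the local standard deviation (divisive step).
import numpy as np
from scipy.ndimage import gaussian_filter

def local_contrast_normalize(x, sigma=2.0, eps=1e-4):
    """x: 2-D feature map (float array). sigma and eps are illustrative."""
    local_mean = gaussian_filter(x, sigma)
    centered = x - local_mean                         # subtractive normalization
    local_var = gaussian_filter(centered ** 2, sigma)
    local_std = np.sqrt(local_var)
    # Divisive normalization: floor the divisor to avoid amplifying
    # near-constant regions.
    return centered / np.maximum(local_std, np.maximum(local_std.mean(), eps))

out = local_contrast_normalize(np.random.rand(64, 64))
```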
Spatial Pyramid Matching SIFT Features Filter with Visual Words [ Lazebnik, Schmid, Ponce, CVPR 2006 ] Max Multi-scale spatial pool (Sum) Classifier Slide from Rob Fergus
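A small sketch of the multi-scale spatial pooling step, in the spirit of the spatial pyramid above: visual-word assignments are histogrammed over 1x1, 2x2, and 4x4 grids and concatenated. The pyramid levels, the sum pooling, and the function name are illustrative choices, not the exact recipe of the cited paper (which also weights the pyramid levels).

```python
# Sketch of multi-scale spatial pooling: local feature codes are histogrammed
# over 1x1, 2x2, and 4x4 grids and concatenated into one descriptor.
import numpy as np

def spatial_pyramid(positions, codes, n_words, levels=(1, 2, 4)):
    """positions: (N,2) array of (x,y) in [0,1); codes: (N,) visual-word ids."""
    feats = []
    for g in levels:
        cell = np.minimum((positions * g).astype(int), g - 1)   # grid cell per feature
        for cy in range(g):
            for cx in range(g):
                mask = (cell[:, 0] == cx) & (cell[:, 1] == cy)
                hist = np.bincount(codes[mask], minlength=n_words)  # sum-pool in cell
                feats.append(hist)
    v = np.concatenate(feats).astype(float)
    return v / (v.sum() + 1e-8)

pos = np.random.rand(500, 2)
codes = np.random.randint(0, 200, size=500)
print(spatial_pyramid(pos, codes, n_words=200).shape)   # (200 * (1+4+16),) = (4200,)
```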
Role of Normalization Lots of different mechanisms (e.g., max, sparsity, local contrast normalization) All induce local competition between features to explain the input (the explaining-away property). Convolution Convolutional Sparse Coding Filters Zeiler et al. [CVPR 10/ICCV 11], Kavukcuoglu et al. [NIPS 10], Yang et al. [CVPR 10] Slide from Rob Fergus
Role of Pooling Spatial Pooling - Invariance to small transformations (e.g., shifts) - Larger receptive fields Pooling Across Features - Gives AND/OR behavior (grammar) - Compositionality [ Zeiler, Taylor, Fergus, ICCV 2011 ] Pooling with latent variables/springs Slide from Rob Fergus
Role of Pooling Spatial Pooling - Invariance to small transformations (e.g., shifts) - Larger receptive fields Pooling Across Features - Gives AND/OR behavior (grammar) - Compositionality [ Zeiler, Taylor, Fergus, ICCV 2011 ] Pooling with latent variables/springs [ Felzenszwalb, Girshick, McAllester, Ramanan, PAMI 2009 ] [ Chen, Zhu, Lin, Yuille, Zhang, NIPS 2007 ] Slide from Rob Fergus
Image Restoration [ Mairal, Bach, Ponce, Sapiro, Zisserman, ICCV 2009 ] Image Pixels Feature Vector
Image Restoration [ Mairal, Bach, Ponce, Sapiro, Zisserman, ICCV 2009 ] Image Pixels Filter with Dictionary (patch) Feature Vector
Image Restoration [ Mairal, Bach, Ponce, Sapiro, Zisserman, ICCV 2009 ] Image Pixels Filter with Dictionary (patch) Sparsity Feature Vector
Image Restoration [ Mairal, Bach, Ponce, Sapiro, Zisserman, ICCV 2009 ] Image Pixels Filter with Dictionary (patch) Sparsity Spatial pool (Sum) Feature Vector
Sparse Representation for Image Restoration $\underbrace{y}_{\text{observed image}} = \underbrace{x_{\text{orig}}}_{\text{true image}} + \underbrace{w}_{\text{noise}}$ Can be cast as an energy minimization problem: $E(x) = \underbrace{\tfrac{1}{2}\|y - x\|_2^2}_{\text{reconstruction of observed image}} + \underbrace{E_{\text{prior}}(x)}_{\text{image prior (-log prior)}}$ or probabilistically: $p(y, x) = \underbrace{p(y \mid x)}_{\text{likelihood}} \underbrace{p(x)}_{\text{prior}}$ Classical priors: - Smoothness: $\|Lx\|_2^2$ - Total variation: $\|\nabla x\|_1^2$ Slide adopted from Julien Mairal
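The two views are the same thing up to constants. A short derivation, assuming zero-mean Gaussian noise $w \sim \mathcal{N}(0, \sigma^2 I)$ (the $1/\sigma^2$ factor is absorbed into the weighting of the prior term on the slide):

```latex
% MAP estimation <-> energy minimization, assuming w ~ N(0, sigma^2 I).
\hat{x} = \arg\max_x \; p(x \mid y)
        = \arg\max_x \; p(y \mid x)\, p(x)
        = \arg\min_x \; \underbrace{-\log p(y \mid x)}_{\frac{1}{2\sigma^2}\|y - x\|_2^2 + \text{const}}
                      \;\; \underbrace{-\log p(x)}_{E_{\text{prior}}(x)}
```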
Sparse Linear Model Let $x \in \mathbb{R}^m$ be an image (or a signal), and let $D = [d_1, \ldots, d_p] \in \mathbb{R}^{m \times p}$ be a set of normalized basis vectors (the dictionary). We can represent $x$ with a few basis vectors, i.e., there exists a sparse vector $\alpha \in \mathbb{R}^p$ (the sparse code) such that $x \approx D\alpha$: $\underbrace{x}_{x \in \mathbb{R}^m} \approx \underbrace{\begin{bmatrix} d_1 & d_2 & \cdots & d_p \end{bmatrix}}_{D \in \mathbb{R}^{m \times p}\ \text{(dictionary)}} \underbrace{\begin{bmatrix} \alpha[1] \\ \alpha[2] \\ \vdots \\ \alpha[p] \end{bmatrix}}_{\alpha \in \mathbb{R}^p\ \text{(sparse code)}}$ Slide adopted from Julien Mairal
Why sparsity? A dictionary can be good for representing a class of signals. We don't want to reconstruct noise! Any given patch looks like part of an image; a sum of a few patches is likely to produce a reasonable image patch, while a sum of many patches can reconstruct almost anything, including noise.
Lateral Inhibition Visual neurons respond less when they are activated at the same time as their neighbors than when one is activated alone. So the fewer neighboring neurons are stimulated, the more strongly a neuron responds. Images from Ying Nian Wu
Sparse Representation for Image Restoration Hand-designed dictionaries - Wavelets, Curvelets, Wedgelets, Bandlets, ... - [Haar, 1910], [Zweig, Morlet, Grossman ~70s], [Meyer, Mallat, Daubechies, Coifman, Donoho, Candes ~80s-today] ... (see [Mallat, 1999]) Learned dictionaries of patches - [Olshausen and Field, 1997], [Engan et al., 1999], [Lewicki and Sejnowski, 2000], [Aharon et al., 2006], [Roth and Black, 2005], [Lee et al., 2007] $\min_{\alpha_i, D} \sum_{i=1}^{N} \underbrace{\tfrac{1}{2}\|x_i - D\alpha_i\|_2^2}_{\text{reconstruction}} + \underbrace{\lambda \|\alpha_i\|_1}_{\text{sparsity}}$ The L1-norm induces sparsity. Slide from Julien Mairal
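For a fixed dictionary $D$, the sparse code $\alpha$ is the solution of a lasso problem. Below is a minimal ISTA (iterative soft-thresholding) sketch for that step; it is a generic solver shown for illustration, not the LARS/homotopy or coordinate-descent solvers used in the cited works, and the value of lambda and the iteration count are arbitrary.

```python
# Minimal ISTA sketch for the sparse-coding step: given a fixed dictionary D,
# solve  min_a 0.5*||x - D a||_2^2 + lam*||a||_1  by iterative soft-thresholding.
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_code(x, D, lam=0.1, n_iter=200):
    """x: (m,) signal; D: (m, p) dictionary with unit-norm columns."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)           # gradient of the reconstruction term
        a = soft_threshold(a - grad / L, lam / L)
    return a

m, p = 64, 256
D = np.random.randn(m, p)
D /= np.linalg.norm(D, axis=0)             # normalize dictionary atoms
x = D[:, :5] @ np.random.randn(5)          # a signal built from 5 atoms
alpha = sparse_code(x, D)
print(np.sum(np.abs(alpha) > 1e-6), "non-zero coefficients")
```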
Optimization for Dictionary Learning $\min_{\alpha_i, D} \sum_{i=1}^{N} \tfrac{1}{2}\|x_i - D\alpha_i\|_2^2 + \lambda \|\alpha_i\|_1$ Classical optimization does this in EM style (alternating between learning the dictionary and the sparse codes). Good results, but slow. [ Mairal et al., 2009a ] proposes online learning. The denoised image averages the overlapping reconstructed patches placed back at their locations: $I_{\text{denoised}} = \frac{1}{M} \sum_{i=1}^{N} R_i D \alpha_i$ Slide adopted from Julien Mairal
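A sketch of the alternating (EM-style) scheme described above, assuming scikit-learn is available for the lasso step: with $D$ fixed, compute the sparse codes; with the codes fixed, update $D$ by least squares and renormalize its columns. This is the slow batch variant; the online algorithm of Mairal et al. (2009) instead updates $D$ from streaming mini-batches. The hyperparameters below are illustrative.

```python
# Sketch of batch dictionary learning by alternating minimization.
import numpy as np
from sklearn.linear_model import Lasso

def learn_dictionary(X, p=128, lam=0.05, n_outer=10):
    """X: (m, N) matrix of training patches, one patch per column."""
    m, N = X.shape
    rng = np.random.default_rng(0)
    D = rng.standard_normal((m, p))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_outer):
        # Sparse-coding step: L1-regularized regression of each patch on D.
        lasso = Lasso(alpha=lam, fit_intercept=False, max_iter=2000)
        A = np.stack([lasso.fit(D, X[:, i]).coef_ for i in range(N)], axis=1)  # (p, N)
        # Dictionary-update step: least squares, then renormalize each atom.
        D = X @ A.T @ np.linalg.pinv(A @ A.T + 1e-6 * np.eye(p))
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-8)
    return D

patches = np.random.rand(64, 500)      # e.g., 500 vectorized 8x8 patches
D = learn_dictionary(patches)
```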
Results Slide adopted from Julien Mairal
Image Classification (Bag-of-Features) SIFT Features Filter with Visual Words Max Spatial pool (Sum) Classifier
Learning Codebooks for Image Classification An image is represented by a set of low-level (SIFT) descriptors $x_i$ at $N$ locations identified by their index $i$. Hard quantization (with $p$ visual words): $x_i \approx D\alpha_i$, $\alpha_i \in \{0,1\}^p$, $\sum_{j=1}^{p} \alpha_i[j] = 1$. Soft quantization: $\alpha_i[j] = \frac{\mathcal{N}(x_i; d_j, \sigma^2)}{\sum_{k=1}^{p} \mathcal{N}(x_i; d_k, \sigma^2)}$. Sparse coding: $\min_{\alpha_i, D} \sum_{i=1}^{N} \underbrace{\tfrac{1}{2}\|x_i - D\alpha_i\|_2^2}_{\text{reconstruction}} + \underbrace{\lambda \|\alpha_i\|_1}_{\text{sparsity}}$ Slide adopted from Julien Mairal
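A small sketch of the hard and soft assignment rules above for a single descriptor against a codebook $D$ (one visual word per column). The kernel width sigma is illustrative, and the sparse-coding alternative would call an L1 solver (e.g., the ISTA sketch earlier) instead.

```python
# Hard vs. soft quantization of one descriptor x against a codebook D.
import numpy as np

def hard_quantize(x, D):
    """alpha in {0,1}^p with a single 1 at the nearest visual word."""
    d2 = np.sum((D - x[:, None]) ** 2, axis=0)
    alpha = np.zeros(D.shape[1])
    alpha[np.argmin(d2)] = 1.0
    return alpha

def soft_quantize(x, D, sigma=0.5):
    """alpha[j] proportional to a Gaussian kernel N(x; d_j, sigma^2)."""
    d2 = np.sum((D - x[:, None]) ** 2, axis=0)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return w / (w.sum() + 1e-12)

D = np.random.rand(128, 1000)           # codebook: 1000 words of SIFT dimension
x = np.random.rand(128)
print(hard_quantize(x, D).sum(), soft_quantize(x, D).sum())   # both sum to 1
```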
Discriminative Learning of Dictionaries [ Mairal, Bach, Ponce, Sapiro, Zisserman, CVPR 2008 ] Positive class: $\min_{\alpha_i, D} \sum_{i=1}^{N} \underbrace{\tfrac{1}{2}\|x_i - D\alpha_i\|_2^2}_{\text{reconstruction}} + \underbrace{\lambda \|\alpha_i\|_1}_{\text{sparsity}}$ Negative class: $\min_{\alpha_i, D} \sum_{i=1}^{N} \underbrace{\tfrac{1}{2}\|x_i - D\alpha_i\|_2^2}_{\text{reconstruction}} + \underbrace{\lambda \|\alpha_i\|_1}_{\text{sparsity}}$
Learning Codebooks for Image Classification [ Mairal, Bach, Ponce, Sapiro, Zisserman, CVPR 2008 ] Slide adopted from Julien Mairal
Visual cortex V1: primary visual cortex simple cells complex cells [ Scientific American, 1999 ] Slide adopted from Ying Nian Wu
Visual cortex V1: primary visual cortex simple cells complex cells What is beyond V1? [ Scientific American, 1999 ] Slide adopted from Ying Nian Wu
Visual cortex V1: primary visual cortex simple cells complex cells What is beyond V1? Hierarchical model [ Scientific American, 1999 ] Slide adopted from Ying Nian Wu
Mid-level features Beyond edges: continuation, parallelism, junctions, corners. High-level: object parts, objects, scenes??? Slide adopted from Rob Fergus
Challenges Grouping mechanism - Want edge structures to group into more complex forms - But it is hard to define explicit rules Invariance to local distortions - Under distortions, corners, T-junctions, parallel lines, etc. can look quite different Slide adopted from Rob Fergus
Deep Feature Learning Build hierarchy of feature extractors (layers) - All the way from pixels to classifiers - Homogeneous (simple) structure for all layers - Unsupervised training Image/Video Pixels Layer 1 Layer 2 Layer 3 Simple Classifier Slide from Rob Fergus
Deep Feature Learning Build hierarchy of feature extractors (layers) - All the way from pixels to classifiers - Homogeneous (simple) structure for all layers - Unsupervised training Image/Video Pixels Layer 1 Layer 2 Layer 3 Simple Classifier Numerous approaches: Restricted Boltzmann Machines [Hinton, Ng, Bengio, ...] Sparse coding [Yu, Fergus, LeCun] Auto-encoders [LeCun, Bengio] ICA variants [Ng, Cottrell] & many more. Slide from Rob Fergus
Hierarchical Vision Models [Jin & Geman, CVPR 2006] e.g. animals, trees, rocks e.g. contours, intermediate objects e.g. linelets, curvelets, T-junctions e.g. discontinuities, gradient animal head instantiated by bear head Slide adopted from Rob Fergus
Single Layer Convolutional Architecture Input: Image Pixels / Features Filter Normalize Pool Output: Features / Classifier Slide from Rob Fergus
Single Deconvolutional Layer Convolutional form of sparse coding Slide from Rob Fergus
Toy Example Feature maps Filters Slide from Rob Fergus
Reversible Max Pooling Feature Map Slide from Rob Fergus
Reversible Max Pooling Pooling Feature Map Slide from Rob Fergus
Reversible Max Pooling Pooled Feature Maps Pooling Feature Map Slide from Rob Fergus
Reversible Max Pooling Max Locations Switches Pooled Feature Maps Pooling Feature Map Slide from Rob Fergus
Reversible Max Pooling Max Locations Switches Pooled Feature Maps Pooling Unpooling Feature Map Reconstructed Feature Map Slide from Rob Fergus
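A minimal NumPy sketch of pooling with remembered switches and the corresponding unpooling, assuming non-overlapping 2x2 regions and an even-sized map; the function names are invented for this illustration.

```python
# Sketch of reversible max pooling: pool 2x2 regions, remember which position
# ("switch") won, and unpool by placing each pooled value back at its switch
# location with zeros elsewhere.
import numpy as np

def max_pool_with_switches(fmap, k=2):
    h, w = fmap.shape
    blocks = fmap.reshape(h // k, k, w // k, k).transpose(0, 2, 1, 3).reshape(h // k, w // k, k * k)
    switches = blocks.argmax(axis=2)           # winning position inside each region
    pooled = blocks.max(axis=2)
    return pooled, switches

def unpool(pooled, switches, k=2):
    ph, pw = pooled.shape
    blocks = np.zeros((ph, pw, k * k))
    idx = np.indices((ph, pw))
    blocks[idx[0], idx[1], switches] = pooled   # put each value back at its switch
    return blocks.reshape(ph, pw, k, k).transpose(0, 2, 1, 3).reshape(ph * k, pw * k)

fmap = np.random.rand(8, 8)
pooled, sw = max_pool_with_switches(fmap)
recon = unpool(pooled, sw)                      # sparse map: zeros except at maxima
print(np.allclose(recon.max(), fmap.max()))     # True
```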
Overall Architecture (1 layer) Slide from Rob Fergus
Toy Example Pooled maps Feature maps Filters Slide from Rob Fergus
Overall Architecture (2 Layers) Slide from Rob Fergus
Model Parameters 7x7 filters at all layers Slide from Rob Fergus
Layer 1 Filters 15 filters/feature maps, showing max for each map Slide from Rob Fergus
Layer 2 Filters 50 filters/feature maps, showing max for each map projected down to image Slide from Rob Fergus
Layer 3 Filters 100 filters/feature maps, showing max for each map Slide from Rob Fergus
Layer 4 Filters 150 in total; receptive field is entire image Slide from Rob Fergus
Relative Size of Receptive Fields (to scale) Slide from Rob Fergus
Restricted Boltzmann Machines
Restricted Boltzmann Machines Units $v_i$ are binary (0/1). Logistic function: $p(v_i = 1 \mid \{v_j\}, j \neq i) = \frac{1}{1 + \exp(-b_i - \sum_j W_{ij} v_j)}$. A unit is activated based on a linear combination of the other units plus a bias. (Plot: $p(v_i = 1)$ rises from 0 to 1 as a logistic function of $b_i + \sum_j W_{ij} v_j$.)
Restricted Boltzmann Machines Units $v_i$ are binary (0/1). $p(v) = \frac{\exp(-E(v))}{\sum_{v'} \exp(-E(v'))}$, $E(v) = -\sum_i b_i v_i - \sum_{i \neq j} W_{ij} v_i v_j$. A more probable configuration has lower energy. Learning amounts to estimating the parameters of the model, $\theta = \{b_i, W_{ij}\}$, by Maximum Likelihood.
Maximum Likelihood Learning Typically we assume independence of the N samples: $L(x_1, x_2, \ldots, x_N) = \prod_{i=1}^{N} p(x_i)$. Take the log (which turns the product into a sum) and do gradient-based optimization with respect to the parameters. For the Boltzmann Machine that comes down to optimizing a sum of energy functions minus the normalizing constant: $p(v) = \frac{\exp(-E(v))}{\sum_{v'} \exp(-E(v'))}$
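For completeness, the standard derivation of where the gradients on the following slides come from (constant factors from how the pairwise sum in the energy is indexed are absorbed into the proportionality):

```latex
% Gradient of the log-likelihood for one training vector v,
% with p(v) = exp(-E(v)) / Z:
\frac{\partial \log p(v)}{\partial W_{ij}}
  = -\frac{\partial E(v)}{\partial W_{ij}} - \frac{\partial \log Z}{\partial W_{ij}}
  = v_i v_j - \sum_{v'} p(v')\, v'_i v'_j
  = v_i v_j - \langle v_i v_j \rangle_{\text{model}} .
% Averaging over the training set gives the update on the next slides:
\frac{1}{N}\sum_{n=1}^{N} \frac{\partial \log p(x_n)}{\partial W_{ij}}
  = \langle v_i v_j \rangle_{\text{data}} - \langle v_i v_j \rangle_{\text{model}} .
```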
Boltzmann Machine Learning So, essentially, we need to iteratively do: $W_{ij}^{(1)} = W_{ij}^{(0)} + \Delta W_{ij}^{(0)}, \ldots, W_{ij}^{(\mathrm{iter})} = W_{ij}^{(\mathrm{iter}-1)} + \Delta W_{ij}^{(\mathrm{iter}-1)}$ and $b_i^{(1)} = b_i^{(0)} + \Delta b_i^{(0)}, \ldots, b_i^{(\mathrm{iter})} = b_i^{(\mathrm{iter}-1)} + \Delta b_i^{(\mathrm{iter}-1)}$. Where do we get the gradients? $\Delta W_{ij} \propto \langle v_i v_j \rangle_{\text{data}} - \langle v_i v_j \rangle_{\text{model}}$ and $\Delta b_i \propto \langle v_i \rangle_{\text{data}} - \langle v_i \rangle_{\text{model}}$. The data term is easy: just look at the data. The model term requires samples from the model (Gibbs sampler run to equilibrium).
Boltzmann Machine Learning Law of large numbers: approximate the expectations using samples, $\Delta W_{ij} = \frac{1}{D} \sum_{d=1}^{D} v_i^{(d)} v_j^{(d)} - \frac{1}{M} \sum_{m=1}^{M} \tilde{v}_i^{(m)} \tilde{v}_j^{(m)}$, where the $v^{(d)}$ are data vectors and the $\tilde{v}^{(m)}$ are samples from the model.
Restricted Boltzmann Machines Units $v_i$ are binary (0/1). Visible units $v_i$, hidden units $h_j$, weights $W_{ij}$. $p(v, h) = \frac{\exp(-E(v, h))}{Z}$, $E(v, h) = -\sum_{ij} W_{ij} v_i h_j - \sum_i a_i v_i - \sum_j b_j h_j$, $p(h_j = 1 \mid v) = \frac{1}{1 + \exp(-b_j - \sum_i W_{ij} v_i)}$, $p(v_i = 1 \mid h) = \frac{1}{1 + \exp(-a_i - \sum_j W_{ij} h_j)}$
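Given the two conditionals above, one training update is commonly approximated with contrastive divergence (CD-1): a single Gibbs step started at the data stands in for the $\langle \cdot \rangle_{\text{model}}$ expectation. Below is a minimal NumPy sketch with illustrative sizes and learning rate; this is the standard practical approximation to the maximum-likelihood gradient, not a procedure taken verbatim from these slides.

```python
# Minimal sketch of one CD-1 update for a binary RBM, using p(h|v) and p(v|h).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_step(V, W, a, b, lr=0.05, rng=np.random.default_rng(0)):
    """V: (batch, n_visible) binary data; W: (n_visible, n_hidden); a, b: biases."""
    # Positive phase: p(h=1|v) on the data.
    ph_data = sigmoid(V @ W + b)
    h_sample = (rng.random(ph_data.shape) < ph_data).astype(float)
    # Negative phase: one Gibbs step v -> h -> v' -> h'.
    pv_recon = sigmoid(h_sample @ W.T + a)
    v_recon = (rng.random(pv_recon.shape) < pv_recon).astype(float)
    ph_recon = sigmoid(v_recon @ W + b)
    # Gradient approximation: <v h>_data - <v h>_model (batch averages).
    n = V.shape[0]
    W += lr * (V.T @ ph_data - v_recon.T @ ph_recon) / n
    a += lr * (V - v_recon).mean(axis=0)
    b += lr * (ph_data - ph_recon).mean(axis=0)
    return W, a, b

n_vis, n_hid = 784, 256
W = 0.01 * np.random.randn(n_vis, n_hid)
a, b = np.zeros(n_vis), np.zeros(n_hid)
V = (np.random.rand(32, n_vis) > 0.5).astype(float)    # a batch of binary "images"
W, a, b = cd1_step(V, W, a, b)
```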
Restricted Boltzmann Machines Units $v_i$ are binary (0/1) - still not very realistic, because most data in the real world is continuous. Gaussian visible units: $p(v_i \mid h) = \mathcal{N}\!\left(a_i + \sum_j W_{ij} h_j,\ \sigma^2\right)$
Auto-encoders [ Hinton and Salakhutdinov, Science 06 ] Patches 28x28 We train the auto-encoder to reproduce its input vector as its output This forces it to compress as much information as possible into the 30 numbers in the central bottleneck. 1000 neurons 500 neurons 250 neurons 30 These 30 numbers are then a good way to visualize data and do classification. 250 neurons 500 neurons 1000 neurons 28x28 Patches
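A minimal PyTorch sketch of the architecture described above (784 → 1000 → 500 → 250 → 30 and the mirrored decoder), trained with a plain reconstruction loss. PyTorch is assumed available, and the layer-wise RBM pretraining used in the original work is not shown.

```python
# Sketch of the deep auto-encoder described above, trained on a random batch
# of flattened 28x28 patches with a mean-squared reconstruction loss.
import torch
import torch.nn as nn

sizes = [28 * 28, 1000, 500, 250, 30]

def mlp(dims, final_activation):
    layers = []
    for i in range(len(dims) - 1):
        layers.append(nn.Linear(dims[i], dims[i + 1]))
        layers.append(nn.Sigmoid() if i < len(dims) - 2 else final_activation)
    return nn.Sequential(*layers)

encoder = mlp(sizes, nn.Identity())          # linear 30-d code at the bottleneck
decoder = mlp(sizes[::-1], nn.Sigmoid())     # reconstruct pixel intensities in [0,1]
model = nn.Sequential(encoder, decoder)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(64, 28 * 28)                  # a batch of flattened patches
for _ in range(10):                          # a few illustrative training steps
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), x)
    loss.backward()
    opt.step()
code = encoder(x)                            # 30-d representation per patch
```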
Learning a Compositional Hierarchy of Object Structure [ Fidler & Leonardis, CVPR 07; Fidler, Boben & Leonardis, CVPR 08 ] Parts model The architecture Learned parts
Learning a Compositional Hierarchy of Object Structure [ Fidler & Leonardis, CVPR 07; Fidler, Boben & Leonardis, CVPR 08 ] Layer 2 Layer 3
Learning a Compositional Hierarchy of Object Structure [ Fidler & Leonardis, CVPR 07; Fidler, Boben & Leonardis, CVPR 08 ]
Conclusions Interesting paradigm, where the algorithm tries to learn everything, right? - Patch size (8x8 or 20x20?) - Learning parameters - Need lots and lots of data, typically - Higher levels mostly work on PASCAL or other simple datasets - It is hard to train multi-layer architectures! - Since learning is in effect unsupervised, it's difficult to debug or gather what's going on.