Does the Brain do Inverse Graphics? Geoffrey Hinton, Alex Krizhevsky, Navdeep Jaitly, Tijmen Tieleman & Yichuan Tang, Department of Computer Science, University of Toronto

The representation used by the neural nets that work best for recognition (Yann LeCun) Convolutional neural nets use multiple layers of feature detectors that have local receptive fields and shared weights. The feature extraction layers are interleaved with sub-sampling layers that throw away information about precise position in order to achieve some translation invariance.

Combining the outputs of replicated features Get a small amount of translational invariance at each level by averaging some neighboring replicated detectors to give a single output to the next level. This reduces the number of inputs to the next layer of feature extraction, thus allowing us to have many more different types of feature at the next layer. Taking the maximum activation of some neighboring detectors works slightly better.
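As a concrete illustration of the two combination rules described above, here is a minimal NumPy sketch (my own, not from the slides) that pools neighboring replicated detector outputs by either averaging them or taking the maximum activation:

```python
# Minimal sketch: averaging vs. max-pooling the outputs of neighboring
# replicated detectors, using plain NumPy.
import numpy as np

def pool(feature_map, size=2, mode="max"):
    """Pool non-overlapping size x size neighborhoods of one feature map."""
    h, w = feature_map.shape
    h, w = h - h % size, w - w % size            # trim to a multiple of the pool size
    blocks = feature_map[:h, :w].reshape(h // size, size, w // size, size)
    if mode == "max":
        return blocks.max(axis=(1, 3))           # keep the strongest nearby detector
    return blocks.mean(axis=(1, 3))              # average the nearby detectors

responses = np.random.rand(8, 8)                 # replicated detector outputs
print(pool(responses, mode="avg").shape)         # (4, 4): fewer inputs to the next layer
print(pool(responses, mode="max").shape)         # (4, 4)
```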

Why convolutional neural networks are doomed Convolutional nets are doomed for two reasons. First, sub-sampling loses the precise spatial relationships between higher-level parts such as a nose and a mouth; these precise spatial relationships are needed for identity recognition, although overlapping the sub-sampling pools mitigates this. Second, they cannot extrapolate their understanding of geometric relationships to radically new viewpoints.

Equivariance vs Invariance Sub-sampling tries to make the neural activities invariant to small changes in viewpoint. This is a silly goal, motivated by the fact that the final label needs to be viewpoint-invariant. It's better to aim for equivariance: changes in viewpoint lead to corresponding changes in neural activities. In the perceptual system, it's the weights that code viewpoint-invariant knowledge, not the neural activities.

What is the right representation of images? Computer vision is inverse computer graphics, so the higher levels of a vision system should look like the representations used in graphics. Graphics programs use hierarchical models in which spatial structure is modeled by matrices that represent the transformation from a coordinate frame embedded in the whole to a coordinate frame embedded in each part. These matrices are totally viewpoint-invariant. This representation makes it easy to compute the relationship between a part and the retina from the relationship between the whole and the retina. It's just a matrix multiply!
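A minimal sketch of that matrix multiply, assuming 2-D poses written as 3x3 homogeneous coordinate-transformation matrices; the names and numbers are illustrative, not from the slides:

```python
# The whole-to-part matrix is viewpoint-invariant; the part's relationship to
# the retina then follows from the whole's relationship to the retina by one
# matrix multiply.
import numpy as np

def pose(tx, ty, theta, s=1.0):
    """Homogeneous matrix for a rotation by theta, scale s, translation (tx, ty)."""
    c, si = s * np.cos(theta), s * np.sin(theta)
    return np.array([[c, -si, tx],
                     [si,  c, ty],
                     [0.,  0., 1.]])

face_to_retina = pose(10.0, 5.0, 0.3)        # viewpoint-dependent (activity-like)
mouth_in_face  = pose(0.0, -2.0, 0.0, 0.5)   # viewpoint-invariant (weight-like)

mouth_to_retina = face_to_retina @ mouth_in_face   # one matrix multiply
print(mouth_to_retina)
```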

Some psychological evidence that our visual systems impose coordinate frames in order to represent shapes (after Irvin Rock)

The cube task You can all imagine a wire-frame cube. Now imagine the cube standing on one corner so that the body diagonal from that corner, through the center of the cube, to the opposite corner is vertical. Where are the other corners of the cube?

An arrangement of 6 rods

A different percept of the 6 rods

Alternative representations The very same arrangement of rods can be represented in quite different ways. It's not like the Necker cube, where the alternative percepts disagree on depth. The alternative percepts do not disagree, but they make different facts obvious. In the zig-zag representation it is obvious that there is one pair of parallel edges. In the crown representation there are no obvious pairs of parallel edges because the edges do not align with the intrinsic frame of any of the parts.

A structural description of the crown formed by the six rods

A structural description of the zig-zag

A mental image of the crown A mental image specifies how each node is related to the viewer. This makes it easier to see new relationships.

A task that forces you to form a mental image Imagine making the following journey: you go a mile east, then you go a mile north, then you go a mile east again. What is the direction back to your starting point?

Analog operations on structural descriptions We can imagine the petals of the crown all folding upwards and inwards. What is happening in our heads when we imagine this continuous transformation? Why are the easily imagined transformations quite different for the two different structural descriptions? One particular continuous transformation, called mental rotation, was intensively studied by Roger Shepard and other psychologists in the 1970s. Mental rotation really is continuous: when we are halfway through performing a mental rotation, we have a mental representation at the intermediate orientation.

Mental rotation: More evidence for coordinate frames We perform mental rotation to decide if the tilted R has the correct handedness, not to recognize that it is an R. But why do we need to do mental rotation to decide handedness?

What mental rotation achieves It is very difficult to compute the handedness of a coordinate transformation by looking at the individual elements of the matrix. The handedness is the sign of the determinant of the matrix relating the object to the viewer. This is a high-order parity problem. To avoid this computation, we rotate to upright while preserving handedness, then we look to see which way the object faces when it is in its familiar orientation. If we had individual neurons that represented a whole pose, we would not have a problem identifying handedness, because these neurons would have a handedness.
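For illustration only, here is how the sign of the determinant gives the handedness of a transformation, comparing a hypothetical rotation with the same rotation composed with a mirror flip:

```python
# The handedness of a coordinate transformation is the sign of its determinant,
# which is hard to read off the individual matrix elements.
import numpy as np

R = np.array([[np.cos(0.4), -np.sin(0.4)],
              [np.sin(0.4),  np.cos(0.4)]])     # a pure rotation: right-handed
M = R @ np.diag([1.0, -1.0])                    # the same rotation composed with a flip

print(np.sign(np.linalg.det(R)))   #  1.0 -> same handedness as the familiar "R"
print(np.sign(np.linalg.det(M)))   # -1.0 -> mirror image
```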

Two layers in a hierarchy of parts A higher level visual entity is present if several lower level visual entities can agree on their predictions for its pose. [Diagram: a mouth capsule with pose T_i and a nose capsule with pose T_h each predict the pose p_j of a face capsule through learned part-whole matrices, giving votes T_i T_ij and T_h T_hj; the pose of the mouth, T_i, is its relationship to the camera.]

A crucial property of the pose vectors They allow spatial transformations to be modeled by linear operations. This makes it easy to learn a hierarchy of visual entities. It makes it easy to generalize across viewpoints. The invariant geometric properties of a shape are in the weights, not in the activities. The activities are equivariant.
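A small sketch of the agreement idea from the previous two slides, assuming poses are 3x3 homogeneous matrices and that the part-whole matrices T_ij and T_hj are already known; the numbers and threshold are made up for illustration:

```python
# A face capsule is "present" when its parts agree on the face pose they predict.
# The part-whole matrices live in the weights (viewpoint-invariant); the part
# poses are activities (equivariant).
import numpy as np

def pose(tx, ty, theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx], [s, c, ty], [0., 0., 1.]])

T_ij = pose(0.0,  2.0, 0.0)            # learned part-whole matrix for the mouth
T_hj = pose(0.0, -1.0, 0.0)            # learned part-whole matrix for the nose

T_j = pose(3.0, 2.0, 0.2)              # a face pose used to generate consistent parts
T_i = T_j @ np.linalg.inv(T_ij)        # mouth pose consistent with that face
T_h = T_j @ np.linalg.inv(T_hj)        # nose pose consistent with that face

vote_from_mouth = T_i @ T_ij           # each part's vote for the face pose
vote_from_nose  = T_h @ T_hj
agreement = np.linalg.norm(vote_from_mouth - vote_from_nose)
print("face present" if agreement < 0.1 else "no face")   # votes agree -> entity present
```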

Extrapolating shape recognition to very different sizes, orientations and positions that were not in the training data. Current wisdom (e.g. Andrew Ng's talk): learn different models for very different viewpoints, and get a lot more training data to cover all significantly different viewpoints. The only reasonable alternative? Massive extrapolation requires linear models, so capture the knowledge of the geometry of the shape in a linear model.

A toy example of what we could do if we could reliably extract the poses of parts of objects Consider images composed of five ellipses. The shape is determined entirely by the spatial relations between the ellipses because all parts have the same shape. Can we learn sets of spatial relationships from data that contains several different shapes? Can we generalize far beyond the range of variation in the training examples? Can we learn without being told which ellipse corresponds to which part of a shape?

Examples of two shapes (Yichuan Tang) [Figure: examples of training data for each shape, and generated data after training. It can extrapolate!]

Learning one shape using factor analysis The pose of the whole can predict the poses of all the parts. This generative model is easy to fit using the EM algorithm for factor analysis. [Diagram: the face pose T_j generates the mouth pose T_i and the nose pose T_h through the part-whole matrices T_ij and T_hj, which play the role of factor loadings.]
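A hedged sketch of fitting such a model with off-the-shelf factor analysis (scikit-learn's FactorAnalysis, which fits maximum-likelihood loadings iteratively); the synthetic pose data and dimensions below are my assumptions, not the slides' setup:

```python
# The latent factors play the role of the pose of the whole, and the loadings
# play the role of the viewpoint-invariant whole-to-part relations.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
whole_pose = rng.normal(size=(1000, 6))              # latent pose of the whole, per image
loadings = rng.normal(size=(6, 30))                  # fixed whole-to-part relations
parts = whole_pose @ loadings + 0.1 * rng.normal(size=(1000, 30))   # 5 parts x 6-D poses

fa = FactorAnalysis(n_components=6)                  # maximum-likelihood fit
fa.fit(parts)
print(fa.components_.shape)                          # (6, 30): recovered loadings
```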

Learning the right assignments of the ellipses to the parts of a shape. If we do not know how to assign the ellipses to the parts of a known shape, we can try many different assignments and pick the one that gives the best reconstruction. Then we assume that is the correct assignment and use it for learning. This quickly leads to models that have strong opinions about the correct assignment. We could eliminate most of the search by using a feed-forward neural net to predict the assignments.
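A minimal sketch of the assignment search just described: enumerate every assignment of ellipses to parts and keep the one with the lowest reconstruction error. The function names and the form of the reconstruction model are hypothetical:

```python
# Try every ellipse-to-part assignment and keep the one the current model
# reconstructs best; that assignment is then used for the next round of learning.
import numpy as np
from itertools import permutations

def best_assignment(ellipse_poses, reconstruct):
    """ellipse_poses: one pose vector per ellipse; reconstruct: a model that
    returns its reconstruction of a concatenated, ordered pose vector."""
    best, best_err = None, np.inf
    for order in permutations(range(len(ellipse_poses))):
        ordered = np.concatenate([ellipse_poses[k] for k in order])
        err = np.sum((ordered - reconstruct(ordered)) ** 2)
        if err < best_err:
            best, best_err = order, err
    return best, best_err

poses = [np.random.rand(6) for _ in range(5)]                   # 5 ellipses -> 120 assignments
order, err = best_assignment(poses, reconstruct=lambda v: v)    # dummy model for illustration
print(order, err)
```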

Fitting a mixture of shapes We can use a mixture of factor analysers (MFA) to fit images that contain many different shapes so long as each image only contains one example of one shape.

Recognizing deformed shapes with MFA

A big problem How do we get from pixels to the first level parts that output explicit pose parameters? We do not have any labeled data. This de-rendering stage has to be very non-linear.

A picture of three capsules [Diagram: three capsules, each outputting an intensity i and coordinates x, y for its visual entity, computed from the input image. i is the intensity of the capsule's visual entity.]

Extracting pose information by using a domain specific decoder (Navdeep Jaitly & Tijmen Tieleman) The idea is to define a simple decoder that produces an image by adding together contributions from each capsule. Each capsule learns a fixed template that gets intensity-scaled and translated differently to reconstruct each image. The encoder must learn to extract the appropriate intensity and translation for each capsule from the input image.

[Diagram: the decoder adds together intensity-scaled and translated contributions, one learned template per capsule with its own i, x, y, to produce the output image. The encoder is trained using backpropagation.]
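A minimal NumPy sketch of such a decoder (my own illustration; the real model learns the templates by backpropagation and uses real-valued translations, whereas this sketch just places fixed templates at integer offsets):

```python
# Each capsule contributes its template, scaled by a per-image intensity and
# shifted by per-image offsets; the contributions are summed into the output image.
import numpy as np

def decode(templates, intensities, offsets, image_shape=(28, 28)):
    """templates: (n_caps, th, tw); intensities: (n_caps,);
    offsets: (n_caps, 2) integer (dx, dy) placements of each template."""
    out = np.zeros(image_shape)
    for tpl, inten, (dx, dy) in zip(templates, intensities, offsets):
        th, tw = tpl.shape
        out[dy:dy + th, dx:dx + tw] += inten * tpl   # place the scaled template
    return out

templates = np.random.rand(3, 9, 9)                  # one template per capsule
image = decode(templates,
               np.array([1.0, 0.5, 0.0]),            # per-image intensities
               np.array([[2, 3], [10, 10], [0, 0]])) # per-image translations
print(image.shape)   # (28, 28); training would backpropagate the reconstruction error
```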

The templates learned by the autoencoder Each template is multiplied by a case-specific intensity and translated by a case-specific Δx, Δy. Then it is added to the output image.

[Figure: for each input image, its reconstruction, the reconstruction error, and the contributions of capsules 1-4.]

Modeling the capsule outputs with a factor analyser [Diagram: 10 factors generate the concatenated capsule outputs i1, x1, y1, i2, x2, y2, i3, x3, y3, i4, x4, y4, ...] We learn a mixture of 10 factor analysers on unlabeled data. What do the means look like?

Means discovered by a mixture of factor analysers [Figure: mean − 2σ and mean + 2σ of each factor, for a mixture fit directly on pixels and one fit on the outputs of capsules.]

Another simple way to learn the lowest level parts Use pairs of images that are related by a known coordinate transformation, e.g. a small translation of the image. We often have non-visual access to image transformations, e.g. when we make an eye movement. Cats learn to see much more easily if they control the image transformations (Held & Hein).

Learning the lowest level capsules We are given a pair of images related by a known translation. Step 1: Compute the capsule outputs for the first image; each capsule uses its own set of recognition hidden units to extract the x and y coordinates of the visual entity it represents (and also the probability of its existence). Step 2: Apply the transformation to the outputs of each capsule: just add Δx to each x output and Δy to each y output. Step 3: Predict the transformed image from the transformed outputs of the capsules; each capsule uses its own set of generative hidden units to compute its contribution to the prediction.

[Diagram: each capsule extracts p, x, y from the input image; +Δx and +Δy are added to the x and y outputs; p gates the capsule's contribution to the actual output, which is compared with the target output. p is the probability that the capsule's visual entity is present.]
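A compact NumPy sketch of one such capsule, with assumed layer sizes and nonlinearities rather than the paper's exact architecture; it shows the forward pass whose prediction error would be backpropagated:

```python
# One capsule: recognition units extract (p, x, y) from the first image, the
# known shift (dx, dy) is added to x and y, and generative units predict this
# capsule's gated contribution to the second image.
import numpy as np

rng = np.random.default_rng(0)
D, H = 28 * 28, 32                              # pixels, hidden units per capsule
W_rec = rng.normal(0, 0.01, size=(D, H))        # recognition weights
w_p   = rng.normal(0, 0.01, size=H)             # hidden -> presence probability
W_xy  = rng.normal(0, 0.01, size=(H, 2))        # hidden -> (x, y)
W_gen = rng.normal(0, 0.01, size=(2, H))        # (x, y) -> generative hiddens
W_img = rng.normal(0, 0.01, size=(H, D))        # generative hiddens -> image

def predict_transformed(image, dx, dy):
    h = np.tanh(image @ W_rec)                          # step 1: recognition units
    p = 1 / (1 + np.exp(-h @ w_p))                      # probability the entity is present
    x, y = h @ W_xy                                     # its extracted coordinates
    g = np.tanh(np.array([x + dx, y + dy]) @ W_gen)     # step 2: apply the known shift
    return p * (g @ W_img)                              # step 3: gated contribution

image, target = rng.random(D), rng.random(D)    # stand-ins for the image pair
loss = np.mean((predict_transformed(image, 2.0, -1.0) - target) ** 2)
print(loss)                                     # backprop on this loss trains the capsule
```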

Why it has to work When the net is trained with back-propagation, the only way it can get the transformations right is by using x and y in a way that is consistent with the way we are using Δx and Δy. This allows us to force the capsules to extract the coordinates of visual entities without having to decide what the entities are or where they are.

[Diagram: a variant of the same architecture; each capsule's probability p that its feature is present gates its contribution to the output image predicted from the input image.]

Dealing with the three dimensional world Use stereo images to allow the encoder to extract 3-D poses. Using capsules, 3-D would not be much harder than 2-D if we started with 3-D pixels. The loss of the depth coordinate is a separate problem from the complexity of 3-D geometry. At least capsules stand a chance of dealing with the 3-D geometry properly. Convolutional nets in 3-D are a nightmare.

Dealing with 3-D viewpoint (Alex Krizhevsky) [Figure: input stereo pair, output stereo pair, and the correct output.]

It also works on test data

The end. The work on transforming autoencoders is in a paper called "Transforming Auto-encoders" on my webpage: http://www.cs.toronto.edu/~hinton/absps/transauto6.pdf The work on dropout described in my previous lecture is in a paper on arXiv: http://arxiv.org/abs/1207.0580

Relationship to Slow Feature Analysis Slow feature analysis uses a very primitive model of the dynamics. It assumes that the underlying state does not change much. This is a much weaker way of extracting features than using precise knowledge of the dynamics. Also, slow feature analysis only produces dumb features that do not output explicit instantiation parameters.

Relationship to a Kalman filter A linear dynamical system can predict the next observation vector, but only when there is a linear relationship between the underlying dynamics and the observations. The extended Kalman filter assumes linearity about the current operating point; it's a fudge. Transforming autoencoders use non-linear recognition units to map the observation space to the space in which the dynamics is linear. Then they use non-linear generation units to map the prediction back to observation space. This is a much better approach than the extended Kalman filter, especially when the dynamics is known.