Image processing and features


Image processing and features. Gabriele Bleser, gabriele.bleser@dfki.de. Thanks to Harald Wuest, Folker Wientapper and Marc Pollefeys.

Introduction. Previous lectures: geometry (pose estimation, epipolar geometry). What is the input to these geometric computations? Today: the (point) correspondence problem. 2

We always need some image processing: camera calibration, camera pose estimation, homography estimation, F matrix estimation, triangulation. 3

Introduction. First approaches: marker based. A marker (also fiducial) is an artificial landmark with a distinctive pattern (unique code) that is easy to detect in images. http://www.hitl.washington.edu/artoolkit/ 4

Introduction. Today: markerless image processing using natural landmarks/features (e.g. CAD model, line model). 5

Motivation (Kanade-Lucas point tracker) 6

Formalisation. The correspondence problem consists in deciding either which primitive in an image corresponds to which landmark in 3D space, or which primitive in one image corresponds to which primitive in another image (in the sense that both are images of the same landmark in 3D space). Obvious problems? 7

Nomenclature. Feature: a primitive (point, region, line, curve, etc.) combined with a descriptor describing its appearance, associated with a natural or synthetic landmark in 3D space. Corresponding features form the basis for pose and structure estimation. Here: focus on rectangular markers and point features. 8

Outline: marker based tracking (overview), point of interest detection, feature matching, feature tracking, need for wide baseline matching. 9

Marker based tracking. Representation of a rectangular marker: the marker code and the 3D coordinates of its corner points (corners numbered 0-3 in the figure; side length 10):

<Marker>
  <Code Line1="1111"
        Line2="0100"
        Line3="0100"
        Line4="0000"/>
  <Points nb="4">
    <Point x="0" y="0" z="0"/>
    <Point x="10" y="0" z="0"/>
    <Point x="10" y="10" z="0"/>
    <Point x="0" y="10" z="0"/>
  </Points>
</Marker>

Marker based tracking: workflow 1. Closed contour extraction 2. Square detection 3. Identification (marker code) 4. Establishing 2D/3D correspondences 5. Homography estimation (previous lecture) 6. Camera pose extraction (previous lecture) 11

Marker based tracking. 1. Closed contour extraction: compute the image gradient (convolution with a 1st order gradient filter). How do we start? 12

Explanation: image convolution. Definition of the discrete 2D convolution: (I * H)(x, y) = Σ_{x0} Σ_{y0} H(x0, y0) · I(x − x0, y − y0). Properties: commutative, I * H = H * I; associative, I * (G * H) = (I * G) * H; distributive, I * (G + H) = I * G + I * H; derivation, ∂x(I * H) = (∂x I) * H = I * (∂x H). Typical kernels H: e.g. Gaussian smoothing, mean filter, etc. 13
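To make the definition concrete, here is a minimal Python/NumPy sketch of a discrete 2D convolution over the valid image region (function name and the mean-filter example are only illustrative; in practice one would use a library routine such as scipy.signal.convolve2d):

```python
import numpy as np

def convolve2d(I, H):
    """Discrete 2D convolution of image I with kernel H (valid region only)."""
    kh, kw = H.shape
    ih, iw = I.shape
    Hf = H[::-1, ::-1]  # flip the kernel, as required by the convolution definition
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(I[y:y + kh, x:x + kw] * Hf)
    return out

# Example: smoothing with a 3x3 mean filter
I = np.random.rand(8, 8)
mean_filter = np.ones((3, 3)) / 9.0
smoothed = convolve2d(I, mean_filter)
```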

Explanation: 1st order gradient filters. The image derivative measures the change in pixel intensity. Simple discrete approximation: convolution with a difference mask, e.g. D_x(x, y) = [−1 1] and D_y(x, y) = [−1 1]ᵀ. Better: Gaussian smoothing + differentiation, approximated by convolution with the Sobel kernels M̂_x(x, y) = [−1 0 1; −2 0 2; −1 0 1] and M̂_y(x, y) = [−1 −2 −1; 0 0 0; 1 2 1], which approximate the partial derivatives of a Gaussian kernel. 14

Explanation: 1st order gradient filters. Image gradients are the basis for edge and corner detection. From the input image we compute the horizontal derivative I_x and the vertical derivative I_y; from these follow the edge strength ‖∇I‖ = sqrt(I_x² + I_y²) and the gradient orientation θ = atan2(I_y, I_x). 15
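For instance, the gradients, edge strength and orientation can be computed with OpenCV's Sobel operator; this sketch assumes a greyscale image file named image.png and is only one possible implementation:

```python
import cv2
import numpy as np

gray = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

Ix = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)   # derivative along x (horizontal)
Iy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)   # derivative along y (vertical)

magnitude = np.sqrt(Ix**2 + Iy**2)                # edge strength
orientation = np.arctan2(Iy, Ix)                  # gradient orientation in radians
```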

Marker based tracking. 1. Closed contour extraction: compute the image gradient (convolution with a 1st order gradient filter), find edges (e.g. Canny operator: non-maxima suppression, hysteresis threshold), then find closed contours. How does this work? RGB → greyscale → binary (edge/non-edge). 16
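A possible OpenCV sketch of this step (file name and thresholds are placeholders, and the contour retrieval mode is one choice among several):

```python
import cv2

img = cv2.imread("frame.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Canny already includes non-maxima suppression and hysteresis thresholding
edges = cv2.Canny(gray, 50, 150)
# OpenCV 4.x returns (contours, hierarchy); older versions return three values
contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
```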

Marker based tracking. 2. Square detection: approximate closed contours as polygons; check for squares (4 corners, minimal/maximal area, convexity, approx. 90 degree angles); find the corners. 17
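One way to implement the square test, sketched with OpenCV's polygon approximation (the area threshold and approximation tolerance are illustrative values; the 90-degree angle check is omitted for brevity):

```python
import cv2

def is_square_candidate(contour, min_area=100.0):
    """Rough square test for one closed contour."""
    peri = cv2.arcLength(contour, True)
    poly = cv2.approxPolyDP(contour, 0.02 * peri, True)  # polygon approximation
    return (len(poly) == 4                               # exactly 4 corners
            and cv2.contourArea(poly) > min_area         # minimal area
            and cv2.isContourConvex(poly))               # convexity
```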

Marker based tracking 3. Marker identification: assign known markers to the detected squares? 18

Marker based tracking. 3. Marker identification: for each detected square, 1. estimate the homography, 2. sample and compare the code in all four directions, 3. find the best overall fit (using any similarity measure). 19
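A hedged sketch of steps 1 and 2 with OpenCV: rectify the detected square with a homography and sample a binary code grid. The grid size, cell size and threshold are assumptions for illustration, not the lecture's values:

```python
import cv2
import numpy as np

def read_marker_code(gray, corners, grid=4, cell=16):
    """corners: 4x2 float32 array of the square's image corners (ordered)."""
    size = grid * cell
    target = np.float32([[0, 0], [size, 0], [size, size], [0, size]])
    # Homography (perspective transform) from the image square to a canonical square
    H = cv2.getPerspectiveTransform(np.float32(corners), target)
    rectified = cv2.warpPerspective(gray, H, (size, size))
    # Sample one bit per cell by thresholding the mean intensity
    code = np.zeros((grid, grid), dtype=np.uint8)
    for r in range(grid):
        for c in range(grid):
            patch = rectified[r*cell:(r+1)*cell, c*cell:(c+1)*cell]
            code[r, c] = 1 if patch.mean() > 127 else 0
    return code  # compare against the stored code in all four rotations
```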

Marker based tracking. Advantages: reliable detection, few outliers. Disadvantages: modification/preparation of the natural scene is required; all markers must be given in one coordinate system, which is hard to measure; registration of natural objects in the marker coordinate system is required (for augmentations). Therefore: markerless tracking. 20

Markerless tracking. Principle: natural features are detected in the images and are then either tracked or matched in subsequent frames. Possible feature descriptors: surrounding texture patch, colour histograms, gradient histogram (SIFT). Topics: point of interest (POI) detection, feature matching, feature tracking. 21

Point of interest detection. What are good point features? Points with significant greyscale changes in all image directions (corners) that are stable across views. Homogeneous region: no change; edge: change in one direction; corner: significant change in all directions. 22

Point of interest detection. Find points that differ as much as possible from all neighbouring points (examples: homogeneous region, edge, corner). 23

Point of interest detection. How do we detect corners? Consider the change of intensity under a small displacement: the sum of squared differences over an area W with shift (u, v), E(u, v) = Σ_{(x,y) ∈ W} ( I(x+u, y+v) − I(x, y) )², where I(x+u, y+v) is the displaced intensity and I(x, y) the intensity. For corners, the SSD E(u, v) is big for all shifts (u, v). 24

Point of interest detection. Approximation for small displacements (first order Taylor expansion): E(u, v) ≈ (u v) M (u v)ᵀ with M = Σ_{(x,y) ∈ W} [ I_x²  I_x I_y ; I_x I_y  I_y² ]. M is a symmetric matrix called the structure tensor, which describes the intensity changes of the respective image region. 25
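The entries of M can be computed densely over the whole image; a minimal sketch (the window size and the use of a box filter as the summation window are choices made here for illustration):

```python
import cv2
import numpy as np

def structure_tensor(gray, window=5):
    """Return the per-pixel entries Ixx, Ixy, Iyy of the structure tensor M."""
    Ix = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    Iy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    # Sum the gradient products over a local window (box filter)
    Ixx = cv2.boxFilter(Ix * Ix, -1, (window, window))
    Ixy = cv2.boxFilter(Ix * Iy, -1, (window, window))
    Iyy = cv2.boxFilter(Iy * Iy, -1, (window, window))
    return Ixx, Ixy, Iyy
```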

Point of interest detection (homogeneous, edge, corner). Find points for which the minimum change E(u, v) over all shifts (u, v) is still big, i.e. maximise the smallest eigenvalue of M. 26

Point of interest detection. Geometric interpretation of the structure tensor: its eigenvalues and eigenvectors describe the gradient distribution. The eigenvector belonging to the larger eigenvalue points in the direction of largest intensity change, the other one in the direction of smallest change. 27

Point of interest detection. Classification of image pixels based on the eigenvalues λ1, λ2 of the structure tensor: Corner: λ1 and λ2 big, λ1 ~ λ2, intensity change in all directions. Edge: λ1 >> λ2 (or λ2 >> λ1), change in one direction only. Homogeneous region: λ1 and λ2 small, constant intensity in all directions. 28

Point of interest detection pipeline: image → interest operator (e.g. Harris, KLT) → interest map → non-maxima suppression → POI list. 29
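This pipeline is available off the shelf, e.g. in OpenCV's goodFeaturesToTrack, which with useHarrisDetector=False scores each pixel by the smaller eigenvalue of M (the Shi-Tomasi/KLT criterion) and performs non-maxima suppression via a minimum distance; the file name and parameter values below are illustrative:

```python
import cv2

gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
corners = cv2.goodFeaturesToTrack(gray,
                                  maxCorners=500,
                                  qualityLevel=0.01,   # relative to the best response
                                  minDistance=7,       # non-maxima suppression radius
                                  blockSize=5,         # window for the structure tensor
                                  useHarrisDetector=False)
```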

Point of interest detection: properties. Rotation invariant, but not scale invariant. Now we have extracted suitable points. How do we follow them across images? 30

Feature matching vs. tracking. Matching: extract features independently and then match them by comparing descriptors; often used when searching correspondences between two different images/views, or without prior pose information (e.g. initialisation, loop closing); problem: outliers. Tracking: extract features in the first image and then try to find the same feature again in the next view; often used when searching correspondences within a continuous image sequence, with a good pose prediction (frame-to-frame tracking); problem: drift. 31

Simple feature matching. For each corner in image 1, find the corner in image 2 that is most similar, and vice versa. Only compare geometrically compatible points. Keep mutual best matches. 32
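A small sketch of the mutual-best-match rule, assuming a precomputed similarity matrix (e.g. ZNCC scores) between the corners of the two images; the geometric compatibility check is omitted here:

```python
import numpy as np

def mutual_best_matches(score):
    """score[i, j]: similarity between corner i of image 1 and corner j of image 2."""
    best_12 = np.argmax(score, axis=1)   # best corner in image 2 for each corner in image 1
    best_21 = np.argmax(score, axis=0)   # best corner in image 1 for each corner in image 2
    return [(i, j) for i, j in enumerate(best_12) if best_21[j] == i]
```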

Simple feature matching: example. (Figure: a table of similarity scores between the numbered corners of two images; each corner is matched to its highest-scoring counterpart.) What are the problems? Non-distinctive features, perspective distortions and illumination changes. 33

Simple feature matching. Comparing image regions I(x,y) and J(x,y): compare intensities pixel by pixel. Dissimilarity measure: the sum of squared differences, SSD = Σ_{x,y} ( I(x,y) − J(x,y) )². 34

Simple feature matching. Comparing image regions I(x,y) and J(x,y): compare intensities pixel by pixel. Similarity measure: the zero-mean normalised cross correlation, ZNCC = Σ_{x,y} (I(x,y) − Ī)(J(x,y) − J̄) / ( sqrt(Σ (I − Ī)²) · sqrt(Σ (J − J̄)²) ). What does this compute? 35
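Both measures are easy to write down for two equally sized patches; a minimal NumPy sketch (the ZNCC denominator becomes zero for perfectly flat patches, which is ignored here):

```python
import numpy as np

def ssd(I, J):
    """Sum of squared differences (dissimilarity) between two patches."""
    d = I.astype(np.float64) - J.astype(np.float64)
    return np.sum(d * d)

def zncc(I, J):
    """Zero-mean normalised cross correlation (similarity) between two patches."""
    I = I.astype(np.float64) - I.mean()
    J = J.astype(np.float64) - J.mean()
    return np.sum(I * J) / (np.sqrt(np.sum(I * I)) * np.sqrt(np.sum(J * J)))
```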

Feature tracking. Direct registration/tracking of features in their local neighbourhood. Simple method, block matching: for each corner at (x, y) in frame A, find the displacement (u, v) in frame B by trying out all positions in a block of fixed size around the previous position and keeping the best fit, i.e. the position (x+u, y+v). 36
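A naive sketch of block matching with the SSD as the fit criterion; the patch half-size and search radius are illustrative, and the corner is assumed to lie far enough from the image border of frame A:

```python
import numpy as np

def block_match(A, B, x, y, half=7, radius=15):
    """Find the SSD-best displacement (u, v) of the patch around (x, y) from A in B."""
    patch = A[y-half:y+half+1, x-half:x+half+1].astype(np.float64)
    best, best_uv = np.inf, (0, 0)
    for v in range(-radius, radius + 1):
        for u in range(-radius, radius + 1):
            cand = B[y+v-half:y+v+half+1, x+u-half:x+u+half+1].astype(np.float64)
            if cand.shape != patch.shape:
                continue  # candidate window falls outside frame B
            cost = np.sum((patch - cand) ** 2)
            if cost < best:
                best, best_uv = cost, (u, v)
    return best_uv
```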

Optical flow. Alternative method: the Kanade-Lucas tracker (KLT). Iterative minimisation of the SSD between the intensity of the previous image and the intensity of the transformed (warped) current image, where the warp is a model with parameters p: E(p) = Σ_x ( I(x) − J(w(x; p)) )². 37

Optical flow. Method: Gauss-Newton-like minimisation. Set the derivative with respect to the parameters p to zero, take a first order Taylor expansion of J(w(x; p)), and solve the resulting linear equation system iteratively: recompute the expansion and solve again until the change in p is small enough. A nice derivation is given in: [J. Y. Bouguet. Pyramidal Implementation of the Lucas-Kanade Feature Tracker: Description of the Algorithm. Technical report, Intel Corporation, Microprocessor Research Labs, 2002]. 38

Optical flow: translational model. Assumption: small feature displacements. Estimate a pure translation: w(x, y; u, v) = (x+u, y+v). Problems (as before): big displacements, perspective distortions, illumination changes, and drift! 39

Optical flow: multi-scale extension. The registration may not converge if the feature displacements are big (little or no overlap). Solution: coarse-to-fine registration on an image pyramid. 40
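This coarse-to-fine KLT registration is what OpenCV's pyramidal Lucas-Kanade implementation provides; a usage sketch with placeholder file names and illustrative parameters:

```python
import cv2

prev_gray = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)
next_gray = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)
prev_pts = cv2.goodFeaturesToTrack(prev_gray, 300, 0.01, 7)  # points of interest in frame A

# Pyramidal Lucas-Kanade: coarse-to-fine registration over 3 pyramid levels
next_pts, status, err = cv2.calcOpticalFlowPyrLK(
    prev_gray, next_gray, prev_pts, None,
    winSize=(21, 21), maxLevel=3)

tracked = next_pts[status.ravel() == 1]  # keep only successfully tracked points
```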

Optical flow: affine model. Model and estimate affine geometric and photometric transformations: an affine warp of the patch plus a contrast factor and a brightness offset. Advantages: handles affine distortions and affine illumination changes. Problem: converges only with a good initial estimate. 41

Optical flow tracking 42

Advanced optical flow algorithm. Usually a two-stage approach. Stage 1: estimate a pure translation from frame to frame (coarse to fine). Stage 2: estimate the affine transformation and illumination parameters with respect to the initial feature patch (using the previous estimate as initial guess). This removes drift! 43

Wide baseline matching. More about this in the next but one lecture. 44

References. Marker tracking: H. Kato and M. Billinghurst. Marker Tracking and HMD Calibration for a Video-based Augmented Reality Conferencing System. In International Workshop on Augmented Reality, pages 85-94, San Francisco, USA, October 1999. Optical flow: B. Lucas and T. Kanade. An Iterative Image Registration Technique with an Application to Stereo Vision. In International Joint Conference on Artificial Intelligence (IJCAI), pages 674-679, April 1981. J. Y. Bouguet. Pyramidal Implementation of the Lucas-Kanade Feature Tracker: Description of the Algorithm. Technical report, Intel Corporation, Microprocessor Research Labs, 2002. T. Zinßer, C. Gräßl, and H. Niemann. Efficient Feature Tracking for Long Video Sequences. In Deutsche Arbeitsgemeinschaft Mustererkennung (DAGM), pages 326-333, Tübingen, Germany, August 2004. 45

Source code ARToolKit http://www.hitl.washington.edu/artoolkit/ (marker tracking) OpenCV http://sourceforge.net/projects/opencvlibrary/ 46