Feature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies

Feature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies M. Lourakis, S. Tzurbakis, A. Argyros, S. Orphanoudakis Computer Vision and Robotics Lab (CVRL) Institute of Computer Science (ICS) Foundation for Research and Technology Hellas (FORTH) Heraklion, Crete, Greece April 2002 CMP, Prague

Presentation outline: Problem statement, Overview, Method description, Experimental results, Discussion.

Problem statement Given some corresponding features in two stereo images, match them with features extracted from a second stereo pair that is captured at a distant location

Overview Matching is a fundamental problem in Computer Vision, appearing in different forms in tasks such as discrete motion estimation, feature-based stereo, object recognition, image registration, camera self-calibration and image-based rendering. Fundamental difficulty: feature matching has to rely on the similarity of photometric or geometric properties across different views of a scene, and these properties change considerably when the views are widely separated.

Existing approaches Some adopt a semi-automatic approach and assume a priori knowledge of geometric constraints that are satisfied by the different views, e.g. Georgis, Petrou, Kittler (IVC '98); Schmid and Zisserman (CVPR '97); Pritchett and Zisserman (ICCV '98); Faugeras and Robert (IJCV '96). Others exploit quantities that remain unchanged under general perspective projection and can be computed directly from the images (i.e. projective invariants); due to the lack of general-case view invariants, these make assumptions regarding the structure of the viewed scene, e.g. Meer, Lenz, Ramakrishna (IJCV '98); Lourakis, Halkidis, Orphanoudakis (IVC '00).

Our approach Assume that the viewed scene contains at least two planes. Use an existing technique (Lourakis, Halkidis, Orphanoudakis, IVC '00) to match the planes between disparate views. Match points and lines not belonging to the two planes using geometric constraints defined by the matched planes.

Our approach [block diagram] Stereo pair 1 (SP1) consists of images I1 and I2, stereo pair 2 (SP2) of images I3 and I4. (A) Compute intra-stereo feature correspondences within each pair. (B) Segment the dominant planes and compute the homographies H12, U12 (for SP1) and H34, U34 (for SP2). (C) Match planes between the stereo pairs and compute the homographies H23 and U23. (D) Match features between I2 and I3, using reprojection based on H23, U23 and geometric constraints.

(A) Computing intra-stereo feature correspondences: points. Point correspondences are established based on [1]. [1] Z. Zhang, R. Deriche, O. Faugeras, and Q.-T. Luong, "A Robust Technique for Matching Two Uncalibrated Images Through the Recovery of the Unknown Epipolar Geometry", Artificial Intelligence, vol. 78, pp. 87-119, 1995.
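
As a rough illustration of step (A), the sketch below uses a present-day stand-in (OpenCV ORB features, brute-force matching and RANSAC estimation of the fundamental matrix) rather than the correlation/relaxation technique of [1]; the function name and parameter values are illustrative only.

```python
import cv2
import numpy as np

def intra_stereo_point_matches(img1, img2):
    """Match points between the two images of a stereo pair and keep only the
    matches consistent with a single epipolar geometry (a stand-in for [1])."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(d1, d2)
    pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])
    # robustly estimate the fundamental matrix and reject outlying matches
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0, 0.99)
    inliers = mask.ravel().astype(bool)
    return pts1[inliers], pts2[inliers], F
```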

(A) Computing intra-stereo feature correspondences: lines. Line correspondences are established based on [2]. [2] M.I.A. Lourakis, "Establishing Straight Line Correspondence", Tech. Rep. 208, ICS/FORTH, Aug. 1997, available at ftp://ftp.ics.forth.gr/tech-reports/1997.

(B) Segmentation of planar surfaces A plane can be defined by a point and a line; the homographies of the planes defined by all possible pairs of lines and points are computed.

(B) Segmentation of planar surfaces Each of the induced homographies is used to predict the location of every feature of one image in the other image. Each feature votes for the homography that best predicts its location in the other image and is assumed to belong to the plane defined by that homography. The two planes receiving the largest and second largest numbers of votes are identified as the two most prominent ones. Their homographies are re-estimated with robust regression over the full sets of features assigned to them, based on [3], and, using these estimates as initial solutions, are refined with a non-linear method based on [4]. [3] M.I.A. Lourakis, S.T. Halkidis, and S.C. Orphanoudakis, "Matching Disparate Views of Planar Surfaces Using Projective Invariants", Image and Vision Computing, vol. 18, pp. 673-683, 2000. [4] Q.-T. Luong and O.D. Faugeras, "Determining the Fundamental Matrix with Planes: Instability and New Algorithms", in Proc. of CVPR '93, 1993, pp. 489-494.
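
A minimal sketch of the voting step just described, assuming the candidate homographies induced by the line/point pairs are already available as 3x3 arrays; the function name is illustrative, and the robust re-estimation of [3] and the non-linear refinement of [4] are omitted.

```python
import numpy as np

def vote_for_planes(candidate_Hs, pts1, pts2, n_planes=2):
    """Assign each matched point to the candidate plane homography that best
    predicts its position in the second image, then keep the n_planes
    homographies that collect the most votes.

    candidate_Hs : list of 3x3 arrays (homographies induced by line/point pairs)
    pts1, pts2   : Nx2 arrays of corresponding image points
    """
    N = len(pts1)
    pts1_h = np.hstack([pts1, np.ones((N, 1))])           # homogeneous coords, N x 3
    errors = np.empty((len(candidate_Hs), N))
    for k, H in enumerate(candidate_Hs):
        proj = pts1_h @ H.T                               # predicted points, N x 3
        proj = proj[:, :2] / proj[:, 2:3]                 # back to pixel coordinates
        errors[k] = np.linalg.norm(proj - pts2, axis=1)   # transfer error in pixels
    best = errors.argmin(axis=0)                          # each feature votes once
    votes = np.bincount(best, minlength=len(candidate_Hs))
    winners = votes.argsort()[::-1][:n_planes]            # most voted homographies
    # indices of the features assumed to lie on each winning plane
    members = [np.where(best == k)[0] for k in winners]
    return [candidate_Hs[k] for k in winners], members
```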

Our approach [block diagram recap: steps (A) and (B) have been completed within each stereo pair, yielding the intra-stereo correspondences and the plane homographies H12, U12 and H34, U34].

(C) Matching planes between disparate views Employ a randomized search scheme, guided by geometric constraints derived using the two-line-two-point projective invariant [3]. [3] M.I.A. Lourakis, S.T. Halkidis, and S.C. Orphanoudakis, "Matching Disparate Views of Planar Surfaces Using Projective Invariants", Image and Vision Computing, vol. 18, pp. 673-683, 2000.
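
For concreteness, the snippet below computes one standard two-line-two-point projective invariant of coplanar features; whether this exact form is the one used in [3] is an assumption of the sketch.

```python
import numpy as np

def two_line_two_point_invariant(l1, l2, p1, p2):
    """Projective invariant of two lines and two points lying on one plane.
    All arguments are homogeneous 3-vectors; under a plane homography H,
    points map as H p and lines as inv(H).T l, so every scale factor cancels
    and the ratio below is unchanged."""
    return (l1 @ p1) * (l2 @ p2) / ((l1 @ p2) * (l2 @ p1))

# quick check that the value survives a (non-singular) homography
H = np.array([[2., 0.3, 1.], [0.1, 1.5, -2.], [0.01, 0.02, 1.]])
p1, p2 = np.array([1., 2., 1.]), np.array([3., 1., 1.])
l1, l2 = np.array([1., 1., -2.]), np.array([0., 1., 1.])
mapped = two_line_two_point_invariant(np.linalg.inv(H).T @ l1,
                                      np.linalg.inv(H).T @ l2, H @ p1, H @ p2)
assert np.isclose(mapped, two_line_two_point_invariant(l1, l2, p1, p2))
```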

(C) Matching planes between disparate views Result of homography-based warping of second image onto the first:

Our approach [block diagram recap: steps (A) through (C) have been completed; the inter-pair homographies H23 and U23 now link the planes of the two stereo pairs].

(D) Match features not belonging to the planes, between stereo pairs Given two planes π1 and π2, a point O lying on neither plane can be represented as the intersection of lines whose endpoints P, Q lie on the two planes. Thus, if we know how the planes and their points are transferred to distant views, we can transfer all 3D points as well.
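
A minimal sketch of the 2D transfer step described above, assuming that for the point to be transferred two image lines through it are known in I2, each joining a point on π1 (p_a, p_b, mapped by H23) to a point on π2 (q_a, q_b, mapped by U23); the transferred point is recovered in I3 as the intersection of the two transferred lines. Names and inputs are illustrative.

```python
import numpy as np

def to_h(x):
    """2D pixel coordinates -> homogeneous 3-vector."""
    return np.array([x[0], x[1], 1.0])

def transfer_off_plane_point(H23, U23, p_a, q_a, p_b, q_b):
    """Transfer a point lying on neither plane from image I2 to image I3."""
    # map the on-plane endpoints with the homography of their own plane
    pa3, pb3 = H23 @ to_h(p_a), H23 @ to_h(p_b)
    qa3, qb3 = U23 @ to_h(q_a), U23 @ to_h(q_b)
    # image lines in I3 through the transferred endpoints (cross product of points)
    line_a = np.cross(pa3, qa3)
    line_b = np.cross(pb3, qb3)
    # the transferred point is the intersection of the two lines
    o3 = np.cross(line_a, line_b)
    return o3[:2] / o3[2]
```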

(D) Match features not belonging to the planes, between stereo pairs For lines, it suffices to transfer their endpoints; for robustness, additional points on each line can also be transferred.

(D) Match features not belonging to the planes, between stereo pairs: matching. After transferring features between stereo pairs, matching is performed by considering the proximity between transferred and extracted features. Feature proximity is quantified by the Euclidean distance between the normalized homogeneous vectors representing the transferred and the extracted features.
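
A small sketch of the proximity test, assuming transferred and extracted features are both given as homogeneous 3-vectors; the distance threshold and the greedy nearest-neighbour assignment are illustrative choices, not necessarily those of the original implementation.

```python
import numpy as np

def normalize_h(v):
    """Scale a homogeneous 3-vector to unit norm with a fixed sign convention."""
    v = v / np.linalg.norm(v)
    return v if v[np.argmax(np.abs(v))] >= 0 else -v

def match_by_proximity(transferred, extracted, max_dist=0.05):
    """Match each transferred feature to the closest extracted one, accepting
    the match only if the distance between the unit-norm vectors is small."""
    T = np.array([normalize_h(np.asarray(v, float)) for v in transferred])
    E = np.array([normalize_h(np.asarray(v, float)) for v in extracted])
    D = np.linalg.norm(T[:, None, :] - E[None, :, :], axis=2)  # pairwise distances
    matches = []
    for i, row in enumerate(D):
        j = int(row.argmin())
        if row[j] < max_dist:
            matches.append((i, j))
    return matches
```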

Experimental results: on the accuracy of plane segmentation. [Simulation setup sketch: camera 1.5 m above a planar floor, optical axis at 15° to the horizon; obstacle heights in the range [0.15 m, 2.0 m].]

Experimental results: on the accuracy of plane segmentation. Simulated scenario related to an obstacle detection task: a camera overlooks a planar floor from a height of 1.5 meters, with its optical axis at an angle of 15° with respect to the horizon. Simulated retina: 750×750 pixels, focal length: 700 pixels. The camera moves rigidly with a 3D translational velocity of (0.100, 0.181, 0.676) meters/frame. The synthetic scene consists of 300 3D points and 60 3D lines; 100 3D points and 20 3D lines are assumed to lie on the simulated 3D floor, and the heights of the 3D points not lying on the floor are assumed to be uniformly distributed in the range [0.15, 2.0] meters. The standard deviation of the Gaussian noise added to the retinal projections was varied between 0.0 and 2.0 pixels in steps of 0.2.

Experimental results: on the accuracy of plane segmentation

Experimental results (indoor scene) Input images

Experimental results (indoor scene) Intra stereo-pair matches

Experimental results (indoor scene) Inter-stereo pair matches and planes defined

Experimental results (indoor scene) 3D Reconstruction Top view Side view

Experimental results (outdoor scene) Input images

Experimental results (outdoor scene) Intra stereo-pair matches

Experimental results (outdoor scene) Inter-stereo pair matches and planes defined

Experimental results (outdoor scene) 3D Reconstruction Side view Top view

Sample quantitative results: average reprojection error (in pixels).
Indoor scene: proposed method 1.59, Faugeras & Robert (IJCV '96) 2.95.
Outdoor scene: proposed method 1.79, Faugeras & Robert (IJCV '96) 6.15.

Discussion Benefits of the approach: the method is fully automatic; it can match features in images captured from significantly different viewpoints; it exploits constraints arising from the structure of the scene, which are valid regardless of the viewpoints of the images; no camera calibration is required; multiple constraints are defined for each feature, increasing robustness by over-determining the solution; points and lines are handled in a unified manner. Disadvantage of the approach: it assumes the existence of at least two planes in the viewed scene.