3D Modeling using Multiple Images: Exam, January 2008


All documents are allowed. Answers should be justified. The different sections below are independent.

1 3D Reconstruction: A Robust Approach

Consider a set of n images whose projection matrices, i.e. the intrinsic and extrinsic parameters, are all known. Consider points q_1, ..., q_n, where q_i is a point in the i-th image. These n points are supposed to match one another, i.e. they are supposed to be projections of the same 3D point. However, there may be outliers among them. The goal of this question is to reconstruct the 3D point. Sketch an approach for doing this that is robust to these outliers. It is not required to give formulas, but all steps of the approach should be clearly described (if a step corresponds exactly to a method described in the course notes, it is enough to refer to it).
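For illustration only, a minimal sketch of one possible robust scheme, assuming a RANSAC loop over pairwise triangulations; the projection matrices are taken to be 3x4 NumPy arrays and the q_i pixel coordinates. The function names and the threshold are placeholders chosen here, not notation from the course notes.

```python
import numpy as np

def triangulate_dlt(Ps, qs):
    """Linear (DLT) triangulation of one 3D point from >= 2 views.
    Ps: list of 3x4 projection matrices, qs: list of (x, y) pixel points."""
    A = []
    for P, (x, y) in zip(Ps, qs):
        A.append(x * P[2] - P[0])
        A.append(y * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1]
    return X / X[3]                      # homogeneous point, last coordinate = 1

def reproj_error(P, X, q):
    """Distance (pixels) between the reprojection of X and the measured point q."""
    x = P @ X
    return np.linalg.norm(x[:2] / x[2] - np.asarray(q))

def robust_point(Ps, qs, n_iter=200, thresh=2.0, seed=0):
    """RANSAC: triangulate from random pairs of views, keep the hypothesis with
    the largest inlier set, then re-triangulate from all inliers."""
    rng = np.random.default_rng(seed)
    n, best_inliers = len(Ps), []
    for _ in range(n_iter):
        i, j = rng.choice(n, size=2, replace=False)
        X = triangulate_dlt([Ps[i], Ps[j]], [qs[i], qs[j]])
        inliers = [k for k in range(n) if reproj_error(Ps[k], X, qs[k]) < thresh]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return triangulate_dlt([Ps[k] for k in best_inliers],
                           [qs[k] for k in best_inliers]), best_inliers
```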

2 3D Reconstruction from Two Images: An Alternative Method

During the lecture, a 3D reconstruction method was explained. The goal of this question is to develop an alternative method for the case of two images. Suppose that the intrinsic and extrinsic parameters of the two images are known. Let q_1 and q_2 be matching points in the two images. As always, these points are supposed to be noisy, i.e. they do not correspond to the exact projections of the original 3D point. For the 3D reconstruction method to be developed, we propose to impose that the 3D point Q lies on the line of sight associated with point q_1. The point Q can thus be parameterized by a single variable, say λ. Write down this parameterization. Describe how to compute λ, using the point q_2 in the second image and the knowledge of the intrinsic and extrinsic parameters.
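As a hedged illustration of the setup, under one assumed convention (q ~ K [R | t] Q): the line of sight of q_1 is Q(λ) = C_1 + λ d_1, with C_1 the centre of the first camera, and λ can then be chosen so that Q(λ) reprojects as close as possible to q_2. The sketch below uses a naive 1D search over λ for readability; a closed-form solution of the same criterion is what the question actually asks for.

```python
import numpy as np

def line_of_sight(K1, R1, t1, q1):
    """Camera centre C1 and viewing direction d1 of pixel q1, in world
    coordinates, assuming the projection model q ~ K1 [R1 | t1] Q."""
    C1 = -R1.T @ t1
    d1 = R1.T @ np.linalg.inv(K1) @ np.array([q1[0], q1[1], 1.0])
    return C1, d1 / np.linalg.norm(d1)

def solve_lambda(C1, d1, P2, q2, lambdas=np.linspace(0.1, 100.0, 10000)):
    """Return the lambda whose point Q(lambda) = C1 + lambda * d1 reprojects
    closest to q2 in the second image (simple 1D search for illustration)."""
    best_lam, best_err = None, np.inf
    for lam in lambdas:
        Q = np.append(C1 + lam * d1, 1.0)    # homogeneous 3D point
        x = P2 @ Q
        err = np.linalg.norm(x[:2] / x[2] - np.asarray(q2))
        if err < best_err:
            best_lam, best_err = lam, err
    return best_lam
```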

3 Point Matching for 3 Images

During the lecture, two key concepts for matching pairs of images were explained: the use of a geometric constraint given by the epipolar geometry, and the comparison of image windows in order to detect matching points (by computing e.g. a correlation score). One of the problems when using only 2 images for matching is the possibility of obtaining a wrong solution: for a point in the first image, there may exist several points on the epipolar line in the second image that look similar to it. This problem may be avoided by using more than 2 images. Here, we consider the case of 3 images. We suppose that the epipolar geometry is known for each of the 3 pairs of images that can be formed. The goal is to find, for a point q_1 in the first image, the matching points q_2 and q_3 in the other two images. Sketch a method for doing this. The method should benefit from the fact that 3 images are used, in order to avoid the above-mentioned problem of the 2-image case. Advice: it is not required to develop formulas; it is enough to describe the method in words and, if applicable, to refer to the course notes for details they already contain.
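For illustration only, one possible scheme under assumed conventions (fundamental matrices F_13 and F_23 such that the epipolar line in image 3 of a point q in image 1 or 2 is F q, and a placeholder correlation function comparing image windows): candidates for q_2 are taken on the epipolar line of q_1 in image 2, and each candidate is verified in image 3 at the intersection of the epipolar lines of q_1 and q_2.

```python
import numpy as np

def epipolar_line(F, q):
    """Epipolar line l = F q (homogeneous 3-vector) of the point q = (x, y)."""
    return F @ np.array([q[0], q[1], 1.0])

def transfer_point(F13, F23, q1, q2):
    """Predicted q3: intersection (cross product) of the epipolar lines of
    q1 and q2 in image 3."""
    p = np.cross(epipolar_line(F13, q1), epipolar_line(F23, q2))
    return p[:2] / p[2]

def match_three_views(q1, candidates2, F13, F23, correlation, corr_thresh=0.8):
    """candidates2: points on the epipolar line of q1 in image 2 that already
    correlate well with q1.  Keep the candidate whose transferred point in
    image 3 also correlates well with q1 (correlation is a placeholder that
    compares image windows, e.g. by ZNCC)."""
    best, best_score = None, -np.inf
    for q2 in candidates2:
        q3 = transfer_point(F13, F23, q1, q2)
        score = correlation(q1, q3)
        if score > corr_thresh and score > best_score:
            best, best_score = (q2, q3), score
    return best
```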

4 Question Linked to Image Mosaics

Let us place ourselves in the same setting as the one considered for the generation of image mosaics: a camera acquires images while rotating about its optical center. Consider the following special case. The camera takes two images. Between the two images, it carries out a rotation about the X axis:

R = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\gamma & -\sin\gamma \\ 0 & \sin\gamma & \cos\gamma \end{pmatrix}

As for the intrinsic parameters of the camera, we suppose that they correspond to a simplified calibration matrix whose only unknown is the focal length α:

K = \begin{pmatrix} \alpha & 0 & 0 \\ 0 & \alpha & 0 \\ 0 & 0 & 1 \end{pmatrix}

During the lecture, it was shown that there exists a projective transformation (a homography) linking the two images, and that it can be estimated from 4 or more point matches. The goal of this question is to estimate the homography from a single point match. To do so, write down the homography H in terms of the two unknowns, the rotation angle γ and the focal length α. Then develop a method (formulas) for computing these two unknowns from a single point match (q_1 in the first image and q_2 in the second one).

5 Projective Geometry

1. What are the benefits of projective geometry with respect to Euclidean geometry?
2. How do we go from a projective space to an affine space?
3. Explain the duality principle in a projective space.
4. What is an invariant for a group of transformations, and why do affine transformations preserve parallelism?
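As a worked illustration of items 2 and 4, under one standard convention (assumed here, not necessarily the expected formulation): a projective point with homogeneous coordinates (X : Y : W), W ≠ 0, is mapped to the affine point

\[
(X : Y : W) \;\longmapsto\; \left(\frac{X}{W},\, \frac{Y}{W}\right),
\]

i.e. the affine plane is recovered from the projective plane by removing the line at infinity W = 0 and dehomogenizing. An affine map x \mapsto A x + b sends a direction d (a point at infinity) to A d; two parallel lines share the same point at infinity, so their images share the point at infinity A d and remain parallel.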

6 Surface Modeling

We are given a 3D modeling system composed of n cameras. The cameras are fixed and calibrated, and we consider the 3D modeling process using information extracted from the images.

We first assume that the silhouettes S_i, i ∈ [1..n], of an observed object, and only the silhouettes, are available:

1. What kind of model can we obtain?
2. When n goes to infinity, what does the model become?
3. Propose an algorithm that computes this model based on a discretization of space into voxels (the algorithm only needs to be sketched; an illustrative sketch is also given after question 7 below).
4. The object under consideration is not completely observed by all cameras. Modify your algorithm, if necessary, so that it also handles this situation.

We now consider photometric information as well:

1. Can we improve the models computed previously, which used geometric information only? If yes, how?
2. In the case of a Lambertian surface, and if we assume the calibration to be exact, can we obtain a model of the observed object that is exact up to the discretization?
3. Voxel-based models present a tradeoff between precision and complexity. Explain how the grid-based algorithm could be modified in order to improve this tradeoff.

Images are acquired, without any loss, by the cameras at a frequency f. Do the cameras necessarily need to be synchronized during the acquisition process?

7 Motion Modeling

Modeling motion consists in obtaining, from images, information about the motion of a moving object in the scene.

1. What are the main differences with shape modeling, and what is the interest of such information?
2. Explain (sketch) how the Vicon system works and identify the principal steps required, once images are acquired, to estimate motion with such a system.
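The illustrative sketch referred to in question 6.3, given under assumed inputs (a regular voxel grid over a known bounding box, 3x4 projection matrices and binary silhouette masks); visual_hull and its parameters are hypothetical names, not notation from the course, and this is one possible voxel-based construction rather than the expected answer.

```python
import numpy as np

def visual_hull(Ps, silhouettes, bbox_min, bbox_max, resolution=64):
    """Voxel-based visual hull: a voxel is kept only if its centre projects
    inside every silhouette.  Ps: list of 3x4 projection matrices;
    silhouettes: list of binary masks (H x W); bbox_min, bbox_max: opposite
    corners of the working volume, e.g. (x_min, y_min, z_min)."""
    axes = [np.linspace(lo, hi, resolution) for lo, hi in zip(bbox_min, bbox_max)]
    occupied = np.ones((resolution,) * 3, dtype=bool)

    for P, S in zip(Ps, silhouettes):
        h, w = S.shape
        for i, x in enumerate(axes[0]):
            for j, y in enumerate(axes[1]):
                for k, z in enumerate(axes[2]):
                    q = P @ np.array([x, y, z, 1.0])
                    u, v = int(round(q[0] / q[2])), int(round(q[1] / q[2]))
                    inside = 0 <= u < w and 0 <= v < h and bool(S[v, u])
                    # Carve the voxel as soon as one camera sees empty space.
                    # For question 6.4 (partial visibility), a voxel projecting
                    # outside an image would instead be left untouched by that camera.
                    if not inside:
                        occupied[i, j, k] = False
    return occupied
```

The triple loop is kept explicit for readability; in practice all voxel centres would be projected at once with a single matrix product per camera.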