Refinement of scene depth from stereo camera ego-motion parameters

Piotr Skulimowski, Pawel Strumillo

An algorithm for the refinement of disparity (depth) maps from stereoscopic sequences is proposed. The method is based on estimation of the ego-motion parameters of a camera system and frame-by-frame prediction of the 3D scene. Sub-pixel resolution disparity maps were obtained and disparity computations are not required for every single frame.

Introduction: Recovering 3-dimensional scene structure is one of the most important tasks in computer vision. Applications of the technology capable of seeing in 3 dimensions range from virtual reality and robot guidance systems, photogrammetry (remote sensing) and endoscopic surgery to electronic travel aids for the blind. Major scene depth sensing technologies are the active methods, such as radar, lidar and ultrasound scanners and structured light projecting systems, and the passive methods that are predominantly based on stereoscopy [1]. The passive methods are particularly attractive as being the least interfering with the environment. In this letter we propose a computational method for improving depth estimation accuracy for mobile stereoscopic systems under the constraint of a short baseline between the camera optical centres.

Stereovision basics: Computational stereo is an imaging method in which at least two planar projections of a scene are analysed for reconstructing its 3-D structure [1]. For the two-view case the cue used for identifying the depth of a unique scene point is the amount of shift (disparity) between the image coordinates at which the scene point is projected onto the image planes of the left and right cameras. For a non-verged camera set-up the disparity is given by $d = x_L - x_R = n\Delta x$, where $x_L$, $x_R$ are the row coordinates of the scene point images in the left and right digital cameras accordingly, $\Delta x$ is the horizontal pixel size and $n$ is the amount of shift in pixel units. Then, the depth of a scene point can be computed from $Z = fB/d$, where $f$ is the focal length of the cameras and $B$ is the baseline between the camera optical centres, e.g. for $B = 8$ cm, $f = 3.5$ mm and $d = n\Delta x = 10 \cdot 8.8\,\mu\mathrm{m}$ the depth is $Z = 3.18$ m. An absolute depth error for the given stereo rig can be estimated from $\Delta Z = Z^2 \Delta d / (fB)$. This error is inversely proportional to the baseline between the cameras. Hence, the trade-off between system mobility (short baseline) and depth accuracy needs to be resolved. For the stereo rig considered in the given example, for pixel shifts $n = 0, 1, 2, 3$ the estimated depths are $\infty$, 31.8 m, 15.9 m and 10.6 m, correspondingly. See Fig. 2 illustrating the problem of poor depth resolution due to the short stereo baseline. Sub-pixel depth resolution can be achieved by using interpolation methods which, however, require additional computations and can give unsatisfactory results for texture-less scene regions [3].
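To make these relations concrete, the short sketch below (an illustration, not part of the original letter; the function and variable names are assumptions) evaluates $Z = fB/d$ and $\Delta Z = Z^2\Delta d/(fB)$ for the example rig:

```python
# Minimal sketch (illustration only): depth and depth error for the example
# stereo rig with B = 8 cm, f = 3.5 mm and horizontal pixel size 8.8 um.
B = 0.08         # baseline between camera optical centres [m]
f = 0.0035       # focal length [m]
dx = 8.8e-6      # horizontal pixel size [m]

def depth(n):
    """Depth Z = f*B / (n*dx) for a disparity of n pixels (d = n*dx)."""
    return f * B / (n * dx)

def depth_error(Z, dd=dx):
    """Absolute depth error dZ = Z**2 * dd / (f*B) for a disparity error dd."""
    return Z * Z * dd / (f * B)

print(round(depth(10), 2))        # ~3.18 m, as in the example above
for n in (1, 2, 3):               # coarse depth levels of the short-baseline rig
    print(n, round(depth(n), 1))  # 31.8 m, 15.9 m, 10.6 m
```

The wide spacing between the printed depth levels illustrates why sub-pixel disparity resolution matters for distant objects with such a short baseline.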

Ego-motion estimation: Let $X, Y, Z$ be a right-handed coordinate system attached to the stereo rig, which is assumed to move smoothly in a static environment. The 6DoF (degrees of freedom) ego-motion parameters of the rig are defined by two vectors: the translational motion vector $\mathbf{t} = [U\ V\ W]^T$ and the rotational motion vector $\boldsymbol{\omega} = [\omega_X\ \omega_Y\ \omega_Z]^T$. Let a feature point of the environment have instantaneous coordinates $P = (X, Y, Z)$. Then the velocity $\mathbf{V} = [\dot{X}\ \dot{Y}\ \dot{Z}]^T$ of this point is given by [3]:

$$\dot{X} = -U - \omega_Y Z + \omega_Z Y, \quad \dot{Y} = -V - \omega_Z X + \omega_X Z, \quad \dot{Z} = -W - \omega_X Y + \omega_Y X \qquad (1)$$
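Equation (1) is the componentwise form of $\mathbf{V} = -\mathbf{t} - \boldsymbol{\omega} \times P$ for a static point observed from the moving rig. A minimal sketch of this relation (function and variable names are illustrative, not from the letter):

```python
import numpy as np

def point_velocity(P, t, omega):
    """Equation (1): velocity of a static scene point P = (X, Y, Z) in the frame
    of a rig translating with t = (U, V, W) and rotating with angular velocity omega."""
    P, t, omega = (np.asarray(a, dtype=float) for a in (P, t, omega))
    return -t - np.cross(omega, P)

# Pure forward motion of the rig (W > 0) makes every point approach the camera:
print(point_velocity([0.5, 0.2, 3.0], [0.0, 0.0, 0.1], [0.0, 0.0, 0.0]))
# -> [ 0.   0.  -0.1]
```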

Assuming perspective projection of the feature point $P = (X, Y, Z)$ onto the image point $p = (x, y)$, the instantaneous velocity $(u, v)$ of this point in the image plane (be it the right image) can be obtained from [4,5]:

$$u = \frac{-Uf + xW}{Z} + \omega_X \frac{xy}{f} - \omega_Y\left(f + \frac{x^2}{f}\right) + \omega_Z\, y, \quad v = \frac{-Vf + yW}{Z} + \omega_X\left(f + \frac{y^2}{f}\right) - \omega_Y \frac{xy}{f} - \omega_Z\, x \qquad (2)$$

Equation (2) relates the motion of scene points (relative to the rig) to the 2D motion vector field in the image plane. By detecting at least 3 well-defined feature points of a scene and tracking them frame-to-frame, the 6DoF stereo rig ego-motion parameters can be iteratively computed. A fast and robust gradient-based method proposed in [6] is used for detecting the feature points (see Fig. 1) and a local block matching technique is used for tracking the features. The ego-motion parameters are computed by solving the normal equations obtained from the least squares cost function [7]:

$$E = \sum\left[\left(u - (x' - x)\right)^2 + \left(v - (y' - y)\right)^2\right] \qquad (3)$$

where $x, y$ and $x', y'$ are the feature point coordinates identified in two consecutive image frames and $u, v$ are substituted from (2).
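Since (2) is linear in the six motion parameters once the depth $Z$ of each tracked feature is taken from the current disparity map, cost (3) can be minimised by a linear least-squares solve. A hedged sketch under that assumption (helper names are hypothetical; the letter solves the normal equations, here `numpy.linalg.lstsq` is used instead):

```python
import numpy as np

def flow_rows(x, y, Z, f):
    """Coefficients of eq. (2): [u, v]^T = M @ (U, V, W, wx, wy, wz)."""
    return np.array([
        [-f / Z, 0.0, x / Z,  x * y / f,     -(f + x * x / f),  y],
        [0.0, -f / Z, y / Z,  f + y * y / f, -x * y / f,       -x],
    ])

def estimate_ego_motion(pts, pts_next, depths, f):
    """Minimise eq. (3): squared difference between the predicted flow (u, v)
    and the measured feature displacements (x' - x, y' - y)."""
    A, b = [], []
    for (x, y), (xn, yn), Z in zip(pts, pts_next, depths):
        A.append(flow_rows(x, y, Z, f))
        b.extend([xn - x, yn - y])
    m, *_ = np.linalg.lstsq(np.vstack(A), np.array(b), rcond=None)
    return m  # (U, V, W, wx, wy, wz)
```

With at least three tracked points the stacked system has six or more equations for the six unknowns, matching the minimum of 3 well-defined features stated above.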

Disparity refinement algorithm: The block diagram in Fig. 3 explains the concept of the iterative computations leading to disparity refinement. First, for an initial frame $t$ a dense disparity map is calculated and the ego-motion parameters are estimated from the motion vectors $(u, v)$ of the tracked feature points. Then, again using (2), the next $t+1$ image frame is predicted. Finally, a correction of the depth value for each point of the disparity map is computed from (1). To avoid propagation of errors (e.g. due to occluded objects) the estimated disparity map needs to be verified against the disparity map computed directly from the stereo algorithm. This verification, however, needs to be done for every $i$-th frame only. Hence, considerable savings in computing demand can be achieved. A simple and effective procedure for eliminating error propagation is implemented. If the disparity $d_e^t$ estimated for frame $t$ differs by more than $0.5\Delta x$ from the value $d_s^t$ calculated for the same image frame pair by the stereo algorithm, the estimated disparity value is corrected according to the formula $d_e^{t+1} = d_e^t + \alpha\,(d_s^t - d_e^t)$, where $\alpha < 1$ is a constant value (experimentally set to 0.4, for which efficient reduction of error propagation was obtained on the tested sequences).
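A short sketch of this error-propagation check (the array names and the vectorised per-pixel form are assumptions, not from the letter):

```python
import numpy as np

def verify_disparity(d_est, d_stereo, threshold=0.5, alpha=0.4):
    """Every i-th frame: where the predicted disparity d_est differs from the
    stereo-computed disparity d_stereo by more than 0.5 pixel, apply
    d_est <- d_est + alpha * (d_stereo - d_est) to stop error propagation."""
    d_est = np.asarray(d_est, dtype=float).copy()
    d_stereo = np.asarray(d_stereo, dtype=float)
    mask = np.abs(d_est - d_stereo) > threshold
    d_est[mask] += alpha * (d_stereo[mask] - d_est[mask])
    return d_est
```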

Experimental results: The proposed method for depth refinement was tested on image sequences taken by an in-house stereo rig with the geometric parameters used in the earlier example. The system was mounted onto a tripod enabling 6DoF motion measurements. The tripod was fixed to a platform that was rolled along a 6 m long rail (see Fig. 1). Every captured image pair was corrected for geometric distortions and rectified before dense depth calculations were run using the stereo block matching technique [1]. The recorded sequences consisted of 60 frames taken at a rate of 7 frames/sec. Table 1 summarizes the results of the ego-motion estimations for the test sequences. A single frame taken from the refined disparity map sequence estimated according to the proposed algorithm is depicted in Fig. 4. Note the considerable improvement in the quality of the depth map versus the map that was calculated solely from the stereo-vision algorithm (Fig. 2). The contours of discrete depth layers visible in Fig. 2 have been smoothed out.

Conclusion: A procedure for the refinement of depth map sequences from camera ego-motion 6DoF parameters was proposed. Effectively, sub-pixel accuracy of 3D scene reconstruction has been achieved. An advantage of the algorithm is that the time-consuming stereovision matching algorithm does not need to be run for every single frame. Instead, depending on the application, the depth maps can be verified for every $i$-th frame only. The proposed depth refinement method is particularly suitable for mobile stereovision systems requiring a short camera baseline, e.g. for mini-robots.

Acknowledgments: This work has been supported by the Ministry of Education and Science of Poland grant No. R02 013 03 in years 2007-2010.

References

[1] Brown, M.Z., Burschka, D., and Hager, G.D.: 'Advances in computational stereo', IEEE Trans. Pattern Analysis and Machine Intelligence, 2003, 25, (8), pp. 993-1008
[2] Fusiello, A., Trucco, E., and Verri, A.: 'A compact algorithm for rectification of stereo pairs', Mach. Vis. Appl., 2000, 12, (1), pp. 16-22
[3] Szeliski, R., and Scharstein, D.: 'Symmetric sub-pixel stereo matching', 7th European Conference on Computer Vision, Copenhagen, Denmark, May 2002, vol. 2, pp. 525-540
[4] Bruss, A.R., and Horn, B.K.P.: 'Passive navigation', Computer Vision, Graphics, and Image Processing, 1983, 21, (1), pp. 3-20
[5] Duric, Z., and Rosenfeld, A.: 'Image sequence stabilization in real time', Real-Time Imaging, 1996, 2, pp. 271-284
[6] Lepetit, V., and Fua, P.: 'Towards recognizing feature points using classification trees', EPFL Technical Report IC/2004/74, 2004
[7] Skulimowski, P., and Strumiłło, P.: 'Refinement of disparity map sequences from stereo camera ego-motion parameters', ICSES 2006 International Conference on Signals and Electronic Systems, Łódź, Poland, September 17-20, 2006, pp. 379-382

Authors' affiliation: Piotr Skulimowski, Pawel Strumillo (Institute of Electronics, Technical University of Lodz, 211/215 Wolczanska Str., 90-924 Lodz, Poland), e-mail: piotr.skulimowski@p.lodz.pl

Fig. 1. The scene used for testing the depth refinement algorithm; note the scene feature points selected for tracking

Fig. 2. The depth map computed with pixel accuracy for the scene shown in Fig. 1 (brighter regions correspond to closer objects)

Fig. 3. Block diagram explaining the proposed method for ego-motion estimation and depth refinement

Fig. 4. A single frame taken from a sequence of refined depth maps obtained with the proposed refinement algorithm (same frame as in Fig. 2)

Table 1. Verification of ego-motion parameters estimation (rotation angles are given in degrees)