Multibody Motion Estimation and Segmentation from Multiple Central Panoramic Views


Omid Shakernia    René Vidal    Shankar Sastry
Department of Electrical Engineering & Computer Sciences
University of California at Berkeley, Berkeley, CA 94720-1772
{omids,rvidal,sastry}@eecs.berkeley.edu

Abstract: We present an algorithm for infinitesimal motion estimation and segmentation from multiple central panoramic views. We first show that the central panoramic optical flows corresponding to independent motions lie in orthogonal ten-dimensional subspaces of a higher-dimensional linear space. We then propose a factorization-based technique that estimates the number of independent motions, the segmentation of the image measurements, and the motion of each object relative to the camera from a set of image points and their optical flows in multiple frames. Finally, we present experimental results on motion estimation and segmentation for a real image sequence with two independently moving mobile robots, and evaluate the performance of our algorithm by comparing the estimates with measurements gathered by the mobile robots.

I. INTRODUCTION

The panoramic field of view offered by omnidirectional cameras makes them ideal candidates for many vision-based mobile robot applications, such as autonomous navigation, localization, and formation control. A problem that is fundamental to most of these applications is multibody motion estimation and segmentation, which is the problem of estimating the number of independently moving objects in the scene; the segmentation of the objects from the background; and the relative motion between the camera and each one of the objects in the scene. Most of the previous work in omnidirectional vision (see Section I-B) has been concerned with the special case in which a static scene is observed by a moving camera, the so-called structure from motion problem. The case in which both the camera and multiple objects move is more challenging, because one needs to simultaneously estimate multiple motion
models without knowing which image measurements correspond to which moving object. Recent work has considered motion estimation and segmentation for orthographic and perspective cameras. To the best of our knowledge, our work is the first one to address this problem in the case of central panoramic cameras.

A. Contributions of this paper

In this paper, we present the first algorithm for motion estimation and segmentation from multiple central panoramic views. Our algorithm estimates the number of independent motions, the segmentation of the image data, and the motion of each object relative to the camera from a set of image points and their central panoramic optical flows in multiple frames. Our algorithm is based on a rank constraint on the optical flows generated by independently moving objects, which must lie in orthogonal ten-dimensional subspaces of a higher-dimensional linear space. We present experimental results on motion estimation and segmentation for a real image sequence with two independently moving mobile robots (see Fig. 1), and evaluate the performance of our algorithm by comparing the estimates with measurements gathered by the mobile robots. In [17], we applied the results of this work to vision-based formation control of nonholonomic mobile robots.

Fig. 1. Motion segmentation for two mobile robots based on their omnidirectional optical flows.

B. Previous Work

Central panoramic cameras are realizations of omnidirectional systems that combine a mirror and a lens and have a unique effective focal point. In [1], an entire class of omnidirectional cameras containing a single effective focal point is derived. A single effective focal point is necessary for the existence of an epipolar geometry that is independent of the scene structure [11], making it possible to extend many results from perspective projection to the omnidirectional case. The problem of estimating the 3D motion of a moving central panoramic camera imaging a single static object

has received a lot of attention over the past few years. Researchers have generalized many two-view structure from motion algorithms from perspective projection to central panoramic projection, both in the case of discrete [5] and differential motion [7], [13]. In [7], [13], the image velocity vectors are mapped to a sphere using the Jacobian of the transformation between the projection model of the camera and spherical projection. Once the image velocities are on the sphere, one can apply well-known ego-motion algorithms for spherical projection. In [10], we proposed the first algorithm for motion estimation from multiple central panoramic views. Our algorithm does not need to map the image data onto the sphere, and is based on a rank constraint on the central panoramic optical flows which naturally generalizes the well-known rank constraints for orthographic [12], and affine and paraperspective [9] cameras.

The problem of estimating the 3D motion of multiple moving objects observed by a moving camera is, on the other hand, only partially understood. Costeira and Kanade [3] proposed a factorization method based on the fact that, under orthographic projection, discrete image measurements lie in a low-dimensional linear variety. Recently, Vidal et al. [18] (see also [8]) proposed a factorization algorithm for infinitesimal motion and perspective cameras based on the fact that independent motions lie in orthogonal subspaces of a higher-dimensional space. The case of discrete image measurements was recently addressed by Vidal et al. [15], who proposed a linear algebraic algorithm based on the so-called multibody epipolar constraint between two perspective views of a scene. To the best of our knowledge, there is no work on motion segmentation from two or more central panoramic views, neither in the discrete nor in the continuous case.

Paper Outline: In Section II we describe the projection model for central panoramic cameras and derive the optical flow equations. In Section III we present an algorithm for
motion estimation and segmentation of multiple independently moving objects from multiple central panoramic views of a scene. In Section IV we specialize the algorithm to the case of planar motion in the X-Y plane. In Section V we present experimental results evaluating the performance of the algorithm, and we conclude in Section VI.

II. CENTRAL PANORAMIC IMAGING

In this section, we describe the projection model for a central panoramic camera and derive the central panoramic optical flow equations (see [10] for further details).

A. Projection Model

Catadioptric cameras are realizations of omnidirectional systems which combine a curved mirror and a lens. Examples of catadioptric cameras are a parabolic mirror in front of an orthographic lens and a hyperbolic mirror in front of a perspective lens. Camera systems with a unique effective focal point are called central panoramic cameras. It was shown in [4] that all central panoramic cameras can be modeled by a mapping of a 3D point onto a sphere followed by a projection onto the image plane from a point on the optical axis of the camera. By varying two parameters (ξ, m), one can model all catadioptric cameras that have a single effective viewpoint. The particular values of (ξ, m) in terms of the shape parameters of different types of mirrors are listed in [2]. According to the unified projection model [4], the image point (x, y)^T of a 3D point q = (X, Y, Z)^T obtained through a central panoramic camera with parameters (ξ, m) is given by:

[x; y] = (ξ + m)/(Z + ξ sqrt(X^2 + Y^2 + Z^2)) [s_x X; s_y Y] + [c_x; c_y],   (1)

where 0 ≤ ξ ≤ 1, and m, s_x, s_y are scales that depend on the geometry of the mirror, the focal length, and the aspect ratio of the lens, and (c_x, c_y)^T is the mirror center. Since central panoramic cameras with ξ > 0 can be easily calibrated from a single image of three lines [6], [2], in this paper we assume that the camera has been calibrated, i.e. we know the parameters (s_x, s_y, c_x, c_y, ξ, m). Therefore, without loss of generality, we consider the following calibrated central
panoramic projection model:

[x; y] = 1/(Z + ξ sqrt(X^2 + Y^2 + Z^2)) [X; Y],   (2)

which is valid for Z < 0. It is direct to check that ξ = 0 corresponds to perspective projection, and ξ = 1 corresponds to paracatadioptric projection (a parabolic mirror in front of an orthographic lens).

B. Back-projection Rays

Since central panoramic cameras have a unique effective focal point, one can efficiently compute the back-projection ray (a ray from the optical center in the direction of the 3D point being imaged) associated with each image point. One may consider the central panoramic projection model in equation (2) as a simple projection onto a curved virtual retina whose shape depends on the parameter ξ. We thus define the back-projection ray as the lifting of the image point (x, y)^T onto this retina. That is, as shown in Fig. 2, given an image (x, y)^T of a 3D point q = (X, Y, Z)^T, we define the back-projection ray as:

b = (x, y, z)^T,   (3)

where z = f_ξ(x, y) is the height of the virtual retina. We construct f_ξ(x, y) in order to re-write the central panoramic projection model in (2) as a simple scaling:

λ b = q,   (4)
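The projection (2) and the scaling (4) can be checked numerically: project q to get (x, y), and lift by b = q/λ, whose first two components must reproduce the image point. A minimal sketch (the helper names and the sample point are ours, not from the paper):

```python
import math

def project(q, xi):
    """Calibrated central panoramic projection, eq. (2): (X, Y, Z) -> (x, y)."""
    X, Y, Z = q
    lam = Z + xi * math.sqrt(X**2 + Y**2 + Z**2)  # projective scale lambda
    return X / lam, Y / lam

def backproject(q, xi):
    """Back-projection ray b = q / lambda, so that lambda * b = q, eq. (4)."""
    X, Y, Z = q
    lam = Z + xi * math.sqrt(X**2 + Y**2 + Z**2)
    return (X / lam, Y / lam, Z / lam), lam

# Example: paracatadioptric camera (xi = 1) viewing q = (1, 0.5, -1).
q = (1.0, 0.5, -1.0)
b, lam = backproject(q, xi=1.0)
# First two components of b are the image point (x, y); the third is the
# height z = f_xi(x, y) of the virtual retina.
print(b, lam)   # (2.0, 1.0, -2.0), 0.5
```

Note that b differs from the unit-sphere lifting used in [7], [13]: the scale λ here absorbs both depth and mirror geometry.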

Fig. 2. The curved virtual retina z = f_ξ(x, y) in central panoramic projection, and the back-projection ray b = (x, y, z)^T associated with the image point (x, y)^T of a 3D point q = (X, Y, Z)^T.

where the unknown scale λ is lost in the projection. Using equations (4) and (2), it is direct to solve for the height of the virtual retina as:

z = f_ξ(x, y) = (1 − ξ^2 (x^2 + y^2)) / (1 + ξ sqrt(1 + (1 − ξ^2)(x^2 + y^2))).   (5)

Notice that in the case of paracatadioptric projection, ξ = 1 and the virtual retina is the parabola z = (1 − x^2 − y^2)/2.

C. Central Panoramic Optical Flow

If the camera undergoes a linear velocity v ∈ R^3 and an angular velocity ω ∈ R^3, then the coordinates of a static 3D point q ∈ R^3 evolve in the camera frame as q̇ = ω̂ q + v. Here, for ω ∈ R^3, ω̂ ∈ so(3) is the skew-symmetric matrix generating the cross product. Then, after differentiating equation (4), we obtain:

λ̇ b + λ ḃ = λ ω̂ b + v,   (6)

where λ = e_3^T q + ξ r, e_3 = (0, 0, 1)^T and r = ||q||. Now, using q = λ b, we get r = λ(1 − e_3^T b)/ξ. Also, it is clear that λ̇ = e_3^T (ω̂ q + v) + ξ q^T v / r. Thus, after replacing all these expressions into (6), we obtain the following expression for the velocity of the back-projection ray in terms of the relative 3D camera motion:

ḃ = −(I − b e_3^T) b̂ ω + (1/λ) (I − b e_3^T − (ξ^2 / (1 − e_3^T b)) b b^T) v.   (7)

Since the first two components of the back-projection ray are simply (x, y)^T, the first two rows of (7) give us the expression for the central panoramic optical flow (matrix rows separated by semicolons):

[ẋ; ẏ] = [−xy, z + x^2, −y; −(z + y^2), xy, x] ω + (1/λ) [1 − ρx^2, −ρxy, −x(1 + ρz); −ρxy, 1 − ρy^2, −y(1 + ρz)] v,   (8)

where λ = Z + ξ sqrt(X^2 + Y^2 + Z^2), z = f_ξ(x, y), and ρ = ξ^2/(1 − z). Notice that when ξ = 0, then ρ = 0 and (8) becomes the well-known equation for the optical flow of a perspective camera. When ξ = 1, then ρ = 2/(1 + x^2 + y^2), and (8) becomes the equation for the optical flow of a paracatadioptric camera [10].

III. MULTIBODY MOTION ESTIMATION AND SEGMENTATION

In this section, we derive a factorization method for estimating the motion of an unknown number n_o of independently moving objects from multiple central
panoramic views. To this end, let (x_i, y_i)^T, i = 1, ..., N, be a pixel in the zeroth frame associated with one of the moving objects, and let (ẋ_if, ẏ_if)^T be its optical flow in frame f = 1, ..., F, relative to the zeroth frame. We assume no prior segmentation of the image points, i.e. we do not know which image points correspond to which moving object. We define the optical flow matrix W ∈ R^(2N×F) as:

W = [ẋ_11 ... ẋ_1F; ...; ẋ_N1 ... ẋ_NF; ẏ_11 ... ẏ_1F; ...; ẏ_N1 ... ẏ_NF]

and the segmentation matrix W_s ∈ R^(N×2F) as:

W_s = [ẋ_11 ẏ_11 ... ẋ_1F ẏ_1F; ...; ẋ_N1 ẏ_N1 ... ẋ_NF ẏ_NF].   (10)

A. Single Body Motion Estimation

First consider the case where there is a single rigid body motion generating the optical flow, i.e. n_o = 1. Following equation (8), define the matrix of rotational flows Ψ and the matrix of translational flows Φ as:

Ψ = [{−xy} {z + x^2} {−y}; {−(z + y^2)} {xy} {x}] ∈ R^(2N×3),
Φ = [{(1 − ρx^2)/λ} {−ρxy/λ} {−x(1 + ρz)/λ}; {−ρxy/λ} {(1 − ρy^2)/λ} {−y(1 + ρz)/λ}] ∈ R^(2N×3),   (9)

where, e.g., {xy} = (x_1 y_1, ..., x_N y_N)^T ∈ R^N. Then the optical flow matrix W ∈ R^(2N×F) satisfies [10]:

W = [Ψ Φ] [ω_1 ... ω_F; v_1 ... v_F] = S M^T,   (11)

with [Ψ Φ] ∈ R^(2N×6) and the stacked velocities in R^(6×F), where ω_f and v_f are the rotational and linear velocities, respectively, of the object relative to the camera between the zeroth and the f-th frames. We call S ∈ R^(2N×6) the structure matrix and M ∈ R^(F×6) the motion matrix. We conclude that the optical flow matrix generated by a single moving body undergoing a general translation and rotation has rank 6. Such a rank constraint can be naturally used to derive a factorization method for estimating the relative velocities (ω_f, v_f) and the scales λ_i from image points (x_i, y_i)^T and optical flows (ẋ_if, ẏ_if)^T. We can do so by factorizing W into its motion and structure components. To this end, consider the singular value decomposition (SVD) of W = U Σ V^T, with U ∈ R^(2N×6) and VΣ ∈ R^(F×6), and let S̃ = U and M̃ = VΣ. Then S = S̃A and M = M̃A^(−T) for some nonsingular A ∈ R^(6×6). Let A_k be the k-th column of A. Then the columns of

A must satisfy: S̃ A_(1:3) = Ψ and S̃ A_(4:6) = Φ. Since Ψ is known, A_(1:3) can be immediately computed. The remaining columns of A and the vector of inverse scales {1/λ} ∈ R^N can be obtained, up to scale, as the null vector of the system

[diag({1 − ρx^2}) −S̃_u 0 0; −diag({ρxy}) −S̃_l 0 0; −diag({ρxy}) 0 −S̃_u 0; diag({1 − ρy^2}) 0 −S̃_l 0; −diag({x(1 + ρz)}) 0 0 −S̃_u; −diag({y(1 + ρz)}) 0 0 −S̃_l] [{1/λ}; A_4; A_5; A_6] = 0,

where S̃_u ∈ R^(N×6) and S̃_l ∈ R^(N×6) are the upper and lower parts of S̃, respectively, and each block row restates one column of Φ in (9).

B. Multibody Motion Segmentation

Let us now consider the case of a dynamic scene with n_o independent motions. In this case, the algorithm will proceed in two steps: (1) use the segmentation matrix to associate groups of image measurements that correspond to the same moving object; (2) given the segmentation, apply the algorithm in Section III-A to each group of measurements to estimate the motion of each object.

We first notice from (8) that, for a single motion k, the segmentation matrix W_s ∈ R^(N×2F) can be decomposed as

W_s = S_k M_k^T,   (12)

where the structure matrix S_k ∈ R^(N×10) is given by

S_k = [{xy}, {z + x^2}, {y}, {z + y^2}, {x}, {(1 − ρx^2)/λ}, {ρxy/λ}, {x(1 + ρz)/λ}, {(1 − ρy^2)/λ}, {y(1 + ρz)/λ}]

and the motion matrix M_k ∈ R^(2F×10) contains, for each frame f, the pair of rows (in the basis above, with zeros elsewhere)

[−ω_xf, ω_yf, −ω_zf, 0, 0, v_xf, −v_yf, −v_zf, 0, 0]
[ω_yf, 0, 0, −ω_xf, ω_zf, 0, −v_xf, 0, v_yf, −v_zf].

Hence, for a single object in the scene, the collection of central panoramic optical flows across multiple frames lies in a ten-dimensional subspace of R^(2F). More generally, the segmentation matrix associated with n_o independently moving objects can be decomposed as:

W_s = [S_1; ...; S_(n_o)] [M_1 ... M_(n_o)]^T = S M^T,

where S ∈ R^(N×10n_o), M ∈ R^(2F×10n_o), S_k ∈ R^(N_k×10), M_k ∈ R^(2F×10), N_k is the number of pixels associated with object k for k = 1, ..., n_o, and N = N_1 + ... + N_(n_o).

Since we are assuming that the segmentation of the image points is unknown, the rows of W_s may be in a different order. However, the reordering of the rows of W_s will not affect its rank.

Fig. 3. Eigenvector-based segmentation. Left: elements of the first eigenvector of a noisy W_s matrix. Right: re-ordering of the elements based on their levels.

Assuming that N
≥ 10 n_o and that F ≥ 5 n_o, we conclude that, when the objects undergo a general motion, the number of independent motions n_o can be estimated as n_o = rank(W_s)/10.

One can show [14], [19] that the block-diagonal structure of W_s implies that the entries of its leading singular vector will be the same for pixels corresponding to the same motion, and different otherwise. Thus, as proposed in [14], one can use the entries of the leading singular vector of W_s as an effective criterion for segmenting the pixels of the current frame into a collection of n_o groups corresponding to the independent motions. Figure 3 shows the elements of the leading singular vector for a noisy W_s matrix, and its re-ordering after thresholding its entries. Once the optical flow has been segmented into independent motions, we can apply the factorization algorithm described in Section III-A separately on each group of measurements to estimate the motion of each object.

IV. ROBOT NAVIGATION: MOTION IN THE X-Y PLANE

The algorithm for infinitesimal motion estimation and segmentation described in Section III assumes that the motion of the objects is general, so that the subspace associated with each object is fully ten-dimensional. However, there are important robotics applications, e.g. ground robot navigation, in which the motion of the robots is restricted, hence the general algorithm described in Section III cannot be directly applied. In this section, we consider the special case where the moving objects are restricted to move only in the X-Y plane. In this case, the angular velocity is ω = (0, 0, ω_z)^T and the linear velocity is v = (v_x, v_y, 0)^T. Hence, the optical flow generated by a single moving body is:

[ẋ; ẏ] = [−y; x] ω_z + (1/λ) [1 − ρx^2, −ρxy; −ρxy, 1 − ρy^2] [v_x; v_y],   (13)

where λ = Z + ξ sqrt(X^2 + Y^2 + Z^2), Z = Z_ground, z is given in (5), and ρ = ξ^2/(1 − z).
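As a quick numeric sanity check of the planar flow model, the sketch below evaluates (13) for a paracatadioptric camera (ξ = 1); the sample point and velocities are made-up values, not taken from the paper's experiments:

```python
import math

def planar_flow(q, omega_z, vx, vy, xi=1.0):
    """Central panoramic optical flow under planar motion, eq. (13)."""
    X, Y, Z = q
    lam = Z + xi * math.sqrt(X**2 + Y**2 + Z**2)   # scale from eq. (2)
    x, y = X / lam, Y / lam                        # image point
    z = Z / lam                                    # retina height, since lambda*b = q
    rho = xi**2 / (1.0 - z)
    xdot = -y * omega_z + ((1 - rho * x**2) * vx - rho * x * y * vy) / lam
    ydot =  x * omega_z + (-rho * x * y * vx + (1 - rho * y**2) * vy) / lam
    return xdot, ydot

# q = (1, 0.5, -1) projects to (x, y) = (2, 1) with lambda = 0.5, z = -2, rho = 1/3.
print(planar_flow((1.0, 0.5, -1.0), omega_z=0.2, vx=0.3, vy=0.0))  # (-0.4, 0.0)
```

Here the rotational term contributes (−y, x)ω_z = (−0.2, 0.4) and the translational term (−0.2, −0.4), so the two components partially cancel, which illustrates why rotation and translation must be separated through the factorization rather than read off the flow directly.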

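The rank constraints driving the estimation of n_o can be verified numerically. The sketch below (assuming NumPy is available; all points and motions are synthetic, and for planar motion each object contributes a five-dimensional subspace, as specialized next) builds the segmentation matrix W_s for two independently moving planar objects from the flow model of (13):

```python
import numpy as np

rng = np.random.default_rng(0)
xi, F = 1.0, 20  # paracatadioptric camera, 20 frames

def flows(points, motions):
    """Rows of W_s: per-pixel flows [xdot_1 ydot_1 ... xdot_F ydot_F], eq. (13)."""
    rows = []
    for X, Y, Z in points:
        lam = Z + xi * np.sqrt(X**2 + Y**2 + Z**2)
        x, y, z = X / lam, Y / lam, Z / lam
        rho = xi**2 / (1.0 - z)
        row = []
        for wz, vx, vy in motions:
            row += [-y * wz + ((1 - rho * x**2) * vx - rho * x * y * vy) / lam,
                     x * wz + (-rho * x * y * vx + (1 - rho * y**2) * vy) / lam]
        rows.append(row)
    return np.array(rows)

def rand_points(n):
    """Synthetic 3D points with Z < 0, kept away from the optical axis."""
    sx, sy = rng.choice([-1, 1], n), rng.choice([-1, 1], n)
    return np.column_stack([sx * rng.uniform(0.5, 2, n),
                            sy * rng.uniform(0.5, 2, n),
                            rng.uniform(-3, -1, n)])

motions1 = rng.uniform(-1, 1, (F, 3))   # (omega_z, v_x, v_y) per frame, object 1
motions2 = rng.uniform(-1, 1, (F, 3))   # an independent motion, object 2
Ws = np.vstack([flows(rand_points(30), motions1),
                flows(rand_points(30), motions2)])   # N = 60 pixels, 2F columns

rank = np.linalg.matrix_rank(Ws)
print(rank, rank // 5)   # rank 10 -> two independent five-dimensional motions
```

With a single object the same construction yields rank 5, so counting motions reduces to a numerical rank computation; in practice the rank must be estimated from the singular value spectrum of a noisy W_s.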
A. Motion Estimation in the X-Y Plane

The optical flow matrix for a single body moving with velocities ω_f = (0, 0, ω_zf)^T, v_f = (v_xf, v_yf, 0)^T in frame f can be written as

W = [Ψ Φ] [ω_z1 ... ω_zF; v_x1 ... v_xF; v_y1 ... v_yF] = S M^T,

with [Ψ Φ] ∈ R^(2N×3), the stacked velocities in R^(3×F), and the matrices of rotational flows Ψ ∈ R^(2N×1) and translational flows Φ ∈ R^(2N×2) given by

Ψ = [{−y}; {x}],   Φ = [{(1 − ρx^2)/λ} {−ρxy/λ}; {−ρxy/λ} {(1 − ρy^2)/λ}].

We conclude that, in the case of a single object undergoing general motion in the X-Y plane, the optical flow matrix satisfies rank(W) = 3. Therefore, as in Section III-A, we consider the SVD of W = U Σ V^T and let S̃ = U ∈ R^(2N×3) and M̃ = VΣ ∈ R^(F×3). Then S = S̃A and M = M̃A^(−T) for some nonsingular A ∈ R^(3×3). Let A_k be the k-th column of A. Then the columns of A must satisfy: S̃ A_1 = Ψ and S̃ A_(2:3) = Φ. Since Ψ is known, A_1 can be immediately computed. The remaining columns of A and the vector of inverse scales {1/λ} ∈ R^N can be obtained up to scale from:

[diag({1 − ρx^2}) −S̃_u 0; −diag({ρxy}) −S̃_l 0; −diag({ρxy}) 0 −S̃_u; diag({1 − ρy^2}) 0 −S̃_l] [{1/λ}; A_2; A_3] = 0,

where S̃_u ∈ R^(N×3) and S̃_l ∈ R^(N×3) are the upper and lower parts of S̃, respectively.

B. Motion Segmentation in the X-Y Plane

We now specialize the procedure of Section III-B to the case of planar motion. Following (13), the structure and motion matrices become

S_k = [{y}, {x}, {(1 − ρx^2)/λ}, {ρxy/λ}, {(1 − ρy^2)/λ}] ∈ R^(N×5),

and M_k ∈ R^(2F×5) contains, for each frame f, the pair of rows

[−ω_zf, 0, v_xf, −v_yf, 0]
[0, ω_zf, 0, −v_xf, v_yf].

We conclude that the collection of central panoramic optical flows for a single object moving in the X-Y plane lies in a 5-dimensional subspace of R^(2F). Therefore, the number of independent motions can be estimated as n_o = rank(W_s)/5. Then, the segmentation of the image measurements can be obtained as before from the leading singular vector of W_s. Once the optical flow has been segmented into independent motions, we can use the single-body factorization algorithm of Section IV-A to separately estimate the motion and structure of each independently moving object.

V. EXPERIMENTS

Here we evaluate the performance of the proposed multibody motion estimation and segmentation
algorithm in the case where two independently moving mobile robots are viewed by a static paracatadioptric camera (ξ = 1). The robots are equipped with sensors with an accuracy of 2 cm, and we use the robots' measurements to evaluate the performance of our motion estimation algorithm. We grabbed 180 images of size 240 × 240 pixels at a frame rate of 5 Hz. The optical flow was computed directly in the image plane using Black's algorithm, available at http://www.cs.brown.edu/people/black/ignc.html.

Fig. 1 shows a sample of the motion segmentation based on the optical flow. On the left, the optical flow generated by the two moving robots is shown; on the right is the segmentation of the pixels corresponding to the independent motions. The two moving robots are segmented very well from the static background.

Fig. 4 and Fig. 5 show the output of our motion estimation algorithm for the two moving robots. These figures plot the estimated translational velocities (v_x, v_y) and rotational velocity ω_z for the robots as a function of time, in comparison with the values obtained by the on-board sensors. Fig. 6 shows the root mean squared error for the motion estimates of the two robots. The estimates of linear velocity are within 0.015 m/s of the measurements. The estimates of angular velocity are noisier than the estimates of linear velocity, because the optical flow due to rotation is smaller than the one due to translation.

VI. CONCLUSIONS AND FUTURE WORK

We have presented an algorithm for infinitesimal motion estimation and segmentation from multiple central panoramic views. Our algorithm is a factorization approach based on the fact that the optical flows generated by rigidly moving objects across many frames lie in orthogonal ten-dimensional subspaces of a higher-dimensional space. We presented experimental results showing that our algorithm can effectively segment and estimate the motion of multiple moving objects from multiple catadioptric views.

Future work will consider relaxing the constraint on the motion of the objects being fully
ten-dimensional. While the case of five-dimensional motion in the X-Y plane was easily handled with minor modifications of the general algorithm, the case of planar motion in an arbitrary plane is much more complex, as demonstrated in [16] for the case of a single motion. Future work will also consider the extension of the current algorithm to the case of uncalibrated central panoramic cameras.

VII. ACKNOWLEDGMENTS

We thank the support of ONR grant N00014-00-1-0621.

Fig. 4. Comparing the output of our vision-based motion estimation algorithm with the measured data for robot 1: v_x (m/s), v_y (m/s), and ω (rad/s) versus t (sec).

Fig. 5. Comparing the output of our vision-based motion estimation algorithm with the measured data for robot 2: v_x (m/s), v_y (m/s), and ω (rad/s) versus t (sec).

Fig. 6. The RMS error of the motion estimates (v_x, v_y in m/s, ω in rad/s) for the two robots.

VIII. REFERENCES

[1] S. Baker and S. Nayar. A theory of single-viewpoint catadioptric image formation. International Journal of Computer Vision, 35:175-196, 1999.
[2] J. Barreto and H. Araujo. Geometric properties of central catadioptric line images. In 7th European Conference on Computer Vision, pages 237-251, May 2002.
[3] J. Costeira and T. Kanade. A multibody factorization method for independently moving objects. International Journal of Computer Vision, 29(3):159-179, 1998.
[4] C. Geyer and K. Daniilidis. A unifying theory for central panoramic systems and practical implications. In European Conference on Computer Vision, pages 445-461, 2000.
[5] C. Geyer and K. Daniilidis. Structure and motion from uncalibrated catadioptric views. In IEEE Conf. on Computer Vision and Pattern Recognition, pages 279-286, 2001.
[6] C. Geyer and K. Daniilidis. Paracatadioptric camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(4), April 2002.
[7] J. Gluckman and S.K. Nayar. Ego-motion and omnidirectional cameras. In Proceedings of the IEEE 6th International Conference on Computer Vision, pages 999-1005, 1998.
[8] M. Machline, L. Zelnik-Manor, and M. Irani. Multi-body segmentation: Revisiting motion consistency. In Workshop on Vision and Modeling of Dynamic Scenes, ECCV 2002.
[9] C.J. Poelman and T. Kanade. A paraperspective factorization method for shape and motion recovery. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(3):206-218, 1997.
[10] O. Shakernia, R. Vidal, and S. Sastry. Infinitesimal motion estimation from multiple central panoramic views. In IEEE Workshop on Motion and Video Computing, pages 229-234, 2002.
[11] T. Svoboda, T. Pajdla, and V. Hlavac. Motion estimation using panoramic cameras. In IEEE Conference on Intelligent Vehicles, pages 335-340, 1998.
[12] C. Tomasi and T. Kanade. Shape and motion from image streams under orthography. International Journal of Computer Vision, 9(2):137-154, 1992.
[13] R.F. Vassallo, J. Santos-Victor, and J. Schneebeli. A general approach for egomotion estimation with omnidirectional images. In IEEE Workshop on Omnidirectional Vision, pages 97-103, 2002.
[14] R. Vidal, A. Ames, and S. Sastry. Polynomial segmentation: an analytic solution to eigenvector segmentation. In IEEE Conference on Computer Vision and Pattern Recognition, 2003. Submitted.
[15] R. Vidal, Y. Ma, S. Soatto, and S. Sastry. Two-view multibody structure from motion. International Journal of Computer Vision, 2002. Submitted.
[16] R. Vidal and J. Oliensis. Structure from planar motions with small baselines. In European Conference on Computer Vision (LNCS 2351), volume 2, pages 383-398, 2002.
[17] R. Vidal, O. Shakernia, and S. Sastry. Formation control of nonholonomic mobile robots with omnidirectional visual servoing and motion segmentation. In IEEE International Conference on Robotics and Automation, 2003.
[18] R. Vidal, S. Soatto, and S. Sastry. A factorization method for multibody motion estimation and segmentation. In Fortieth Annual Allerton Conference on Communication, Control and Computing, pages 1625-1634, 2002.
[19] Y. Weiss. Segmentation using eigenvectors: A unifying view. In IEEE International Conference on Computer Vision, pages 975-982, 1999.