
Omni Flow

Libor Spacek
Department of Computer Science, University of Essex, Colchester, CO4 3SQ, UK.

Abstract

Catadioptric omnidirectional sensors (catadioptric cameras) capture instantaneous images with a panoramic 360° field of view. The entire surroundings are projected via a circularly symmetrical mirror onto the image. Nothing disappears from view, even when objects move and the robot rotates. Catadioptric cameras are therefore very effective in dynamic visual guidance applications such as tracking of objects, obstacle avoidance, site modelling, mapping, simultaneous localisation and mapping (SLAM), visual guidance, and navigation. Computation of image velocities (optic flow) is central to dynamic computer vision applications in robotics. This paper exploits the specific properties of catadioptric projection via conical mirrors to derive a simple direct solution for optic flow, named Omni Flow to emphasise its omnidirectional scope. Unlike earlier optic flow methods, Omni Flow is non-iterative and therefore better suited to real-time robotics applications.

Keywords: omni flow, optic flow, omnidirectional vision, catadioptric camera, mobile robotics.

1. Introduction

Early attempts at using omnidirectional sensors included camera clusters (Swaminathan and Nayar, 2000) and various arrangements of mechanically rotating cameras and planar mirrors (Rees, 1970), (Kang and Szeliski, 1997), (Ishiguro et al., 1992). These mostly had problems with registration, motion, or both. Fisheye lens cameras have also been used to increase the field of view (Shah and Aggarwal, 1997), but they proved difficult because of their irreversible distortion of nearby objects and the lack of a single viewpoint. Single viewpoint projection geometry exists when the light rays arriving from all directions intersect at a single point, known as the (single) effective viewpoint.
For example, by placing the centre of the perspective camera lens at the outer focus of a hyperbolic mirror, the inner focus then becomes the single effective viewpoint. A single viewpoint is generally thought to be necessary for an accurate unwarping of images and for an accurate perspective projection, which is relied on by most current computer vision methods (Baker and Nayar, 1999). The single viewpoint projection has been endorsed and recommended by (Baker and Nayar, 1998, 2001), (Daniilidis and Geyer, 2000), (Geyer and Daniilidis, 2000a,b, 2002b), (Svoboda and Pajdla, 2002), and others. There have been few attempts at analysing multi-viewpoint sensors (Swaminathan and Nayar, 2001), (Fiala and Basu, 2002), (Spacek, 2004), although various people (Yagi and Kawato, 1990) used them previously without analysis.

Catadioptric sensors (Nayar, 1997) consist of a fixed dioptric camera, usually mounted vertically, plus a fixed rotationally symmetrical mirror suspended above or below the camera. There are a few different shapes of mirrors in use with catadioptric sensors; they are discussed and compared in (Ishiguro, 1998). The advantages of catadioptric sensors in general derive from the fact that, unlike the rotating cameras, their scanning of the surroundings is almost instantaneous, the camera exposure time usually being shorter than the full circle mechanical rotation time. A shorter exposure time means fewer image capture problems caused by motion and vibration of the camera, or by moving objects. Specifically, in the context of this paper, it allows a faster frame rate and therefore a meaningful computation of the omnidirectional optic flow (Gluckman and Nayar, 1998), (Vassallo et al., 2002). Use in dynamic environments is clearly an important consideration, especially as one of the chief benefits of omnidirectional vision in general is the ability to retain objects in view even when their bearings have changed suddenly and significantly.
Catadioptric omnidirectional sensors are therefore ideally suited to visual navigation (Rushant and Spacek, 1998), (Winters et al., 2000), visual guidance applications (Pajdla and Hlavac, 1999), site mapping (Yagi et al., 1995), stereopsis (Gluckman et al., 1998), motion analysis (Yagi et al., 1996), and optic flow. Spacek (2003) obtained perspective projection with a multi-viewpoint conical mirror, leading to a simple and effective solution for coaxial omnidirectional stereopsis. This paper builds directly on those results, summarised here, to solve the omni flow problem.

2. Perspective Projection via Conical Mirror

The image of a rotationally symmetric mirror viewed along its axis of symmetry is circular; see Figures 1 and 2. It is convenient to use the polar coordinates (r_i, θ) to represent the image positions and the related cylindrical coordinates (r, θ, h) for the 3D scene. See Figure 3 for the cross section, in the θ direction, of the perspective projection via a conical mirror with a 90° angle at the tip (it can be generalised for any angle). Suppose that we are projecting a scene point P located at the 3D coordinates (r, θ, h). The point P, the centre of the lens, and the image projection of P all lie on the same ray and can therefore be modelled as collinear (see Figure 3). Let us denote the image radius value of the projection of P by h_i. The perspective projection formula relating h_i to h is obtained directly from the collinearity property:

    h_i = vh / (d + r)    (1)

The h_i values are always positive (denoting an image radius), and v is the distance of the image plane behind the centre of the thin lens in Gaussian optics. Equation (1) is simpler than the corresponding projection equations for the radially curved mirrors. The calibration of v is obtained by considering a point on the edge of the mirror and substituting the image radius of the mirror r_m for h_i, and the real radius of the mirror R for both h and r, in equation (1):

    v = (d/R + 1) r_m    (2)

In practice, r_m is determined by locating the edge (outer contour) of the mirror in the image, using the Hough transform or other methods.

2.1 Registration

The above projection is valid and accurate when the axis of view coincides with the axis of the mirror. Registration may need to be performed to find the two translation and three rotation parameters needed to align the axes. Existing registration methods apply in this situation. Geyer and Daniilidis (2001, 2002a) present good solutions to this problem within the context of omnidirectional vision.
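Equations (1) and (2) amount to a few lines of code. The sketch below is illustrative only; the parameter values (d, R, r_m) are hypothetical, not taken from the paper's apparatus:

```python
def project(h, r, d, v):
    """Eq. (1): image radius h_i of a scene point at cylindrical
    coordinates (r, theta, h), projected via the conical mirror."""
    return v * h / (d + r)

def calibrate_v(d, R, r_m):
    """Eq. (2): recover v from the mirror's real radius R and its
    image radius r_m (found e.g. by a Hough transform on the mirror edge)."""
    return (d / R + 1.0) * r_m

# Hypothetical numbers, purely illustrative:
d, R, r_m = 0.10, 0.04, 200.0   # metres, metres, pixels
v = calibrate_v(d, R, r_m)
# Sanity check: a point on the mirror edge (h = r = R) must project
# back onto the mirror edge circle in the image:
assert abs(project(R, R, d, v) - r_m) < 1e-9
```

The sanity check simply restates the calibration condition, so it must hold by construction; it guards against sign or ordering mistakes when transcribing the formulas.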
Straight lines in the 3D world generally become conic section curves when projected. However, lines which are coplanar with the axis of the mirror project into radial lines, and concentric circles around the mirror project again into concentric circles. These properties can be utilised for a simple test card registration method, where the test card is of the shooting target type, consisting of cross-hairs and concentric circles centred on the cone axis.

3. Coaxial Omni Stereo

Various arrangements have been proposed for binocular systems using catadioptric sensors. Two mirrors situated side by side can be used to compute the distance of objects in terms of the disparity measured as the arising difference in the angles θ (Brassart et al., 2000). However, such an arrangement is not truly omnidirectional, as a large part of the scene will be obstructed by the other catadioptric sensor. It is better to arrange the cameras coaxially to avoid this problem. The coaxial arrangement has the further major advantage of having radial epipolar lines of the same orientation θ. A coaxial omni stereo using telecentric (orthographic) optics and parabolic mirrors was demonstrated by Gluckman et al. (1998).

The radial distance r is measured from the common axis of the mirrors to any 3D scene point P in the region that is visible by both cameras (the common region); see Figure 4. In order to obtain the triangulation formula, we use two instances of equation (1) for two coaxial mirrors separated by a distance s (measured along the h axis). We assume here that the parameters v and d are the same for both cameras, though this can easily be generalised if necessary:

    (d + r) h_1 = v(h - s)    (3)
    (d + r) h_2 = vh          (4)

Subtracting (3) from (4) and manipulating a little, we obtain the triangulation formula:

    r = vs / (h_2 - h_1) - d    (5)

where h_1 and h_2 are the two image radii of P in the two images.
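The triangulation can be checked by a round trip through the projection equation: project a chosen scene point through both mirrors via equation (1), then recover its radial distance via equation (5). All numeric values below are arbitrary illustrations:

```python
def project(h, r, d, v):
    """Eq. (1): image radius of the scene point (r, theta, h)."""
    return v * h / (d + r)

def triangulate(h1, h2, v, s, d):
    """Eq. (5): radial distance r from the radial disparity h2 - h1."""
    return v * s / (h2 - h1) - d

# Hypothetical rig parameters and scene point (illustrative only):
v, d, s = 700.0, 0.10, 0.30
r_true, h_true = 2.0, 1.5
h1 = project(h_true - s, r_true, d, v)   # eq. (3): first mirror sees P at height h - s
h2 = project(h_true, r_true, d, v)       # eq. (4): second mirror sees P at height h
assert abs(triangulate(h1, h2, v, s, d) - r_true) < 1e-9
```

As the text notes, the only unknown cancelled out is h; the extra distance d of the reflected camera is subtracted explicitly.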
This is very similar to the usual triangulation formula from classical side-by-side stereopsis, but here the disparity is radial. Note that the extra distance d of the reflected camera is correctly subtracted out. The familiarity of this formula is not surprising, as the two reflected cameras form a classical stereo system which happens to have a vertical baseline.

4. Omni Flow

Let f(x_i, y_i) be the omnidirectional input image in the usual rectangular coordinates. Find the centre (x_c, y_c) and the radius r_m of the mirror visible in f. The following equations then transform between the initial rectangular coordinates (x_i, y_i), the polar coordinates (h_i, θ_i), and (optionally) the rectangular coordinates (x, y) of the unwarped image of dimensions x_m, y_m (in pixel units):

    x_i = x_c + h_i cos θ_i = x_c + r_m (y / y_m) cos(2π x / x_m)    (6)
    y_i = y_c + h_i sin θ_i = y_c + r_m (y / y_m) sin(2π x / x_m)    (7)
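Equations (6) and (7) can be sketched directly; the image dimensions and mirror centre below are hypothetical placeholders:

```python
import math

def unwarp_to_input(x, y, x_c, y_c, r_m, x_m, y_m):
    """Eqs. (6)-(7): map pixel (x, y) of an x_m-by-y_m unwarped image
    to coordinates (x_i, y_i) in the original omnidirectional image."""
    h_i = r_m * y / y_m                 # image radius grows with unwarped row y
    theta_i = 2.0 * math.pi * x / x_m   # full turn across the unwarped width
    return x_c + h_i * math.cos(theta_i), y_c + h_i * math.sin(theta_i)

# Illustrative check: the top row y = y_m of the unwarped image lands on
# the mirror edge circle of radius r_m.
x_c, y_c, r_m, x_m, y_m = 320.0, 240.0, 200.0, 720, 180
xi, yi = unwarp_to_input(0, y_m, x_c, y_c, r_m, x_m, y_m)
assert abs(xi - (x_c + r_m)) < 1e-9 and abs(yi - y_c) < 1e-9
```

Note the direction of the mapping: it samples the input image at the (generally sub-pixel) position corresponding to each unwarped pixel, which is the usual way to unwarp without holes.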

4.1 Optic Flow Assumptions

- The image function is differentiable. We enforce this by globally fitting a differentiable function to the image data. It is commonly emulated by locally smoothing the image.
- The motion of the image function surface f(h_i, θ_i, t) is smooth (the frame-rate is sufficiently fast). At least two successive frames are available.
- The viewed objects reflect light equally in all directions and there are no sudden changes in illumination, i.e. the grey level values of the same object do not change between two successive frames. This is not strictly true for highly specular (shiny) surfaces. However, it is a reasonable assumption to make for most materials involved in small motions and small changes of viewing angle. This assumption can be expressed succinctly as: df = 0.

The grey level values at a fixed image point do, however, change with time, as they image (depict the projections of) different points in the moving 3D scene. Keeping h_i, θ_i fixed is expressed using the partial derivative ∂f/∂t. The changing grey levels mean that ∂f/∂t ≠ 0.

4.2 Omni Flow Equation

Suppose that an object point is imaged in the first time frame as f(h_i, θ_i, t) and in the next frame as f(h_i + dh_i, θ_i + dθ_i, t + dt). The point has moved across the image by ds = sqrt(dh_i² + h_i² dθ_i²) during the small time interval dt. We relate the two images by using the Taylor series in three variables, using polar coordinates. It is written here compactly using the differential operator raised to the power n:

    f(h_i + dh_i, θ_i + dθ_i, t + dt) = Σ_{n=0}^{∞} (1/n!) (dh_i ∂/∂h_i + dθ_i ∂/∂θ_i + dt ∂/∂t)^n f(h_i, θ_i, t)

The first term of the series, when n = 0, is simply f(h_i, θ_i, t). The assumption that the grey level of the same viewed point is the same in both frames can be written as:

    f(h_i + dh_i, θ_i + dθ_i, t + dt) = f(h_i, θ_i, t)

We subtract these terms from both sides of the above Taylor series, getting:

    Σ_{n=1}^{∞} (1/n!) (dh_i ∂/∂h_i + dθ_i ∂/∂θ_i + dt ∂/∂t)^n f(h_i, θ_i, t) = 0

Disregarding the second and higher order terms (using only n = 1), we get the following approximation:

    (dh_i ∂/∂h_i + dθ_i ∂/∂θ_i + dt ∂/∂t) f(h_i, θ_i, t) ≈ 0

Dividing by dt and rearranging, we obtain the omni flow version of the well known optic flow equation, written compactly as:

    ∂f/∂t ≈ -v · ∇f    (8)

where v = (dh_i/dt, h_i dθ_i/dt) is the polar optic flow vector to be found, ∇f = (∂f(h_i, θ_i, t)/∂h_i, (1/h_i) ∂f(h_i, θ_i, t)/∂θ_i) is the image gradient returned by our polar edge finder applied to the first frame (at the first image point), and · denotes the scalar product of the two vectors v and ∇f.

This equation is underconstrained. Its classical optic flow equivalent required additional assumptions and usually an iterative solution. Simple partial solutions of the optic flow and the omni flow exist at those image points where one of the image gradient components is sufficiently close to zero. When this is the case, the other component of the flow vector can be obtained directly from equation (8).

4.3 Coaxial Omni Flow

The full direct solution of the omni flow equation will be obtained by using two mirrors in the same coaxial arrangement as that used by the omni stereo. Omni stereo and omni flow are to run concurrently, sharing the same apparatus and the same edge-finder results. Let f_1 and f_2 be two (coaxial) stereo images, both obtained at the same time t = 1. Suppose that f_1(h_1, θ_1, 1) and f_2(h_2, θ_2, 1) are imaging the same object point, and that h_1, h_2 have been found by the radial stereo matcher. Let v_1 and v_2 be the two omni flow vectors at the two matched image points. We seek first to relate, and then to determine, v_1 and v_2. We adopt the following notation shortcuts: v_1 = (dh_1/dt, h_1 dθ_1/dt) and v_2 = (dh_2/dt, h_2 dθ_2/dt). Here dh_1/dt means dh_i/dt found (evaluated) at (h_1, θ_1), and similarly for the other components. The components of ∇f_1, evaluated at (h_1, θ_1, 1), will be written as: (∂f_1/∂h_1, (1/h_1) ∂f_1/∂θ_1).
The components of ∇f_2, evaluated at (h_2, θ_2, 1), will be written as: (∂f_2/∂h_2, (1/h_2) ∂f_2/∂θ_2). There are now two instances of the omni flow equation:

    ∂f_1/∂t ≈ -v_1 · ∇f_1    (9)
    ∂f_2/∂t ≈ -v_2 · ∇f_2    (10)

Note that, because of the coaxial arrangement, θ_1 = θ_2 and dθ_1/dt = dθ_2/dt, i.e. the angular positions and the angular velocities are the same at the two matched image positions. Next, consider the object's radial velocity dr/dt, using the same notation shortcuts:

    dr/dt = (∂r/∂h_1)(dh_1/dt) + (∂r/∂θ_1)(dθ_1/dt) = (∂r/∂h_2)(dh_2/dt) + (∂r/∂θ_2)(dθ_2/dt)    (11)

The angular terms on both sides are equal, since θ_1 = θ_2 and dθ_1/dt = dθ_2/dt, and therefore:

    (∂r/∂h_1)(dh_1/dt) = (∂r/∂h_2)(dh_2/dt)    (12)

Rearranging and differentiating equation (1), we obtain:

    ∂r/∂h_i = -vh/h_i² = -(d + r)/h_i    (13)

Substituting two instances of equation (13) into equation (12), we get:

    ((d + r)/h_1)(dh_1/dt) = ((d + r)/h_2)(dh_2/dt), hence dh_2/dt = (h_2/h_1)(dh_1/dt)    (14)

Substituting equation (14) into equation (10):

    ∂f_2/∂t ≈ -((h_2/h_1)(dh_1/dt), h_2 (dθ_1/dt)) · ∇f_2    (15)

Equations (9) and (15) have the direct solution:

    dh_1/dt ≈ D_1/D,    dθ_1/dt ≈ D_2/D    (16)

where D ≠ 0 and

    D   = (1/h_2)(∂f_1/∂h_1)(∂f_2/∂θ_2) - (1/h_1)(∂f_1/∂θ_1)(∂f_2/∂h_2)
    D_1 = (1/h_2)[(∂f_1/∂θ_1)(∂f_2/∂t) - (∂f_1/∂t)(∂f_2/∂θ_2)]
    D_2 = (1/h_1)(∂f_1/∂t)(∂f_2/∂h_2) - (1/h_2)(∂f_1/∂h_1)(∂f_2/∂t)

5. Implementation

We apply the forward Discrete Cosine Transform (DCT) to the input image f(x_i, y_i) as follows:

    c(q, p) = (a(q, p) / (XY)) Σ_{x_i=0}^{X-1} Σ_{y_i=0}^{Y-1} f(x_i, y_i) cos(πq(y_i + 0.5)/Y) cos(πp(x_i + 0.5)/X)    (17)

where a(q, p) = 1 when q = p = 0; a(q, p) = 2 when q ≠ p and qp = 0; a(q, p) = 4 otherwise. This normalised definition of a(q, p) simplifies the inverse DCT. X, Y are the dimensions of the discrete input image, and c(q, p) is the normalised coefficients array of dimensions P, Q produced by the forward DCT. Choosing P = X/4 and Q = Y/4 is usually adequate for a nearly perfect continuous fit to the image data. The inverse DCT is then:

    f(x_i, y_i) ≈ Σ_{p=0}^{P-1} Σ_{q=0}^{Q-1} c(q, p) cos(πq(y_i + 0.5)/Y) cos(πp(x_i + 0.5)/X)    (18)

5.1 Polar Edge Finding

Edges are located by the first derivative polar edge-finder proposed by Spacek (2003), using the inverse DCT and the polar coordinates (h_i, θ_i) of the input image. This is a convenient way to compute the partial derivatives of the input image in the h_i and θ_i directions. To find the image gradient vector, we differentiate equation (18) using equations (6) and (7), instead of differentiating the input image. This is legitimate, as the inverse DCT has a finite number of terms, PQ.
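The DCT pair (17)-(18) can be illustrated with a small pure-Python sketch. With the full coefficient set (P = X, Q = Y) the reconstruction is exact, which verifies the normalisation a(q, p); the test image and its size are arbitrary assumptions:

```python
import math

def forward_dct(f, X, Y, P, Q):
    """Forward DCT, eq. (17): coefficients c[q][p] from image f[x][y]."""
    def a(q, p):
        if q == 0 and p == 0:
            return 1.0
        if q == 0 or p == 0:
            return 2.0
        return 4.0
    c = [[0.0] * P for _ in range(Q)]
    for q in range(Q):
        for p in range(P):
            s = sum(f[x][y]
                    * math.cos(math.pi * q * (y + 0.5) / Y)
                    * math.cos(math.pi * p * (x + 0.5) / X)
                    for x in range(X) for y in range(Y))
            c[q][p] = a(q, p) * s / (X * Y)
    return c

def inverse_dct(c, X, Y, P, Q, x, y):
    """Inverse DCT, eq. (18), evaluated at a (possibly sub-pixel) point (x, y)."""
    return sum(c[q][p]
               * math.cos(math.pi * q * (y + 0.5) / Y)
               * math.cos(math.pi * p * (x + 0.5) / X)
               for q in range(Q) for p in range(P))

# Round trip on an arbitrary smooth 8x8 test image, full coefficient set:
X = Y = 8
f = [[math.sin(0.5 * x) + 0.3 * y for y in range(Y)] for x in range(X)]
c = forward_dct(f, X, Y, X, Y)
err = max(abs(inverse_dct(c, X, Y, X, Y, x, y) - f[x][y])
          for x in range(X) for y in range(Y))
assert err < 1e-9
```

Truncating to P = X/4, Q = Y/4, as the text suggests, replaces the exact reconstruction with a smooth approximate fit, which is precisely what the differentiability assumption of Section 4.1 requires.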
Differentiating with respect to h_i, we get:

    ∂f(x_i, y_i)/∂h_i = -Σ_{q=0}^{Q-1} Σ_{p=0}^{P-1} [π c(q, p) / sqrt((x_i - x_c)² + (y_i - y_c)²)] × [ (q(y_i - y_c)/Y) sin(πq(y_i + 0.5)/Y) cos(πp(x_i + 0.5)/X) + (p(x_i - x_c)/X) sin(πp(x_i + 0.5)/X) cos(πq(y_i + 0.5)/Y) ]    (19)

Differentiating with respect to θ_i produces the second component of the polar image gradient:

    ∂f(x_i, y_i)/∂θ_i = Σ_{q=0}^{Q-1} Σ_{p=0}^{P-1} π c(q, p) [ (p(y_i - y_c)/X) sin(πp(x_i + 0.5)/X) cos(πq(y_i + 0.5)/Y) - (q(x_i - x_c)/Y) sin(πq(y_i + 0.5)/Y) cos(πp(x_i + 0.5)/X) ]    (20)

We now have a global gradient function (a continuous polar edge map) of the input image. It follows that it is not necessary to generate edge maps of the whole images when doing the stereo matching. The image gradient can be evaluated on demand at any sub-pixel point. This polar edge finding approach entirely avoids the slow unwarping process; the unwarping is only needed for the optional convenience of human viewing of the partial results. This methodology should be of interest to omnidirectional vision generally, as it can be used with any rotationally symmetric mirrors. Any higher derivatives, or other functions, can be applied in the same way to the inverse DCT fit to the input image data.
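Once the gradients and time derivatives are available, the direct solution (16) is a small per-point 2×2 solve by Cramer's rule. The sketch below uses arbitrary illustrative gradient values: it synthesises consistent time derivatives from a chosen flow via equations (9) and (15), then recovers that flow:

```python
def omni_flow(f1_t, f1_h, f1_th, f2_t, f2_h, f2_th, h1, h2, eps=1e-12):
    """Direct solution (16) of the paired omni flow equations (9) and (15).
    Arguments are the partial derivatives of f_1 and f_2 with respect to
    t, h and theta at the matched points (h1, theta) and (h2, theta).
    Returns (dh1/dt, dtheta1/dt), or None when D is (near) zero."""
    D = f1_h * f2_th / h2 - f1_th * f2_h / h1
    if abs(D) < eps:
        return None            # gradients (near-)parallel: no direct solution here
    D1 = (f1_th * f2_t - f1_t * f2_th) / h2
    D2 = f1_t * f2_h / h1 - f1_h * f2_t / h2
    return (D1 / D, D2 / D)

# Illustrative round trip (all values arbitrary):
h1, h2 = 100.0, 125.0
dh1, dth1 = 2.0, 0.03                  # the flow we will try to recover
f1_h, f1_th = 0.8, -1.5                # assumed gradients of f_1
f2_h, f2_th = -0.4, 2.0                # assumed gradients of f_2
f1_t = -(dh1 * f1_h + dth1 * f1_th)               # eq. (9), expanded
f2_t = -((h2 / h1) * dh1 * f2_h + dth1 * f2_th)   # eq. (15), expanded
u, w = omni_flow(f1_t, f1_h, f1_th, f2_t, f2_h, f2_th, h1, h2)
assert abs(u - dh1) < 1e-9 and abs(w - dth1) < 1e-9
```

The None branch corresponds to the condition D ≠ 0 in (16): where the two image gradients are parallel, the pair of equations remains underconstrained, just as a single omni flow equation is.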

The polar edge finder provides feature selection for the radial stereo matching, and it also supports the Omni Flow computation.

5.2 Radial Matching

The outline of the radial stereo matching algorithm is as follows:

1. Given a pair of stereo images f_1 and f_2, find all significant points in f_1 where abs(∂f_1/∂h_i) passes some threshold (abs() is the absolute value function). Store the values at such points.
2. At each significant point, evaluate and store ∂f_1/∂θ_i as well.
3. Select the next significant point s in f_1 and note its θ_i value.
4. Find and store, in the same way as above, all significant points along the radial line of orientation θ_i in f_2. If this has already been done for this θ_i, retrieve the results from memory instead.
5. Find the best match for s along this line, looking for the most similar image gradient vector (∂f_2(h_i, θ_i)/∂h_i, ∂f_2(h_i, θ_i)/∂θ_i). Eliminate all potential matches whose disparity is too small, as those objects are too distant to be of interest (e.g. for obstacle avoidance).
6. Compute r by substituting the successfully matched radial position values h_1 and h_2 into the triangulation equation (5).
7. Repeat from 3.

There are other sophisticated stereo matching methods that could be adapted to these circumstances, for example (Sara, 2002). The general benefits of the coaxial omnidirectional stereopsis are both practical (objects do not disappear from view due to vehicle rotation) and theoretical/computational (the epipolar geometry is simpler than in classical stereopsis).

5.3 Omni Flow

After successful radial matching, substitute the obtained values of h_1 and h_2 into the Omni Flow solution equation (16). We do not need the value of r to find the Omni Flow vectors.
The other constants needed are the image polar gradient vectors ∇f_1 and ∇f_2 supplied by the polar edge finder, and the partial time derivatives ∂f_1/∂t, ∂f_2/∂t, which are in practice approximated by the simple frame differences:

    ∂f_1/∂t ≈ f_1(h_1, θ_1, 2) - f_1(h_1, θ_1, 1)
    ∂f_2/∂t ≈ f_2(h_2, θ_1, 2) - f_2(h_2, θ_1, 1)

A more sophisticated approach is to use several time frames and a 3D DCT fit to their data, followed by finding the 3D image gradient analytically. This is a straightforward generalisation of the above 2D DCT methods to 3D DCT, but it is somewhat slower to compute.

6. Conclusion

The main contribution of this paper is the omni flow solution, obtained by combining omni stereo with the omni flow equation. In addition, several important ideas and mathematical solutions have been presented:

- Omnidirectional vision is useful for robotics for practical reasons (not losing sight of objects).
- Omnidirectional coaxial stereo has simple epipolar geometry.
- Polar edge finding is a useful process in omnidirectional vision. It can be implemented cleanly and consistently by using the discrete cosine transform. It avoids computationally expensive unwarping and only needs to be evaluated at selected sub-pixel points.
- Omnidirectional optic flow (omni flow) can be conveniently derived and implemented by using polar image coordinates and the associated cylindrical scene coordinates.
- Combining coaxial omni stereo with omni flow carries no additional penalties. On the contrary, it provides the major benefit of the direct non-iterative solution to the omni flow equation. The omni flow method saves time by re-using the edge-finding results at the selected points, as the same image points are used by both omni stereo and omni flow.

The applied extensions of this theoretical work are:

- Averaging the computed omni flow vectors for some object of interest and using the average flow vector to steer in the same direction, in order to follow (home in on) the object.
- Steering in the opposite direction to avoid the object.
Both steering methods work even for moving robots and independently moving objects, as the omni flow vectors capture the instantaneous relative motion between the moving robot and the moving object.

- Computing dr/dt using equation (11) and using it for applying brakes (collision avoidance) and related applications, such as estimating the time of arrival.
- Using the r, θ of objects at known positions in order to triangulate (fix) the current robot's position (navigation).

- Separating dθ_i/dt into the contributions due to the rotation of the robot and the motion of the object, in order to support navigation and odometry.

References

Baker, S., Nayar, S., 1998. A theory of catadioptric image formation. In: ICCV98.
Baker, S., Nayar, S., November 1999. A theory of single-viewpoint catadioptric image formation. IJCV 32 (2).
Baker, S., Nayar, S., 2001. Single viewpoint catadioptric cameras. In: PV01.
Brassart, E., et al., June 2000. Experimental results got with the omnidirectional vision sensor: SYCLOP. In: IEEE Workshop on Omnidirectional Vision (OMNIVIS 00).
Daniilidis, K., Geyer, C., 2000. Omnidirectional vision: Theory and algorithms. In: ICPR00. Vol. 1.
Fiala, M., Basu, A., 2002. Panoramic stereo reconstruction using non-SVP optics. In: ICPR02. Vol. 4.
Geyer, C., Daniilidis, K., 2000a. Equivalence of catadioptric projections and mappings of the sphere. In: OMNIVIS00.
Geyer, C., Daniilidis, K., 2000b. A unifying theory for central panoramic systems and practical applications. In: ECCV00.
Geyer, C., Daniilidis, K., 2001. Structure and motion from uncalibrated catadioptric views. In: CVPR01. Vol. 1.
Geyer, C., Daniilidis, K., April 2002a. Paracatadioptric camera calibration. IEEE PAMI 24 (4).
Geyer, C., Daniilidis, K., 2002b. Properties of the catadioptric fundamental matrix. In: ECCV02. Vol. 2.
Gluckman, J., Nayar, S., Thorek, K., 1998. Real-time omnidirectional and panoramic stereo.
Gluckman, J., Nayar, S. K., 1998. Ego-motion and omnidirectional cameras. In: ICCV.
Ishiguro, H., 1998. Development of low-cost compact omnidirectional vision sensors and their applications.
Ishiguro, H., Yamamoto, M., Tsuji, S., February 1992. Omni-directional stereo. PAMI 14 (2).
Kang, S., Szeliski, R., November 1997. 3-D scene data recovery using omnidirectional multibaseline stereo. IJCV 25 (2).
Nayar, S., 1997. Catadioptric omnidirectional cameras. In: CVPR97.
Pajdla, T., Hlavac, V., 1999. Zero phase representation of panoramic images for image based localization. In: Computer Analysis of Images and Patterns.
Rees, D., April 1970. Panoramic television viewing system. US Patent No. 3,505,465.
Rushant, K., Spacek, L., January 1998. An autonomous vehicle navigation system using panoramic vision techniques. In: International Symposium on Intelligent Robotic Systems, ISIRS98.
Sara, R., 2002. Finding the largest unambiguous component of stereo matching. In: ECCV (3).
Shah, S., Aggarwal, J., 1997. Mobile robot navigation and scene modeling using stereo fish-eye lens system. MVA 10 (4).
Spacek, L., August 2004. Omnidirectional catadioptric vision sensor with conical mirrors. In: Proc. Towards Intelligent Mobile Robots, TIMR04.
Spacek, L., May 2004. Coaxial omnidirectional stereopsis. In: Proc. European Conference on Computer Vision, ECCV04.
Svoboda, T., Pajdla, T., August 2002. Epipolar geometry for central catadioptric cameras. IJCV 49 (1).
Swaminathan, R., Grossberg, M., Nayar, S., 2001. Caustics of catadioptric cameras. In: ICCV.
Swaminathan, R., Nayar, S. K., 2000. Nonmetric calibration of wide-angle lenses and polycameras. IEEE Transactions on Pattern Analysis and Machine Intelligence 22 (10).
Vassallo, R. F., Santos-Victor, J., Schneebeli, H. J., June 2002. A general approach for egomotion estimation with omnidirectional images. In: OMNIVIS 02, held in conjunction with ECCV02.
Winters, N., Gaspar, J., Lacey, G., Santos-Victor, J., 2000. Omni-directional vision for robot navigation. In: Proc. IEEE Workshop on Omnidirectional Vision, Omnivis00.
Yagi, Y., Kawato, S., 1990. Panoramic scene analysis with conic projection. In: IROS90.
Yagi, Y., Nishii, W., Yamazawa, K., Yachida, M., 1996. Rolling motion estimation for mobile robot by using omnidirectional image sensor HyperOmniVision. In: ICPR96.

Yagi, Y., Nishizawa, Y., Yachida, M., October 1995. Map-based navigation for a mobile robot with omnidirectional image sensor COPIS. IEEE Trans. Robotics and Automation 11 (5).

Figure 1: An omnidirectional image obtained using a hyperbolic mirror and an ordinary perspective camera above it.

Figure 2: A conical mirror image showing a grass area, a paved area, and a part of a jacket, taken with an ordinary perspective camera, also above the mirror. The entire mirror image now depicts useful data.

Figure 3: Cross section of the perspective projection of P via the conical mirror: d is the distance from the tip of the cone to the centre of the thin lens, v is the distance from the centre of the thin lens to the image.

Figure 4: Omni Stereo and Omni Flow apparatus using two coaxial conical mirrors.


More information

Stereo SLAM. Davide Migliore, PhD Department of Electronics and Information, Politecnico di Milano, Italy

Stereo SLAM. Davide Migliore, PhD Department of Electronics and Information, Politecnico di Milano, Italy Stereo SLAM, PhD migliore@elet.polimi.it Department of Electronics and Information, Politecnico di Milano, Italy What is a Stereo Camera? Slide n 2 Do you remember the pin-hole camera? What is a Stereo

More information

Monitoring surrounding areas of truck-trailer combinations

Monitoring surrounding areas of truck-trailer combinations Monitoring surrounding areas of truck-trailer combinations Tobias Ehlgen 1 and Tomas Pajdla 2 1 Daimler-Chrysler Research and Technology, Ulm tobias.ehlgen@daimlerchrysler.com 2 Center of Machine Perception,

More information

Camera Calibration for a Robust Omni-directional Photogrammetry System

Camera Calibration for a Robust Omni-directional Photogrammetry System Camera Calibration for a Robust Omni-directional Photogrammetry System Fuad Khan 1, Michael Chapman 2, Jonathan Li 3 1 Immersive Media Corporation Calgary, Alberta, Canada 2 Ryerson University Toronto,

More information

3D Metric Reconstruction from Uncalibrated Omnidirectional Images

3D Metric Reconstruction from Uncalibrated Omnidirectional Images CENTER FOR MACHINE PERCEPTION CZECH TECHNICAL UNIVERSITY 3D Metric Reconstruction from Uncalibrated Omnidirectional Images Branislav Mičušík, Daniel Martinec and Tomáš Pajdla micusb1@cmp.felk.cvut.cz,

More information

We are IntechOpen, the world s leading publisher of Open Access books Built by scientists, for scientists. International authors and editors

We are IntechOpen, the world s leading publisher of Open Access books Built by scientists, for scientists. International authors and editors We are IntechOpen, the world s leading publisher of Open Access books Built by scientists, for scientists 3,900 116,000 10M Open access books available International authors and editors Downloads Our authors

More information

3D Metric Reconstruction from Uncalibrated Omnidirectional Images

3D Metric Reconstruction from Uncalibrated Omnidirectional Images CENTER FOR MACHINE PERCEPTION CZECH TECHNICAL UNIVERSITY 3D Metric Reconstruction from Uncalibrated Omnidirectional Images Branislav Mičušík, Daniel Martinec and Tomáš Pajdla {micusb1, martid1, pajdla}@cmp.felk.cvut.cz

More information

Omnivergent Stereo-panoramas with a Fish-eye Lens

Omnivergent Stereo-panoramas with a Fish-eye Lens CENTER FOR MACHINE PERCEPTION CZECH TECHNICAL UNIVERSITY Omnivergent Stereo-panoramas with a Fish-eye Lens (Version 1.) Hynek Bakstein and Tomáš Pajdla bakstein@cmp.felk.cvut.cz, pajdla@cmp.felk.cvut.cz

More information

Real-time Generation and Presentation of View-dependent Binocular Stereo Images Using a Sequence of Omnidirectional Images

Real-time Generation and Presentation of View-dependent Binocular Stereo Images Using a Sequence of Omnidirectional Images Real-time Generation and Presentation of View-dependent Binocular Stereo Images Using a Sequence of Omnidirectional Images Abstract This paper presents a new method to generate and present arbitrarily

More information

Epipolar Geometry in Stereo, Motion and Object Recognition

Epipolar Geometry in Stereo, Motion and Object Recognition Epipolar Geometry in Stereo, Motion and Object Recognition A Unified Approach by GangXu Department of Computer Science, Ritsumeikan University, Kusatsu, Japan and Zhengyou Zhang INRIA Sophia-Antipolis,

More information

A New Method and Toolbox for Easily Calibrating Omnidirectional Cameras

A New Method and Toolbox for Easily Calibrating Omnidirectional Cameras A ew Method and Toolbox for Easily Calibrating Omnidirectional Cameras Davide Scaramuzza 1 and Roland Siegwart 1 1 Swiss Federal Institute of Technology Zurich (ETHZ) Autonomous Systems Lab, CLA-E, Tannenstrasse

More information

Flexible Calibration of a Portable Structured Light System through Surface Plane

Flexible Calibration of a Portable Structured Light System through Surface Plane Vol. 34, No. 11 ACTA AUTOMATICA SINICA November, 2008 Flexible Calibration of a Portable Structured Light System through Surface Plane GAO Wei 1 WANG Liang 1 HU Zhan-Yi 1 Abstract For a portable structured

More information

Calibration of a fish eye lens with field of view larger than 180

Calibration of a fish eye lens with field of view larger than 180 CENTER FOR MACHINE PERCEPTION CZECH TECHNICAL UNIVERSITY Calibration of a fish eye lens with field of view larger than 18 Hynek Bakstein and Tomáš Pajdla {bakstein, pajdla}@cmp.felk.cvut.cz REPRINT Hynek

More information

1 (5 max) 2 (10 max) 3 (20 max) 4 (30 max) 5 (10 max) 6 (15 extra max) total (75 max + 15 extra)

1 (5 max) 2 (10 max) 3 (20 max) 4 (30 max) 5 (10 max) 6 (15 extra max) total (75 max + 15 extra) Mierm Exam CS223b Stanford CS223b Computer Vision, Winter 2004 Feb. 18, 2004 Full Name: Email: This exam has 7 pages. Make sure your exam is not missing any sheets, and write your name on every page. The

More information

Range Sensors (time of flight) (1)

Range Sensors (time of flight) (1) Range Sensors (time of flight) (1) Large range distance measurement -> called range sensors Range information: key element for localization and environment modeling Ultrasonic sensors, infra-red sensors

More information

Behavior Learning for a Mobile Robot with Omnidirectional Vision Enhanced by an Active Zoom Mechanism

Behavior Learning for a Mobile Robot with Omnidirectional Vision Enhanced by an Active Zoom Mechanism Behavior Learning for a Mobile Robot with Omnidirectional Vision Enhanced by an Active Zoom Mechanism Sho ji Suzuki, Tatsunori Kato, Minoru Asada, and Koh Hosoda Dept. of Adaptive Machine Systems, Graduate

More information

Towards Generic Self-Calibration of Central Cameras

Towards Generic Self-Calibration of Central Cameras Towards Generic Self-Calibration of Central Cameras Srikumar Ramalingam 1&2, Peter Sturm 1, and Suresh K. Lodha 2 1 INRIA Rhône-Alpes, GRAVIR-CNRS, 38330 Montbonnot, France 2 Dept. of Computer Science,

More information

Using RANSAC for Omnidirectional Camera Model Fitting

Using RANSAC for Omnidirectional Camera Model Fitting CENTER FOR MACHINE PERCEPTION CZECH TECHNICAL UNIVERSITY Using RANSAC for Omnidirectional Camera Model Fitting Branislav Mičušík and Tomáš Pajdla {micusb,pajdla}@cmp.felk.cvut.cz REPRINT Branislav Mičušík

More information

Two-View Geometry of Omnidirectional Cameras

Two-View Geometry of Omnidirectional Cameras CENTER FOR MACHINE PERCEPTION CZECH TECHNICAL UNIVERSITY Two-View Geometry of Omnidirectional Cameras PhD Thesis Branislav Mičušík micusb1@cmp.felk.cvut.cz CTU CMP 2004 07 June 21, 2004 Available at ftp://cmp.felk.cvut.cz/pub/cmp/articles/micusik/micusik-thesis-reprint.pdf

More information

Rectification and Distortion Correction

Rectification and Distortion Correction Rectification and Distortion Correction Hagen Spies March 12, 2003 Computer Vision Laboratory Department of Electrical Engineering Linköping University, Sweden Contents Distortion Correction Rectification

More information

Dept. of Adaptive Machine Systems, Graduate School of Engineering Osaka University, Suita, Osaka , Japan

Dept. of Adaptive Machine Systems, Graduate School of Engineering Osaka University, Suita, Osaka , Japan An Application of Vision-Based Learning for a Real Robot in RoboCup - A Goal Keeping Behavior for a Robot with an Omnidirectional Vision and an Embedded Servoing - Sho ji Suzuki 1, Tatsunori Kato 1, Hiroshi

More information

Laser sensors. Transmitter. Receiver. Basilio Bona ROBOTICA 03CFIOR

Laser sensors. Transmitter. Receiver. Basilio Bona ROBOTICA 03CFIOR Mobile & Service Robotics Sensors for Robotics 3 Laser sensors Rays are transmitted and received coaxially The target is illuminated by collimated rays The receiver measures the time of flight (back and

More information

LIGHT STRIPE PROJECTION-BASED PEDESTRIAN DETECTION DURING AUTOMATIC PARKING OPERATION

LIGHT STRIPE PROJECTION-BASED PEDESTRIAN DETECTION DURING AUTOMATIC PARKING OPERATION F2008-08-099 LIGHT STRIPE PROJECTION-BASED PEDESTRIAN DETECTION DURING AUTOMATIC PARKING OPERATION 1 Jung, Ho Gi*, 1 Kim, Dong Suk, 1 Kang, Hyoung Jin, 2 Kim, Jaihie 1 MANDO Corporation, Republic of Korea,

More information

Analytical Forward Projection for Axial Non-Central Dioptric & Catadioptric Cameras

Analytical Forward Projection for Axial Non-Central Dioptric & Catadioptric Cameras Analytical Forward Projection for Axial Non-Central Dioptric & Catadioptric Cameras Amit Agrawal, Yuichi Taguchi, and Srikumar Ramalingam Mitsubishi Electric Research Labs (MERL), Cambridge, MA, USA Abstract.

More information

A dioptric stereo system for robust real-time people tracking

A dioptric stereo system for robust real-time people tracking Proceedings of the IEEE ICRA 2009 Workshop on People Detection and Tracking Kobe, Japan, May 2009 A dioptric stereo system for robust real-time people tracking Ester Martínez and Angel P. del Pobil Robotic

More information

Multibody Motion Estimation and Segmentation from Multiple Central Panoramic Views

Multibody Motion Estimation and Segmentation from Multiple Central Panoramic Views Multibod Motion Estimation and Segmentation from Multiple Central Panoramic Views Omid Shakernia René Vidal Shankar Sastr Department of Electrical Engineering & Computer Sciences Universit of California

More information

Visual Hulls from Single Uncalibrated Snapshots Using Two Planar Mirrors

Visual Hulls from Single Uncalibrated Snapshots Using Two Planar Mirrors Visual Hulls from Single Uncalibrated Snapshots Using Two Planar Mirrors Keith Forbes 1 Anthon Voigt 2 Ndimi Bodika 2 1 Digital Image Processing Group 2 Automation and Informatics Group Department of Electrical

More information

DEPTH ESTIMATION USING STEREO FISH-EYE LENSES

DEPTH ESTIMATION USING STEREO FISH-EYE LENSES DEPTH ESTMATON USNG STEREO FSH-EYE LENSES Shishir Shah and J. K. Aggamal Computer and Vision Research Center Department of Electrical and Computer Engineering, ENS 520 The University of Texas At Austin

More information

Region matching for omnidirectional images using virtual camera planes

Region matching for omnidirectional images using virtual camera planes Computer Vision Winter Workshop 2006, Ondřej Chum, Vojtěch Franc (eds.) Telč, Czech Republic, February 6 8 Czech Pattern Recognition Society Region matching for omnidirectional images using virtual camera

More information

Introduction to Computer Vision. Introduction CMPSCI 591A/691A CMPSCI 570/670. Image Formation

Introduction to Computer Vision. Introduction CMPSCI 591A/691A CMPSCI 570/670. Image Formation Introduction CMPSCI 591A/691A CMPSCI 570/670 Image Formation Lecture Outline Light and Optics Pinhole camera model Perspective projection Thin lens model Fundamental equation Distortion: spherical & chromatic

More information

Panoramic 3D Reconstruction Using Rotational Stereo Camera with Simple Epipolar Constraints

Panoramic 3D Reconstruction Using Rotational Stereo Camera with Simple Epipolar Constraints Panoramic 3D Reconstruction Using Rotational Stereo Camera with Simple Epipolar Constraints Wei Jiang Japan Science and Technology Agency 4-1-8, Honcho, Kawaguchi-shi, Saitama, Japan jiang@anken.go.jp

More information

Mobile robot localization using laser range scanner and omnicamera

Mobile robot localization using laser range scanner and omnicamera Mobile robot localization using laser range scanner and omnicamera Mariusz Olszewski Barbara Siemiatkowska, * Rafal Chojecki Piotr Marcinkiewicz Piotr Trojanek 2 Marek Majchrowski 2 * Institute of Fundamental

More information

On the Calibration of Non Single Viewpoint Catadioptric Sensors

On the Calibration of Non Single Viewpoint Catadioptric Sensors On the Calibration of Non Single Viewpoint Catadioptric Sensors Alberto Colombo 1, Matteo Matteucci 2, and Domenico G. Sorrenti 1 1 Università degli Studi di Milano Bicocca, Dipartimento di Informatica,

More information

Uncalibrated Video Compass for Mobile Robots from Paracatadioptric Line Images

Uncalibrated Video Compass for Mobile Robots from Paracatadioptric Line Images Uncalibrated Video Compass for Mobile Robots from Paracatadioptric Line Images Gian Luca Mariottini and Domenico Prattichizzo Dipartimento di Ingegneria dell Informazione Università di Siena Via Roma 56,

More information

1 Introduction. 2 Real-time Omnidirectional Stereo

1 Introduction. 2 Real-time Omnidirectional Stereo J. of Robotics and Mechatronics, Vol. 14, 00 (to appear) Recognizing Moving Obstacles for Robot Navigation using Real-time Omnidirectional Stereo Vision Hiroshi Koyasu, Jun Miura, and Yoshiaki Shirai Dept.

More information

A Survey of Light Source Detection Methods

A Survey of Light Source Detection Methods A Survey of Light Source Detection Methods Nathan Funk University of Alberta Mini-Project for CMPUT 603 November 30, 2003 Abstract This paper provides an overview of the most prominent techniques for light

More information

UNIFYING IMAGE PLANE LIFTINGS FOR CENTRAL CATADIOPTRIC AND DIOPTRIC CAMERAS

UNIFYING IMAGE PLANE LIFTINGS FOR CENTRAL CATADIOPTRIC AND DIOPTRIC CAMERAS UNIFYING IMAGE PLANE LIFTINGS FOR CENTRAL CATADIOPTRIC AND DIOPTRIC CAMERAS Jo~ao P. Barreto Dept. of Electrical and Computer Engineering University of Coimbra, Portugal jpbar@deec.uc.pt Abstract Keywords:

More information

A Factorization Based Self-Calibration for Radially Symmetric Cameras

A Factorization Based Self-Calibration for Radially Symmetric Cameras A Factorization Based Self-Calibration for Radially Symmetric Cameras Srikumar Ramalingam, Peter Sturm, Edmond Boyer To cite this version: Srikumar Ramalingam, Peter Sturm, Edmond Boyer. A Factorization

More information

EE795: Computer Vision and Intelligent Systems

EE795: Computer Vision and Intelligent Systems EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 FDH 204 Lecture 14 130307 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Review Stereo Dense Motion Estimation Translational

More information

10/5/09 1. d = 2. Range Sensors (time of flight) (2) Ultrasonic Sensor (time of flight, sound) (1) Ultrasonic Sensor (time of flight, sound) (2) 4.1.

10/5/09 1. d = 2. Range Sensors (time of flight) (2) Ultrasonic Sensor (time of flight, sound) (1) Ultrasonic Sensor (time of flight, sound) (2) 4.1. Range Sensors (time of flight) (1) Range Sensors (time of flight) (2) arge range distance measurement -> called range sensors Range information: key element for localization and environment modeling Ultrasonic

More information

Radial Multi-focal Tensors

Radial Multi-focal Tensors International Journal of Computer Vision manuscript No. (will be inserted by the editor) Radial Multi-focal Tensors Applications to Omnidirectional camera calibration SriRam Thirthala Marc Pollefeys Received:

More information

Stereo Vision. MAN-522 Computer Vision

Stereo Vision. MAN-522 Computer Vision Stereo Vision MAN-522 Computer Vision What is the goal of stereo vision? The recovery of the 3D structure of a scene using two or more images of the 3D scene, each acquired from a different viewpoint in

More information

Computer Vision. Coordinates. Prof. Flávio Cardeal DECOM / CEFET- MG.

Computer Vision. Coordinates. Prof. Flávio Cardeal DECOM / CEFET- MG. Computer Vision Coordinates Prof. Flávio Cardeal DECOM / CEFET- MG cardeal@decom.cefetmg.br Abstract This lecture discusses world coordinates and homogeneous coordinates, as well as provides an overview

More information

Midterm Exam Solutions

Midterm Exam Solutions Midterm Exam Solutions Computer Vision (J. Košecká) October 27, 2009 HONOR SYSTEM: This examination is strictly individual. You are not allowed to talk, discuss, exchange solutions, etc., with other fellow

More information

Generic Self-Calibration of Central Cameras

Generic Self-Calibration of Central Cameras MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.merl.com Generic Self-Calibration of Central Cameras Srikumar Ramalingam TR2009-078 December 2009 Abstract We consider the self-calibration problem

More information

Hand-Eye Calibration from Image Derivatives

Hand-Eye Calibration from Image Derivatives Hand-Eye Calibration from Image Derivatives Abstract In this paper it is shown how to perform hand-eye calibration using only the normal flow field and knowledge about the motion of the hand. The proposed

More information

Camera Calibration for Video See-Through Head-Mounted Display. Abstract. 1.0 Introduction. Mike Bajura July 7, 1993

Camera Calibration for Video See-Through Head-Mounted Display. Abstract. 1.0 Introduction. Mike Bajura July 7, 1993 Camera Calibration for Video See-Through Head-Mounted Display Mike Bajura July 7, 1993 Abstract This report describes a method for computing the parameters needed to model a television camera for video

More information

Computer Vision Lecture 17

Computer Vision Lecture 17 Computer Vision Lecture 17 Epipolar Geometry & Stereo Basics 13.01.2015 Bastian Leibe RWTH Aachen http://www.vision.rwth-aachen.de leibe@vision.rwth-aachen.de Announcements Seminar in the summer semester

More information

arxiv: v1 [cs.cv] 28 Sep 2018

arxiv: v1 [cs.cv] 28 Sep 2018 Camera Pose Estimation from Sequence of Calibrated Images arxiv:1809.11066v1 [cs.cv] 28 Sep 2018 Jacek Komorowski 1 and Przemyslaw Rokita 2 1 Maria Curie-Sklodowska University, Institute of Computer Science,

More information

Computer Vision Lecture 17

Computer Vision Lecture 17 Announcements Computer Vision Lecture 17 Epipolar Geometry & Stereo Basics Seminar in the summer semester Current Topics in Computer Vision and Machine Learning Block seminar, presentations in 1 st week

More information

Depth. Common Classification Tasks. Example: AlexNet. Another Example: Inception. Another Example: Inception. Depth

Depth. Common Classification Tasks. Example: AlexNet. Another Example: Inception. Another Example: Inception. Depth Common Classification Tasks Recognition of individual objects/faces Analyze object-specific features (e.g., key points) Train with images from different viewing angles Recognition of object classes Analyze

More information

Machine vision. Summary # 11: Stereo vision and epipolar geometry. u l = λx. v l = λy

Machine vision. Summary # 11: Stereo vision and epipolar geometry. u l = λx. v l = λy 1 Machine vision Summary # 11: Stereo vision and epipolar geometry STEREO VISION The goal of stereo vision is to use two cameras to capture 3D scenes. There are two important problems in stereo vision:

More information

Complex Sensors: Cameras, Visual Sensing. The Robotics Primer (Ch. 9) ECE 497: Introduction to Mobile Robotics -Visual Sensors

Complex Sensors: Cameras, Visual Sensing. The Robotics Primer (Ch. 9) ECE 497: Introduction to Mobile Robotics -Visual Sensors Complex Sensors: Cameras, Visual Sensing The Robotics Primer (Ch. 9) Bring your laptop and robot everyday DO NOT unplug the network cables from the desktop computers or the walls Tuesday s Quiz is on Visual

More information

Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision

Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision What Happened Last Time? Human 3D perception (3D cinema) Computational stereo Intuitive explanation of what is meant by disparity Stereo matching

More information

Announcements. Motion. Structure-from-Motion (SFM) Motion. Discrete Motion: Some Counting

Announcements. Motion. Structure-from-Motion (SFM) Motion. Discrete Motion: Some Counting Announcements Motion Introduction to Computer Vision CSE 152 Lecture 20 HW 4 due Friday at Midnight Final Exam: Tuesday, 6/12 at 8:00AM-11:00AM, regular classroom Extra Office Hours: Monday 6/11 9:00AM-10:00AM

More information

Camera Calibration. Schedule. Jesus J Caban. Note: You have until next Monday to let me know. ! Today:! Camera calibration

Camera Calibration. Schedule. Jesus J Caban. Note: You have until next Monday to let me know. ! Today:! Camera calibration Camera Calibration Jesus J Caban Schedule! Today:! Camera calibration! Wednesday:! Lecture: Motion & Optical Flow! Monday:! Lecture: Medical Imaging! Final presentations:! Nov 29 th : W. Griffin! Dec 1

More information

Degeneracy of the Linear Seventeen-Point Algorithm for Generalized Essential Matrix

Degeneracy of the Linear Seventeen-Point Algorithm for Generalized Essential Matrix J Math Imaging Vis 00 37: 40-48 DOI 0007/s085-00-09-9 Authors s version The final publication is available at wwwspringerlinkcom Degeneracy of the Linear Seventeen-Point Algorithm for Generalized Essential

More information

Estimation of Camera Motion with Feature Flow Model for 3D Environment Modeling by Using Omni-Directional Camera

Estimation of Camera Motion with Feature Flow Model for 3D Environment Modeling by Using Omni-Directional Camera Estimation of Camera Motion with Feature Flow Model for 3D Environment Modeling by Using Omni-Directional Camera Ryosuke Kawanishi, Atsushi Yamashita and Toru Kaneko Abstract Map information is important

More information

Understanding Variability

Understanding Variability Understanding Variability Why so different? Light and Optics Pinhole camera model Perspective projection Thin lens model Fundamental equation Distortion: spherical & chromatic aberration, radial distortion

More information

Multiple View Geometry

Multiple View Geometry Multiple View Geometry CS 6320, Spring 2013 Guest Lecture Marcel Prastawa adapted from Pollefeys, Shah, and Zisserman Single view computer vision Projective actions of cameras Camera callibration Photometric

More information

Surround Structured Lighting for Full Object Scanning

Surround Structured Lighting for Full Object Scanning Surround Structured Lighting for Full Object Scanning Douglas Lanman, Daniel Crispell, and Gabriel Taubin Brown University, Dept. of Engineering August 21, 2007 1 Outline Introduction and Related Work

More information

EXAM SOLUTIONS. Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006,

EXAM SOLUTIONS. Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006, School of Computer Science and Communication, KTH Danica Kragic EXAM SOLUTIONS Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006, 14.00 19.00 Grade table 0-25 U 26-35 3 36-45

More information

Fusing Optical Flow and Stereo in a Spherical Depth Panorama Using a Single-Camera Folded Catadioptric Rig

Fusing Optical Flow and Stereo in a Spherical Depth Panorama Using a Single-Camera Folded Catadioptric Rig Fusing Optical Flow and Stereo in a Spherical Depth Panorama Using a Single-Camera Folded Catadioptric Rig Igor Labutov, Carlos Jaramillo and Jizhong Xiao, Senior Member, IEEE Abstract We present a novel

More information

Conformal Rectification of Omnidirectional Stereo Pairs

Conformal Rectification of Omnidirectional Stereo Pairs Conformal Rectification of Omnidirectional Stereo Pairs Christopher Geyer Kostas Daniilidis Department of EECS, UC Berkeley GRASP Laboratory, U. of Pennsylvania Berkeley, CA 94720 Philadelphia, PA 19104

More information

Light source estimation using feature points from specular highlights and cast shadows

Light source estimation using feature points from specular highlights and cast shadows Vol. 11(13), pp. 168-177, 16 July, 2016 DOI: 10.5897/IJPS2015.4274 Article Number: F492B6D59616 ISSN 1992-1950 Copyright 2016 Author(s) retain the copyright of this article http://www.academicjournals.org/ijps

More information

Structure from Motion. Prof. Marco Marcon

Structure from Motion. Prof. Marco Marcon Structure from Motion Prof. Marco Marcon Summing-up 2 Stereo is the most powerful clue for determining the structure of a scene Another important clue is the relative motion between the scene and (mono)

More information

Chapter 7: Geometrical Optics. The branch of physics which studies the properties of light using the ray model of light.

Chapter 7: Geometrical Optics. The branch of physics which studies the properties of light using the ray model of light. Chapter 7: Geometrical Optics The branch of physics which studies the properties of light using the ray model of light. Overview Geometrical Optics Spherical Mirror Refraction Thin Lens f u v r and f 2

More information

Caustics of Catadioptric Cameras *

Caustics of Catadioptric Cameras * Caustics of Catadioptric Cameras * Rahul Swaminathan, Michael D. Grossberg and Shree K. Nayar Department of Computer Science, Columbia University New York, New York 10027 Email: { srahul, mdog, nayar}

More information

Free-Form Mirror Design Inspired by Photometric Stereo

Free-Form Mirror Design Inspired by Photometric Stereo Free-Form Mirror Design Inspired by Photometric Stereo Kazuaki Kondo, Yasuhiro Mukaigawa, Yasushi Yagi To cite this version: Kazuaki Kondo, Yasuhiro Mukaigawa, Yasushi Yagi. Free-Form Mirror Design Inspired

More information

Stereo Observation Models

Stereo Observation Models Stereo Observation Models Gabe Sibley June 16, 2003 Abstract This technical report describes general stereo vision triangulation and linearized error modeling. 0.1 Standard Model Equations If the relative

More information

Multi-view stereo. Many slides adapted from S. Seitz

Multi-view stereo. Many slides adapted from S. Seitz Multi-view stereo Many slides adapted from S. Seitz Beyond two-view stereo The third eye can be used for verification Multiple-baseline stereo Pick a reference image, and slide the corresponding window

More information

HOUGH TRANSFORM CS 6350 C V

HOUGH TRANSFORM CS 6350 C V HOUGH TRANSFORM CS 6350 C V HOUGH TRANSFORM The problem: Given a set of points in 2-D, find if a sub-set of these points, fall on a LINE. Hough Transform One powerful global method for detecting edges

More information

SPECIAL TECHNIQUES-II

SPECIAL TECHNIQUES-II SPECIAL TECHNIQUES-II Lecture 19: Electromagnetic Theory Professor D. K. Ghosh, Physics Department, I.I.T., Bombay Method of Images for a spherical conductor Example :A dipole near aconducting sphere The

More information

Two-view geometry Computer Vision Spring 2018, Lecture 10

Two-view geometry Computer Vision Spring 2018, Lecture 10 Two-view geometry http://www.cs.cmu.edu/~16385/ 16-385 Computer Vision Spring 2018, Lecture 10 Course announcements Homework 2 is due on February 23 rd. - Any questions about the homework? - How many of

More information

Announcements. Stereo

Announcements. Stereo Announcements Stereo Homework 2 is due today, 11:59 PM Homework 3 will be assigned today Reading: Chapter 7: Stereopsis CSE 152 Lecture 8 Binocular Stereopsis: Mars Given two images of a scene where relative

More information

Minimal Solutions for Generic Imaging Models

Minimal Solutions for Generic Imaging Models Minimal Solutions for Generic Imaging Models Srikumar Ramalingam Peter Sturm Oxford Brookes University, UK INRIA Grenoble Rhône-Alpes, France Abstract A generic imaging model refers to a non-parametric

More information

Object and Motion Recognition using Plane Plus Parallax Displacement of Conics

Object and Motion Recognition using Plane Plus Parallax Displacement of Conics Object and Motion Recognition using Plane Plus Parallax Displacement of Conics Douglas R. Heisterkamp University of South Alabama Mobile, AL 6688-0002, USA dheister@jaguar1.usouthal.edu Prabir Bhattacharya

More information