Projection Model, 3D Reconstruction and Rigid Motion Estimation from Non-central Catadioptric Images
Nuno Gonçalves and Helder Araújo
Institute of Systems and Robotics - Coimbra
University of Coimbra - Polo II - Pinhal de Marrocos
3030 COIMBRA - PORTUGAL
nunogon,helderg@isr.uc.pt

Abstract

This paper addresses the problem of rigid motion estimation and 3D reconstruction in vision systems where it is possible to recover the incident light ray direction from the image points. Such systems include pinhole cameras and catadioptric cameras. Given two images of the same scene acquired from two different positions, the transformation is estimated by means of an iterative process. The estimation process aims at having corresponding incident rays intersecting at the same 3D point. Geometrical relationships are derived to support the estimation method. Furthermore, this paper also addresses the problem of the mapping from 3D points to image points for non-central catadioptric cameras with mirror surfaces given by quadrics. The projection model presented can be expressed as a non-linear equation in only one variable, making it more stable and easier to solve than the classical Snell's law formulation. Experiments with real images are presented, using simulated annealing as the estimation method.

1. Introduction

3D reconstruction from images has been extensively studied in the past and several methods exist providing results with good accuracy (e.g. [10, 25]). In the case of catadioptric images, 3D reconstruction is more complex. In this paper we study the problem of reconstruction in the case of non-central catadioptric systems. The pinhole camera model that is widely used in computer vision applications is still the most important camera model. However, there are several new camera models and designs that do not comply with central projection. These new camera designs have increasing interest and applicability. In this context the generalized camera model is important because it models these new designs.
Our interest is in developing a model, as general as possible, that can be applied to cameras with curved mirrors. In the case of central catadioptric systems the epipolar geometry has already been derived and studied. Geyer and Daniilidis [5] have studied these issues, as have Svoboda and Pajdla [23]. In this case, it is possible to use most of the results obtained for 3D reconstruction with pinhole cameras - epipolar geometry and bundle adjustment, for example. Several algorithms have been proposed and implemented [1, 6]. For non-central catadioptric vision systems there is a viewpoint surface instead of a single viewpoint, which calls for solutions to the correspondence and reconstruction problems other than the epipolar geometry. The study of the viewpoint surfaces, called caustics, has been performed by Nayar et al. [24], and the geometrical properties of generalized catadioptric systems have also been studied [9, 16, 17]. In [8] the mirror surface is recovered for a catadioptric system with quadric mirrors. In the field of structure from motion several works exist [7, 19], using two or more cameras or using one moving camera. New camera designs have also been proposed to optimize the extraction of specific kinds of information from images [15, 21]. The combination of several cameras, or of a camera with several mirrors, has also been studied in order to build non-central generalized vision systems [7, 18]. The projection model for non-central cameras (that use mirrors), mapping 3D world points into the image plane, can be derived using only the specular reflection law - Snell's law [2] - so that the image point is a function of the 3D world point, the mirror surface, and the location and orientation of the optical axis of the camera. However, when projecting a 3D world point, the main effort is finding the reflection point on the mirror surface. Furthermore, the projection is highly non-linear in several variables, resulting in a difficult problem to solve. This issue is important to image
formation, computer graphics applications, and simulation purposes. In this paper we address the problem of 3D reconstruction as well as the estimation of rigid motion. This problem has been addressed, for example, by using bundle adjustment. However, this paper makes no central projection assumption and thus there is no closed-form or implicit projection function that could be used in the bundle model. Therefore, our aim is to study 3D reconstruction and motion estimation for a catadioptric vision system, assuming a generalized camera as in [9]. We show that this framework can be extended to any catadioptric vision system. As an example of the general nature of the method we also address the case of a pinhole camera. We also address the problem of 3D-to-image mapping in non-central catadioptric cameras using quadric mirror surfaces. The method presented derives restrictions on the coordinates of the reflection point, and the solution is obtained by solving a non-linear equation in only one parameter.

Figure 1. Reflection through a specular mirror.

2. Problem statement

The reflection on a generic curved mirror (see figure 1) is in accordance with the well known Snell's law [2]. The reflection law is given by equation 1, where $\vec{V}_i$ is the incident light ray, $\vec{V}_r$ is the reflected light ray and $\vec{N}$ is the normal vector to the mirror surface:

$$\vec{V}_i = \vec{V}_r - 2 \langle \vec{V}_r, \vec{N} \rangle \vec{N} \quad (1)$$

The two problems that have to be addressed are (1) how to project a 3D world point into the image plane and (2) given an image point, how to recover the incident direction (the direction of $\vec{V}_i$) - back projection (notice that the reflected ray $\vec{V}_r$ can be computed from the image as long as the intrinsic parameters are known). The problem of recovering the actual 3D point that was projected is even harder.
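Equation 1 can be checked numerically. The sketch below (Python with NumPy; not part of the original paper, with an assumed planar mirror normal) recovers the incident ray from a reflected ray and a unit surface normal:

```python
import numpy as np

def incident_from_reflected(v_r, n):
    """Recover the incident ray V_i from the reflected ray V_r and the
    surface normal N, following equation 1: V_i = V_r - 2 <V_r, N> N."""
    n = n / np.linalg.norm(n)            # ensure the normal is unit length
    return v_r - 2.0 * np.dot(v_r, n) * n

# A ray reflected off the plane z = 0: the z component flips sign.
v_r = np.array([1.0, 0.0, 1.0])
n = np.array([0.0, 0.0, 1.0])
v_i = incident_from_reflected(v_r, n)    # -> [1., 0., -1.]
```

Note that the reflection preserves the ray's length, as a mirror should.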
The solution for the first problem stated - the projection model - is actually equivalent to solving the problem of estimating the reflection point $R$. However, even if the analytical expression of the specular surface is known, it is difficult to estimate the coordinates of the reflection point $R$ due to the non-linearity of the estimation of the surface normal. To back-project a light ray, let us consider an image point. The back-projection solution is the incident light ray direction $\vec{V}_i$. Since we assume that the coordinates of the camera optical center are known, as well as the image point (the intrinsic parameters are assumed to be known), the reflected ray $\vec{V}_r$ can be computed easily. This problem is easier to solve if the mirror surface is known, since in that case intersecting the surface with the reflected ray is straightforward. Next, the normal vector at the reflection point is estimated and equation 1 can be used to estimate $\vec{V}_i$. However, if the mirror surface is not known, the solution can only be approximated. In [8] an algorithm to solve non-linear equations has been used to estimate not only the reflection point $R$ but also the coefficients of the mirror analytical expression. The algorithm considered the mirror to be a ruled quadric and it assumed the knowledge of the coordinates of 3D points. The reference frame was located at the center of the quadric. As we shall see in the next section, when a general quadric is used and there is no assumption on the origin of the coordinate system, the parameter estimates are very difficult to obtain with good accuracy, since the non-linear equation to solve has several unknowns. The 3D reconstruction of a scene point from the incident direction can only be performed using additional information. In the next section we propose an analytical method to project scene points to image points in a catadioptric system composed of a perspective camera and a quadric surface mirror. We assume knowledge of the quadric and of the 3D coordinates of the scene point.
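When the mirror surface is known, back projection is indeed straightforward. The sketch below (Python/NumPy; an assumed unit-sphere mirror centered at the origin, not one of the paper's calibrated mirrors) intersects the reflected ray with the surface, takes the surface normal there, and applies equation 1:

```python
import numpy as np

def back_project(c, v_r, radius=1.0):
    """Back projection for a known spherical mirror centered at the origin:
    intersect the reflected ray c + s*v_r with the sphere, take the nearest
    hit, and apply equation 1 to recover the incident direction."""
    v_r = v_r / np.linalg.norm(v_r)
    # |c + s*v_r|^2 = radius^2 is quadratic in s; keep the nearest root.
    b = 2.0 * np.dot(c, v_r)
    disc = b * b - 4.0 * (np.dot(c, c) - radius ** 2)
    s = (-b - np.sqrt(disc)) / 2.0
    R = c + s * v_r                      # reflection point on the mirror
    n = R / np.linalg.norm(R)            # sphere normal at R
    v_i = v_r - 2.0 * np.dot(v_r, n) * n # equation 1
    return R, v_i

c = np.array([0.0, 0.0, 3.0])            # assumed camera center outside the mirror
R, v_i = back_project(c, np.array([0.0, 0.0, -1.0]))
# The axial ray hits the sphere at (0, 0, 1) and reflects straight back.
```

For a general quadric the normal would come from the gradient of the implicit surface rather than the radial direction used here.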
In section 4 we propose a method to recover the 3D coordinates of a scene point from a pair of images, as well as the camera rigid motion. All points are represented by their $4 \times 1$ vector of homogeneous coordinates $X = \begin{bmatrix} x_1 & x_2 & x_3 & x_4 \end{bmatrix}^T$, such that the corresponding cartesian point is $X_c = \begin{bmatrix} x_1/x_4 & x_2/x_4 & x_3/x_4 \end{bmatrix}^T$. Quadrics are represented by a symmetric $4 \times 4$ matrix $Q$, and any point $X$ is on the quadric surface if and only if $X^T Q X = 0$.
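This notation is easy to exercise in code. A minimal sketch (Python/NumPy; the unit sphere is an assumed example quadric):

```python
import numpy as np

def to_cartesian(X):
    """Convert a 4-vector homogeneous point to its 3-vector cartesian form."""
    return X[:3] / X[3]

def on_quadric(X, Q, tol=1e-9):
    """A point X lies on the quadric Q iff X^T Q X = 0."""
    return abs(X @ Q @ X) < tol

# Unit sphere x^2 + y^2 + z^2 - 1 = 0 as a symmetric 4x4 matrix.
Q_sphere = np.diag([1.0, 1.0, 1.0, -1.0])

X = np.array([0.0, 0.0, 2.0, 2.0])   # homogeneous representation of (0, 0, 1)
print(to_cartesian(X), on_quadric(X, Q_sphere))   # [0. 0. 1.] True
```

Note that $X^T Q X = 0$ is scale invariant, so any homogeneous representative of the point gives the same answer.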
3. Projection model

In this section we present a projection model that can be applied to non-central catadioptric cameras composed of a quadric surface mirror and a perspective projection (pinhole) camera. The camera intrinsic parameters, the quadric, and the pose of the camera relative to the mirror are assumed to be known.

3.1. Some geometrical properties

Planes are defined by three points $A$, $B$ and $C$ (generating points). Consider a plane $\Pi$ and define an auxiliary matrix $W = \begin{bmatrix} X & A & B & C \end{bmatrix}$ with those three points and a generic point $X$. It can easily be shown that the coordinate vector of the plane can be expressed as a linear function of one of its generating points, as given by equation 2, since the determinant of matrix $W$ must be zero:

$$\Pi = M C \quad (2)$$

where

$$M = \begin{bmatrix} 0 & a_3 b_4 - a_4 b_3 & -(a_2 b_4 - a_4 b_2) & a_2 b_3 - a_3 b_2 \\ -(a_3 b_4 - a_4 b_3) & 0 & a_1 b_4 - a_4 b_1 & -(a_1 b_3 - a_3 b_1) \\ a_2 b_4 - a_4 b_2 & -(a_1 b_4 - a_4 b_1) & 0 & a_1 b_2 - a_2 b_1 \\ -(a_2 b_3 - a_3 b_2) & a_1 b_3 - a_3 b_1 & -(a_1 b_2 - a_2 b_1) & 0 \end{bmatrix} \quad (3)$$

Another relevant property concerns the angle between two planes. Although the sign of the angle cannot be defined in the projective space $P^3$, as pointed out by Stolfi [22], the cosine of the angle is well defined and is given by equation 4, where $Q^*_\infty$ is the absolute dual quadric:

$$\cos \theta = \frac{\Pi_A^T Q^*_\infty \Pi_B}{\sqrt{(\Pi_A^T Q^*_\infty \Pi_A)(\Pi_B^T Q^*_\infty \Pi_B)}} \quad (4)$$

3.2. Restrictions imposed by specular reflection

The solution of the problem is the point $R$: the reflection point on the mirror surface that projects the 3D point $P$ into the image plane. For such a point the following restrictions must be imposed:

1. $R^T Q R = 0$ - the point is on the quadric of the mirror surface.
2. $R^T S R = 0$ - the point is on the quadric given by $S = M^T Q^*_\infty Q$ (proposition 1).
Proposition 1. The reflection point $R$ of a catadioptric camera with quadric mirror $Q$ is on the quadric $S = M^T Q^*_\infty Q$, where $Q^*_\infty$ is the absolute dual quadric and the $4 \times 4$ matrix $M$ and the plane $\Pi_B$ are defined by the 3D world point $P$, the camera optical center $C$ and the reflection point $R$, such that $\Pi_B = M R$.

Proof: Let us consider two concurrent planes, $\Pi_A$ and $\Pi_B$. $\Pi_A$ is the tangent plane to the quadric $Q$ at the reflection point $R$; it is given by $\Pi_A = Q R$. $\Pi_B$ is the plane defined by three points: the camera optical center $C$, the 3D point $P$ and the reflection point $R$ on the mirror surface. Using equation 2, its coordinate vector can be expressed as a linear equation in the reflection point $R$: $\Pi_B = M(P, C) R = M R$. Since the normal to the quadric is perpendicular to the tangent plane and must lie on the plane defined by the three points $C$, $P$ and $R$, the two planes $\Pi_A$ and $\Pi_B$ must be perpendicular. The angle between two planes is given by equation 4. Setting $\theta = \pi/2$ and substituting the expressions for $\Pi_A$ and $\Pi_B$ into equation 4 yields equation 5, which states that the point $R$ belongs to the quadric $S = M^T Q^*_\infty Q$:

$$\Pi_A^T Q^*_\infty \Pi_B = 0 \Leftrightarrow R^T Q^T Q^*_\infty M R = 0 \Leftrightarrow R^T M^T Q^*_\infty Q R = 0 \quad (5)$$

Notice that matrix $S$ is not symmetric, unlike a generic quadric. However, without loss of generality, $S$ can be replaced by the matrix with entries $S_{ij} \leftarrow 0.5 S_{ij} + 0.5 S_{ji}$. With this change the quadric remains the same and its representing matrix becomes symmetric. □

3. The incidence and reflection angles are equal. Since the angles of incidence and reflection are equal, the angles between the incident and reflected rays and the tangent line to the quadric at the reflection point are also equal. After some simplifications, one obtains expression 6 for the third restriction (see [22] for details on the angle between two lines).
In this expression, the line $l_{RQ}$ is the tangent line to the quadric that passes through $R$ and lies on the plane defined by $C$, $P$ and $R$:

$$\frac{\mathrm{dir}(l_{RC})^T \, \mathrm{dir}(l_{RQ})}{\sqrt{\mathrm{dir}(l_{RC})^T \, \mathrm{dir}(l_{RC})}} = \frac{\mathrm{dir}(l_{PR})^T \, \mathrm{dir}(l_{RQ})}{\sqrt{\mathrm{dir}(l_{PR})^T \, \mathrm{dir}(l_{PR})}} \quad (6)$$

The lines $l_{RC}$ and $l_{PR}$ are defined by the join of two points [22]. The line $l_{RQ}$ is defined as the intersection of the planes $\Pi_A$ and $\Pi_B$, so its direction $\mathrm{dir}(l_{RQ})$ is computed from the Plücker matrix of equation 7.
$$L^*_{RQ} = \Pi_A \Pi_B^T - \Pi_B \Pi_A^T = Q R (M R)^T - M R (Q R)^T = Q R R^T M^T - M R R^T Q^T \quad (7)$$

3.3. Computing the reflection point R

Given the three restrictions imposed on the reflection point $R$, the problem is now how to find that point. The first and second restrictions are similar in nature, since they restrict the point $R$ to be on the quadric $Q$ (restriction 1) and also on the quadric $S$ (restriction 2). This is the problem of finding the intersection of those two quadrics (a curve in space). Since the third restriction constrains the point so that the incidence and reflection angles are equal, the point $R$ must be located on the intersection curve. The general method for computing an explicit parametric representation of the intersection of two quadrics is due to Joshua Levin [13, 14]. However, the parametric representation produced by this method is hard to compute and is less reliable due to the high number of irrational numbers involved. Hence alternative methods can be used to compute the intersection curve [3, 4]. The parametric curve given by the intersection algorithm is a function of only one parameter, say $\lambda$. Let us represent the parameterized curve by the $4 \times 1$ vector $X(\lambda)$. Although non-linear, the curve can be searched for the point where the incidence and reflection angles are equal. Let $\lambda_0$ be the value of the parameter that solves equation 6. The resulting reflection point is given by $R = X(\lambda_0)$.

This method of finding the reflection point $R$ in a non-central catadioptric vision system presents a major advantage over explicitly using the Euclidean expressions of the mirror (quadric mirrors - conic sections) and of its normal vector: once the quadrics $Q$ and $S$ have been intersected, the solution is given by a non-linear equation in only one parameter.
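As a sanity check on restrictions 1 and 2, the construction can be sketched numerically (Python/NumPy; the unit-sphere mirror and the points $P$, $C$, $R$ are assumptions for illustration, not a solved reflection). The sketch builds $M$ from two points as in equation 3, verifies that $\Pi_B = M R$ passes through $P$, $C$ and $R$, and checks that symmetrizing $S$ leaves the quadratic form unchanged:

```python
import numpy as np

def plane_matrix(A, B):
    """Matrix M of equation 3: for any point C, plane_matrix(A, B) @ C is
    the plane through the homogeneous points A, B and C."""
    M = np.zeros((4, 4))
    minor = lambda i, j: A[i] * B[j] - A[j] * B[i]
    M[0, 1], M[0, 2], M[0, 3] = minor(2, 3), -minor(1, 3), minor(1, 2)
    M[1, 2], M[1, 3] = minor(0, 3), -minor(0, 2)
    M[2, 3] = minor(0, 1)
    return M - M.T                       # M is antisymmetric

P = np.array([2.0, 0.0, 0.0, 1.0])       # assumed 3D scene point
C = np.array([0.0, 0.0, 3.0, 1.0])       # assumed camera optical center
R = np.array([1.0, 0.0, 0.0, 1.0])       # assumed candidate point on the mirror

# Pi_B = M R is the plane through P, C and R.
Pi_B = plane_matrix(P, C) @ R
assert all(abs(Pi_B @ X) < 1e-9 for X in (P, C, R))

# S = M^T Q*_inf Q for a unit-sphere mirror; symmetrizing keeps the quadric.
Q = np.diag([1.0, 1.0, 1.0, -1.0])
Q_inf = np.diag([1.0, 1.0, 1.0, 0.0])     # absolute dual quadric
S = plane_matrix(P, C).T @ Q_inf @ Q
S_sym = 0.5 * (S + S.T)
X = np.array([0.3, -1.2, 0.7, 1.0])
print(np.isclose(X @ S @ X, X @ S_sym @ X))   # True
```

The symmetrization check holds for any $X$, since $X^T S X$ depends only on the symmetric part of $S$.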
Had the explicit Euclidean expressions been used instead, the problem would have been to solve a non-linear equation in several unknowns, without good guesses for the initial search point. This is very important for the accuracy of the solution.

4. 3D Reconstruction and Rigid Motion Recovery

4.1. General Framework

To develop the framework for this section, several relationships involving lines, planes and points in $P^3$ were derived. The basic idea is the following. Let us consider an arbitrary black-box camera such that one is able to calculate the incident direction of the light rays that are projected into a given image point (see figure 2).

Figure 2. Black box camera with incident direction.

The incident direction is represented by the Plücker matrix $L$. There is no assumption of central projection. If the camera moves to another position (this motion is represented by the rigid transformation $T$) and an image point is tracked across both frames, it is possible to calculate the incident ray in both positions: $\ell_1$ and $\ell_2$. The Plücker matrices of the lines $\ell_1$ and $\ell_2$, $L_1$ and $L_2$, are each expressed in the corresponding reference frame, before and after the motion. Since both are related by the rigid transformation $T$, the Plücker matrix of $\ell_2$ in the initial frame is given by $T^T L_2 T$. The reconstructed 3D point is the intersection of the lines $L_1$ and $T^T L_2 T$. However, the transformation $T$ is not known. Given the lines of the incident directions $L_1$ and $T^T L_2 T$ in the initial reference frame, one is able to intersect them and recover the point $P$. The problem then becomes how to estimate a transformation $T$ such that both lines intersect at the point $P$. Our proposal is to split the problem in two parts: first the estimation of a transformation $T$ that meets a particular condition, and then the intersection of the lines in 3-space to estimate the point $P$.

4.2. Geometric relationships

In this subsection some geometric relationships in the projective 3-space $P^3$ are derived (see [11, 20] for background support).
Proposition 2. Given the lines $\ell_1$ and $\ell_2$ in $P^3$, represented by their corresponding Plücker matrices $L_1$ and $L_2$, they intersect each other if and only if $L_1 L_2^* L_1 = 0$, where $L_2^*$ is the dual representation of the $L_2$ Plücker matrix.

Proof: To prove that the condition is necessary we assume that the lines $\ell_1$ and $\ell_2$ intersect. Let us consider an arbitrary plane $\Pi_a$. If the plane contains $\ell_1$ one has $L_1 \Pi_a = 0$, and then nothing can be concluded about the matrix $L_1 L_2^* L_1$. If the plane contains the intersection point but not the line $\ell_1$, nothing can be concluded either, since one has $L_1 \Pi_a = X_{1a}$, which is the point of intersection of the
plane $\Pi_a$ and $\ell_1$; and since it is also the intersection of $\ell_1$ and $\ell_2$, and therefore on $\ell_2$, one has $L_2^* X_{1a} = L_2^* L_1 \Pi_a = 0$. However, if the plane $\Pi_a$ contains neither of the two lines nor their intersection point, then $X_{1a} = L_1 \Pi_a$ is the intersection point of the plane $\Pi_a$ with the line $\ell_1$. This point is not on the line $\ell_2$, and then $\Pi_b = L_2^* X_{1a} = L_2^* L_1 \Pi_a$ is the plane defined by the line $\ell_2$ and the point $X_{1a}$. By hypothesis the two lines intersect, and then the line $\ell_1$ is on the plane $\Pi_b$ (notice that if $\ell_1$ were not on the plane $\Pi_b$, its only common point with this plane would be $X_{1a}$, which is not on the line $\ell_2$ by definition). We thus have $L_1 \Pi_b = L_1 L_2^* L_1 \Pi_a = 0$. Since the plane $\Pi_a$ is arbitrary, one concludes that $L_1 L_2^* L_1 = 0$.

The converse should now be proved. Assume that $L_1 L_2^* L_1 = 0$, and let us prove that the lines intersect. Consider again an arbitrary plane $\Pi_c$ containing neither of the two lines nor their intersection point. Multiplying both sides of the condition on the right by the plane $\Pi_c$, we obtain $L_1 L_2^* L_1 \Pi_c = 0$, where $L_1 \Pi_c$ represents the intersection point of the line $\ell_1$ and the plane $\Pi_c$, say $X_c$. Since this point is not on $\ell_2$, $L_2^* X_c = L_2^* L_1 \Pi_c$ represents the plane defined by $\ell_2$ and $X_c$, say $\Pi_d$. $L_1 \Pi_d = 0$, and then the line $\ell_1$ is on this plane. Since both $\ell_1$ and $\ell_2$ are on the same plane $\Pi_d$, they intersect each other. This proves the sufficiency of the condition. □

Corollary 1. Consider two intersecting lines $\ell_1$ and $\ell_2$. If the line $\ell_1$ is transformed by a general point transformation $H$, then in the general case the transformed line $\ell'_1$ and $\ell_2$ no longer intersect.

Proof: By proposition 2 one has $L_1 L_2^* L_1 = 0$. The line $\ell_1$ is transformed and the new line is given by $L'_1 = H L_1 H^T$. Computing the intersection condition of proposition 2 yields $C = L'_1 L_2^* L'_1 = H L_1 H^T L_2^* H L_1 H^T$. In the general case $C \neq 0$ and so the two lines no longer intersect. □
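Proposition 2 can be checked numerically. The sketch below (Python/NumPy; the example lines are assumptions) builds Plücker matrices from point pairs, obtains the dual by the standard coordinate rewrite, and evaluates $L_1 L_2^* L_1$ for an intersecting pair and a skew pair:

```python
import numpy as np

def plucker(A, B):
    """Plücker matrix of the line joining the homogeneous points A and B."""
    return np.outer(A, B) - np.outer(B, A)

def plucker_dual(L):
    """Dual Plücker matrix, via the rewrite that exchanges
    (l12, l13, l14) with (l34, l42, l23)."""
    D = np.zeros((4, 4))
    D[0, 1], D[0, 2], D[0, 3] = L[2, 3], -L[1, 3], L[1, 2]
    D[1, 2], D[1, 3], D[2, 3] = L[0, 3], -L[0, 2], L[0, 1]
    return D - D.T

def lines_intersect(L1, L2, tol=1e-9):
    """Proposition 2: the lines intersect iff L1 L2* L1 = 0."""
    return np.all(np.abs(L1 @ plucker_dual(L2) @ L1) < tol)

O = np.array([0.0, 0.0, 0.0, 1.0])
L1 = plucker(O, np.array([1.0, 0.0, 0.0, 1.0]))       # the x axis
L2 = plucker(O, np.array([0.0, 1.0, 0.0, 1.0]))       # the y axis, meets L1 at O
L3 = plucker(np.array([0.0, 0.0, 1.0, 1.0]),
             np.array([0.0, 1.0, 1.0, 1.0]))          # skew with respect to L1
print(lines_intersect(L1, L2), lines_intersect(L1, L3))   # True False
```

The condition is scale invariant, so no normalization of the Plücker matrices is needed.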
Proposition 3. The intersection point of two arbitrary intersecting lines $\ell_1$ and $\ell_2$ in $P^3$ is given by $P = L_1 L_2^* A$, where $A$ is an arbitrary point not belonging to the plane defined by $\ell_1$ and $\ell_2$.

Proof: Consider the plane $\Pi_{2A}$ defined by an arbitrary point $A$ and $\ell_2$, so that $\Pi_{2A} = L_2^* A$. Since $A$ does not belong to the plane defined by $\ell_1$ and $\ell_2$, the intersection of $\ell_1$ with $\Pi_{2A}$ is the point $P$, given by $P = L_1 \Pi_{2A} = L_1 L_2^* A$. □

4.3. Rigid Motion Estimation and 3D Reconstruction

To recover the rigid transformation matrix $T$ we propose the use of the geometric relations of the previous subsection. After recovering the incident direction from a single point in the image in both reference frames, the problem becomes the recovery of the transformation $T$ between both coordinate systems. Any non-linear minimization algorithm can be applied to this problem in order to estimate the transformation $T$ subject to the intersection condition (proposition 2). However, there are multiple solutions guaranteeing the intersection of both lines, as long as the transformed line is on the pencil of planes defined by the line in the first frame. The goal is to find out which rigid transformation $T$ should be chosen so that both lines intersect at the correct point $P$. The estimation of such a transformation is not possible if only one or two pairs of lines are used. However, if we use three or more arbitrary points, the only transformation that assures the intersection of all the pairs of lines is the rigid transformation $T$ we are seeking.

Figure 3. The intersection of incident directions of 4 points.

Figure 3 represents a catadioptric vision system in two positions and the corresponding incident directions intersecting at four 3-space points. The second and last step of the overall algorithm is the estimation of the 3-space points that project onto the image. Since all pairs of estimated incident directions intersect, the 3D points to reconstruct are the intersections of the lines.
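The intersection formula of proposition 3 translates directly to code (Python/NumPy; the example lines and the auxiliary point are assumptions):

```python
import numpy as np

def plucker(A, B):
    """Plücker matrix of the line joining the homogeneous points A and B."""
    return np.outer(A, B) - np.outer(B, A)

def plucker_dual(L):
    """Dual Plücker matrix via the standard coordinate rewrite."""
    D = np.zeros((4, 4))
    D[0, 1], D[0, 2], D[0, 3] = L[2, 3], -L[1, 3], L[1, 2]
    D[1, 2], D[1, 3], D[2, 3] = L[0, 3], -L[0, 2], L[0, 1]
    return D - D.T

def intersection_point(L1, L2, A):
    """Proposition 3: P = L1 L2* A, with A off the plane of the two lines."""
    P = L1 @ plucker_dual(L2) @ A
    return P / P[3]                      # normalize the homogeneous point

O = np.array([0.0, 0.0, 0.0, 1.0])
L1 = plucker(O, np.array([1.0, 0.0, 0.0, 1.0]))   # the x axis
L2 = plucker(O, np.array([0.0, 1.0, 0.0, 1.0]))   # the y axis
A = np.array([0.0, 0.0, 1.0, 1.0])                # not on the plane z = 0
print(intersection_point(L1, L2, A))              # [0. 0. 0. 1.]
```

If $A$ were chosen on the common plane of the two lines, the product would degenerate, which is why the proposition excludes that case.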
The expression for the intersection is given by proposition 3. We have presented a method to estimate the rigid motion of a camera after a general rigid transformation from an initial position. This method also estimates the 3D coordinates of the points. The main contribution of this method is that it provides a way to recover both the pose and the 3D points when no projection model exists (if a projection model existed, bundle adjustment or any other known method could be used). In the next two sections we parameterize the method for two particular configurations: the pinhole camera and the catadioptric vision system with quadric-shaped mirrors.
4.4. Pinhole Camera

The application of this framework to a pinhole camera can serve as a reference to compare the implementation with other camera models. The camera projection model used is $p_{image} = K \, Proj \, P_{3D}$, where $K$ is the intrinsic parameter matrix. Given $K$ and any image point tracked along at least two frames, one has to recover the incident direction of the light rays for all frames. The line that represents the incident direction is recovered in the local reference frame, not taking into account the motion between frames. To invert the projection model we fix the value of one of the coordinates, say $Z = 1$, and one can thus obtain a point on the incident ray. Expressing the other two coordinates as functions of the third yields the following point $D$ on the incident light ray:

$$D = \begin{bmatrix} X & Y & 1 & 1 \end{bmatrix}^T, \qquad \begin{bmatrix} X & Y & 1 \end{bmatrix}^T \sim K^{-1} \begin{bmatrix} u & v & 1 \end{bmatrix}^T \quad (8)$$

The coordinates of two points on the incident direction are then known: the origin of the reference frame $O = \begin{bmatrix} 0 & 0 & 0 & 1 \end{bmatrix}^T$ and the point $D$. The corresponding Plücker matrix representing the incident direction is given by:

$$L_i = O D^T - D O^T = \begin{bmatrix} 0 & 0 & 0 & -d_1 \\ 0 & 0 & 0 & -d_2 \\ 0 & 0 & 0 & -d_3 \\ d_1 & d_2 & d_3 & 0 \end{bmatrix} \quad (9)$$

where $d_i$ is the $i$-th coordinate of the point $D$. This recovers the incident direction given the intrinsic parameters and one image point tracked along the frames.

4.5. Catadioptric Camera

The advantages of a wide field of view justify the use of catadioptric vision systems in many applications. Although some particular configurations of the camera in relation to the mirror produce a central projection system, in the general case the central projection constraint is relaxed to enable other important characteristics (e.g. zooming). The relaxation of the central projection constraint has many implications for the model and for the algorithms used. There is no general projection model, since each point has its own viewpoint (see [24]). This makes the method presented in this paper an important tool to reconstruct the scene.
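Returning to the pinhole case of section 4.4, equations 8 and 9 can be sketched as follows (Python/NumPy; the intrinsic values are assumptions, not the calibration used in the paper):

```python
import numpy as np

# Assumed intrinsics: focal lengths 800 px, principal point (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def incident_line(u, v, K):
    """Plücker matrix of the incident ray through the image point (u, v),
    built from the optical center O and a point D on the ray (equation 9)."""
    d = np.linalg.solve(K, np.array([u, v, 1.0]))   # equation 8, with Z = 1
    O = np.array([0.0, 0.0, 0.0, 1.0])
    D = np.append(d, 1.0)                           # homogeneous point on the ray
    return np.outer(O, D) - np.outer(D, O)

L = incident_line(400.0, 240.0, K)
print(L[3, :3])      # the (d1, d2, d3) row of equation 9
```

For the point (400, 240) the recovered row is (0.1, 0, 1): the ray leaves the center 80 pixels right of the principal point, scaled by the focal length.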
Figure 4. Geometric construction used to find the incident direction.

The catadioptric vision system used is one of the most common: a curved mirror given by a quadric and a pinhole camera. Consider a quadric mirror represented by the $4 \times 4$ matrix $Q$, so that $X^T Q X = 0$ for all points $X$ on its surface. To calculate the incoming light ray that reflects into the camera, one should first calculate the reflected ray direction and intersect it with the mirror surface. Using the same reasoning applied to the pinhole camera, the direction $L_i$ must be intersected with the quadric.

Proposition 4. The intersection points $R$ of a full-rank quadric $Q$ with the line $\ell$ given by the Plücker matrix $L$ are given by the eigenvectors of the matrix $L Q$.

Proof: The tangent plane $\Pi_N$ to the quadric surface at a point $R$ is $\Pi_N = Q R$. Since the intersection of the line $\ell$ with the quadric is the same as the intersection of the line $\ell$ with the plane $\Pi_N$, it follows that $R = L \Pi_N = L Q R$, which is to say that $R$ is an eigenvector of $L Q$. Since the matrix $L$ has rank 2 and the matrix $Q$ has full rank, the matrix $L Q$ has two non-zero eigenvalues, which correspond to the two points of intersection. □

Proposition 5. The normal line to the quadric given by its $4 \times 4$ matrix $Q$ at the point $R$ is given by $L_N = R R^T Q^T Q^*_\infty - Q^*_\infty Q R R^T$, where $Q^*_\infty$ is the absolute dual quadric.

Proof: The tangent plane to the quadric through $R$ is given by $\Pi_N = Q R$, and the direction of this plane is given by $\mathrm{dir}(\Pi_N) = Q^*_\infty \Pi_N = Q^*_\infty Q R$. Since the direction of a plane also represents the intersection of its normal line with the plane at infinity [22], the normal line is the join of the points $R$ and $\mathrm{dir}(\Pi_N)$, given by $L_N = R \, \mathrm{dir}(\Pi_N)^T - \mathrm{dir}(\Pi_N) R^T$, that is, $L_N = R R^T Q^T Q^*_\infty - Q^*_\infty Q R R^T$. □

To find the incident direction of the incoming ray, the reflection law is used: the angle between the incident ray and the normal equals the angle between the reflected ray and the normal.
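Proposition 4 suggests a simple numerical recipe. The sketch below (Python/NumPy; a unit-sphere mirror is assumed for illustration) intersects the x axis with the quadric by taking the eigenvectors of $LQ$ associated with non-zero eigenvalues:

```python
import numpy as np

def plucker(A, B):
    """Plücker matrix of the line joining the homogeneous points A and B."""
    return np.outer(A, B) - np.outer(B, A)

def line_quadric_points(L, Q, tol=1e-9):
    """Proposition 4: the intersection points of the line L with the quadric Q
    are the eigenvectors of LQ associated with non-zero eigenvalues."""
    w, V = np.linalg.eig(L @ Q)
    pts = [np.real(V[:, i]) for i in range(4) if abs(w[i]) > tol]
    return [p / p[3] for p in pts]       # normalize the homogeneous points

Q = np.diag([1.0, 1.0, 1.0, -1.0])       # unit sphere as the quadric mirror
L = plucker(np.array([0.0, 0.0, 0.0, 1.0]),
            np.array([1.0, 0.0, 0.0, 1.0]))   # the x axis
pts = line_quadric_points(L, Q)
print(sorted(p[0] for p in pts))         # x coordinates of the two hits: -1 and 1
```

As the proposition states, exactly two eigenvalues are non-zero here, and both recovered points satisfy $R^T Q R = 0$.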
The geometric construction represented in figure 4 is used to calculate a point on the incoming ray. As shown in figure 4, $G$ is the point on the incoming ray such that the
angle between the line $\ell_i$ ($L_i$, the join of the points $R$ and $G$) and the normal $\ell_N$ ($L_N$, the join of the points $R$ and $E$) is the same as the angle between the reflected ray $\ell_r$ ($L_r$, the join of the points $R$ and $C$) and the normal line $\ell_N$. $E$ is any point of the normal line, yielding $G = \mu E - C$. The value of the parameter $\mu$ is such that the angles of incidence and reflection are equal; it can be obtained by solving a second-degree equation. Having calculated the value of $\mu$, the incident direction is computed from the points $R$ and $G$. The method presented in section 4.1 can then be used.

5. Experiments

Figure 5. Image taken with an hyperbolic mirror.

Images of the real world with structured indoor scenes are used to test the rigid motion estimation and 3D reconstruction models presented. The images were taken by a catadioptric system composed of a pinhole camera with zoom and a hyperbolic mirror. Before applying the proposed algorithms, the system was calibrated in two steps: pinhole camera calibration and mirror calibration. The pinhole camera parameters were calibrated using Heikkilä and Silvén's algorithm [12]. The mirror parameters and the distance from the pinhole to the mirror were calibrated using a chessboard panel, iterating the parameters until the corners form the real chessboard. Figure 5 shows one shot taken by the system described. A similar system was modelled and some synthetic data were used to test rigid motion and 3D reconstruction. The projections in the images were determined by using Snell's law and the model presented in section 3. To test the method presented in section 4 using synthetic data, we first recovered the rigid motion transformation matrix and then the 3D scene points. The non-linear estimation algorithm used was simulated annealing, since it uses stochastic jumps to iterate towards the actual configuration. Although the minimization algorithm was not compared to others, we obtained good results with simulated annealing.
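For readers unfamiliar with the estimation method, a toy simulated-annealing loop can be sketched as follows (Python/NumPy; the cost function, schedule, and parameters are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

def anneal(cost, x0, rng, iters=2000, step=0.5, T0=1.0):
    """Toy simulated annealing: Gaussian perturbations accepted with the
    Metropolis rule under a geometrically cooling temperature."""
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    T = T0
    for _ in range(iters):
        cand = x + rng.normal(0.0, step, size=np.shape(x))
        fc = cost(cand)
        # Always accept improvements; accept worsenings with probability
        # exp(-delta/T), which shrinks as the temperature cools.
        if fc < fx or rng.random() < np.exp(-(fc - fx) / T):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        T *= 0.995
    return best

rng = np.random.default_rng(0)
# Minimize a smooth 2-parameter cost whose minimum is at (3, -1).
cost = lambda p: (p[0] - 3.0) ** 2 + (p[1] + 1.0) ** 2
p_hat = anneal(cost, np.zeros(2), rng)
```

In the paper's setting the cost would instead be the residual of the intersection condition of proposition 2 summed over all tracked point pairs, with the pose parameters of $T$ as the search variables.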
Two different motions are used: translation only along the coordinate axes, and rotation plus translation along the optical axis. Table 1 presents some of the results obtained, compared with the ground-truth values. The results of the experiments with real images are shown in table 2.

Table 2. Results of the experiments with real images. Rigid motion estimation ($\theta_i$ are rotation angles in radians and $t_i$ are translations in mm; columns $\theta_x$, $\theta_y$, $\theta_z$, $t_x$, $t_y$, $t_z$). The upper line has ground-truth values and the bottom line the estimated ones.

The motion transformation is recovered for each pair using the corners of a chessboard panel. Although 3D reconstruction is performed, since the correct 3D coordinates of the points are not known, the estimated values are not shown. Five different motion pairs were used, including translation only along the reference axes, rotation only, and both translation and rotation. We can observe from the experiments, with both synthetic and real images, that the method nearly converges to the actual transformation and that the errors in the 3D coordinates of the points are low, as expected.

6. Conclusions and directions

This paper presents a projection model for non-central catadioptric image formation. Since no central projection assumption is made, the problem of mapping 3D world points to the image plane is difficult to solve, and we show that it is possible to model this problem as a non-linear equation in only one variable. The paper also presents a model to estimate the motion transformation of a moving camera from frame to frame and subsequently perform 3D reconstruction. The results with real images show that with the theory presented it is possible to compute accurately the motion of the camera from frame to frame and then to estimate the 3D positions of corresponding points. The experiments, however, use a minimization method that does not run in real time due to the stochastic nature of the optimization.
Enhancing the performance of the method and extending the theory to uncalibrated cameras and mirrors are the next problems to be addressed.
Table 1. Experiments with synthetic data. Rigid motion estimation ($\theta_i$ are rotation angles in radians and $t_i$ are translations in mm) and 3D reconstruction of the space coordinates X, Y and Z (columns $\theta_x$, $\theta_y$, $\theta_z$, $t_x$, $t_y$, $t_z$, X, Y, Z; rows: ground truth, estimated, mean % error, std. error).

References

[1] João Barreto and Helder Araújo. Issues on the geometry of central catadioptric imaging. In CVPR'01.
[2] Max Born and Emil Wolf. Principles of Optics. Pergamon Press.
[3] Laurent Dupont, Daniel Lazard, Sylvain Lazard, and Sylvain Petitjean. Near-optimal parameterization of the intersection of quadrics. In SoCG'03, San Diego, USA, June 2003.
[4] Laurent Dupont, Sylvain Lazard, Sylvain Petitjean, and Daniel Lazard. Towards the Robust Intersection of Implicit Quadrics, volume 704, chapter 5. Kluwer Academic Publishers.
[5] Christopher Geyer and Kostas Daniilidis. Properties of the catadioptric fundamental matrix. In ECCV'02.
[6] Christopher Geyer and Kostas Daniilidis. Catadioptric projective geometry. IJCV, 45(3).
[7] Joshua Gluckman and Shree Nayar. Rectified catadioptric stereo sensors. In CVPR.
[8] Nuno Gonçalves and Helder Araújo. Mirror shape recovery from image curves and intrinsic parameters: Rotationally symmetric and conic mirrors. In OMNIVIS'03, Madison, USA, June 2003.
[9] Michael Grossberg and Shree Nayar. A general imaging model and a method for finding its parameters. In ICCV, Vancouver, Canada.
[10] R. Haralick, C. Lee, K. Ottenberg, and M. Nölle. Analysis and solutions of the three point perspective pose estimation problem. In CVPR'91.
[11] Richard Hartley and Andrew Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press.
[12] J. Heikkilä and O. Silvén. A four-step camera calibration procedure with implicit image correction. In CVPR.
[13] Joshua Levin. A parametric algorithm for drawing pictures of solid objects composed of quadric surfaces. Communications of the ACM, 19(10).
[14] Joshua Levin.
Mathematical models for determining the intersection of quadric surfaces. Computer Graphics and Image Processing, 11(1).
[15] Jan Neumann, Cornelia Fermüller, and Yiannis Aloimonos. Eye design in the plenoptic space of light rays. In ICCV.
[16] Michael Oren and Shree Nayar. A theory of specular surface geometry. IJCV.
[17] Tomas Pajdla. Stereo with oblique cameras. IJCV.
[18] Robert Pless. Using many cameras as one. In CVPR.
[19] Y. Pritch, M. Ben-Ezra, and Shmuel Peleg. Optics for OmniStereo Imaging. Kluwer Academic.
[20] J. Semple and G. Kneebone. Algebraic Projective Geometry. Oxford University Press, London.
[21] Mandyam Srinivasan. A new class of mirrors for wide-angle imaging. In OMNIVIS'03.
[22] Jorge Stolfi. Oriented Projective Geometry. Academic Press.
[23] Tomas Svoboda and Tomas Pajdla. Epipolar geometry for central catadioptric cameras. IJCV.
[24] Rahul Swaminathan, Michael Grossberg, and Shree Nayar. Caustics of catadioptric cameras. In ICCV.
[25] Bill Triggs, Philip McLauchlan, Richard Hartley, and Andrew Fitzgibbon. Bundle adjustment - a modern synthesis. In Vision Algorithms, 1999.