Using RANSAC for Omnidirectional Camera Model Fitting
CENTER FOR MACHINE PERCEPTION, CZECH TECHNICAL UNIVERSITY

Using RANSAC for Omnidirectional Camera Model Fitting

Branislav Mičušík and Tomáš Pajdla

REPRINT

Branislav Mičušík and Tomáš Pajdla. Using RANSAC for Omnidirectional Camera Model Fitting. Computer Vision Winter Workshop, Valtice, Czech Republic, February 2003. Available at ftp://cmp.felk.cvut.cz/pub/cmp/articles/micusik/micusik-cvww3.pdf

Center for Machine Perception, Department of Cybernetics, Faculty of Electrical Engineering, Czech Technical University, Technická 2, 166 27 Prague 6, Czech Republic
Using RANSAC for Omnidirectional Camera Model Fitting

Branislav Mičušík and Tomáš Pajdla
Center for Machine Perception, Department of Cybernetics, Faculty of Electrical Engineering, Czech Technical University in Prague, 166 27 Prague 6, Technická 2, Czech Republic
micusb@cmp.felk.cvut.cz, pajdla@cmp.felk.cvut.cz

Abstract. We introduce a robust technique based on RANSAC for the simultaneous estimation of a central omnidirectional camera model (view angle above 180°) and its epipolar geometry. It is shown that points near the center of the view field circle satisfy the camera model for almost any degree of image non-linearity. Therefore, they are often selected as inliers in RANSAC-based estimation, while the most informative points near the border of the view field circle are rejected, and an incorrect camera model is estimated. We show that a remedy to this problem is achieved by not using points close to the center of the view field circle. The camera calibration is done from image correspondences only, without any calibration objects or any assumption about the scene. We demonstrate our method in real experiments with the high quality, but cheap and widely available, Nikon FC-E8 fish-eye lens. In practical situations, the proposed method allows the camera model to be estimated from 9 correspondences and can thus be used in an efficient RANSAC-based estimation technique.

1 Introduction

Recently, high quality, but cheap and widely available, lenses, e.g. the Nikon FC-E8 or Sigma 8mm-f4-EX fish-eye converters, and curved mirrors, e.g. [15], providing a view angle above 180° have appeared. Cameras with such a large view angle, called omnidirectional cameras, are especially appropriate in applications (e.g. surveillance, tracking, structure from motion, navigation, etc.) where more stable ego-motion estimation is required. Using such cameras in a stereo pair calls for correspondence search, camera model calibration, epipolar geometry estimation, and 3D reconstruction, analogously to standard directional cameras [7].

In this work we concentrate on a robust technique based on RANSAC for the simultaneous estimation of the camera model and the epipolar geometry of omnidirectional cameras preserving central projection. We assume that point correspondences, information about the view field of the lens, and its corresponding view angle are available.

Previous work on the estimation of camera models with lens nonlinearity includes methods that use some knowledge about the observed scene, e.g. calibration patterns [3, 13] and plumb line methods [4, 16, 19], methods based on the fact that lens nonlinearity introduces a specific higher-order correlation in the frequency domain [5], and methods that calibrate cameras from point correspondences only, e.g. [6, 14, 18]. Fitzgibbon [6] deals with the problem of lens nonlinearity estimation in the context of camera self-calibration and structure from motion.

Figure 1: Inliers detection. Images were acquired by the Nikon FC-E8 fish-eye converter; correspondences were obtained by [10]. (a) Wrong model: all points were used in model estimation using RANSAC. The model, however, suits only the points near the center of the view field circle, since the other points are marked as outliers. (b) Correct model: only points near the boundary of the view field circle were used for computing the model. The model suits points near the center as well as points near the boundary.
His method, however, cannot be directly used for omnidirectional cameras with a view angle above 180°, because it represents images by points in which the rays of a camera intersect an image plane. We extended [11] the method [6] to omnidirectional cameras, derived an appropriate omnidirectional camera model incorporating lens nonlinearity, and suggested an algorithm for estimating the model from epipolar geometry. In this work we show, see Figure 1, how the points should be sampled in RANSAC to obtain a correct, unbiased estimate of the camera model and epipolar geometry. Our method is useful for lenses as well as for mirrors [15] providing a view angle above 180° and possessing central projection.

The structure of the paper is the following. The omnidirectional camera model and its simultaneous estimation with epipolar geometry are reviewed in Section 2. The properties of the camera model and the robust bucketing technique based on RANSAC are introduced in Section 3. An algorithm for the camera model estimation is summarized in Section 4. Experiments and a summary are given in Sections 5 and 6.
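RANSAC is used throughout the paper as the robust estimator, so it is worth fixing its structure once. The following Python sketch is ours, not the paper's (the paper's computations are done in MATLAB); it only shows the generic hypothesize-and-verify loop that Sections 3 and 4 specialize with a minimal sample size, a model solver, and the angular-error inlier test.

```python
import random

def ransac(data, fit_model, error_fn, sample_size, tol, n_iters):
    """Generic RANSAC loop: hypothesize a model from a random minimal
    sample, then count how many data items it explains within `tol`.

    fit_model(sample) -> model, or None for a degenerate sample
    error_fn(model, datum) -> non-negative residual (here: angular error)
    """
    best_model, best_inliers = None, []
    for _ in range(n_iters):
        sample = random.sample(data, sample_size)
        model = fit_model(sample)
        if model is None:
            continue
        inliers = [d for d in data if error_fn(model, d) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    return best_model, best_inliers
```

Note that the loop keeps the model with the most inliers; Section 3 shows that for omnidirectional models this very criterion favors models that fit only the points near the center of the view field circle, which the sampling strategy of Section 3.2 corrects by changing how `sample` is drawn.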
Figure 2: The Nikon FC-E8 fish-eye converter. (a), (b) The lens possesses central projection, thus all rays emanate from its optical center, shown as a dot. (c) Notice that the image taken by the lens onto the planar sensor π can be represented by intersecting the camera rays with a spherical retina ρ.

Figure 3: The diagram of the construction of the mapping f from the sensor plane π to the spherical retina ρ. The point (u, v, 1) in the image plane π is transformed by f(·) to (u, v, w), then normalized to unit length, and thus projected onto the sphere ρ.

2 Omnidirectional camera model

For cameras with a view angle above 180°, see Figure 2, the images of all scene points X cannot be represented by intersections of camera rays with a single image plane. Every line passing through the camera's optical center intersects the image plane in one point; however, two scene points can lie on one such line and be seen in the image at the same time, see rays p₁ and p₂ in Figure 2c. For that reason, we represent the rays of the image as a set of unit vectors in R³ such that exactly one vector corresponds to one image of a scene point.

Let us assume that u = (u, v)ᵀ are the coordinates of a point in an image, with the origin of the coordinate system in the center (u₀, v₀) of the view field circle. Remember that this is not always the center of the image. Let us further assume that the nonlinear function g, which assigns 3D vectors to 2D image coordinates, can be expressed as

g(u) = g(u, v) = (u, v, f(u, v))ᵀ,   (1)

where f(u) is a rotationally symmetric function w.r.t. the point (u₀, v₀). Function f can have various forms determined by the lens or mirror construction [3, 9]. For the Nikon FC-E8 fish-eye lens we use the division model [11]

θ = a r / (1 + b r²),   r = (a − √(a² − 4 b θ²)) / (2 b θ),   (2)

where θ is the angle between a ray and the optical axis, r = √(u² + v²) is the radius of a point in the image plane w.r.t. (u₀, v₀), and a, b are the parameters of the model. Using f(u) = r / tan θ, see Figure 3, the 3D vector p with unit length can be expressed up to scale as

p ≃ (uᵀ, w)ᵀ = (uᵀ, f(u, a, b))ᵀ = (uᵀ, r / tan θ)ᵀ = (uᵀ, r / tan(a r / (1 + b r²)))ᵀ.   (3)

Equation (3) captures the relationship between the image point u and the 3D vector p emanating from the optical center towards a scene point.

2.1 Model estimation from epipolar geometry

The function f(u, a, b) in (3) is a two-parametric nonlinear function, which can be expanded into a Taylor series with respect to a and b at a₀ and b₀, see [11] for more details. Using (3), the vector p can then be written as

p ≈ (uᵀ, f(·) − a₀ f_a(·) − b₀ f_b(·))ᵀ + a (0, 0, f_a(·))ᵀ + b (0, 0, f_b(·))ᵀ = x + a s + b t,

where x, s, and t are known vectors computed from the image coordinates, a and b are the unknown parameters, and f_a, f_b are the partial derivatives of f(·) w.r.t. a and b. The epipolar constraint for vectors p′ in the left and p in the right image that correspond to the same scene point reads

p′ᵀ F p = (x′ + a s′ + b t′)ᵀ F (x + a s + b t) = 0.

After arranging the unknown parameters into a vector h we obtain

(D₁ + a D₂ + a² D₃) h = 0,   (4)

where the matrices D_i are known [11] and the vector h is

h = (f₁, f₂, f₃, f₄, f₅, f₆, f₇, f₈, f₉, b f₃, b f₆, b f₇, b f₈, b f₉, b² f₉)ᵀ,

with f_i being the elements of the fundamental matrix. Equation (4) represents a Quadratic Eigenvalue Problem (QEP) [2, 17], which can be solved in MATLAB using the function polyeig.
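For readers without MATLAB, the same QEP can be solved by the standard companion linearization into a generalized eigenvalue problem. The sketch below is our illustration: squaring the rectangular D_i with D₁ᵀ is one possible choice consistent with the algorithm in Section 4, not a uniquely prescribed step.

```python
import numpy as np
from scipy.linalg import eig

def solve_qep(A0, A1, A2):
    """Solve the quadratic eigenvalue problem (A0 + lam*A1 + lam^2*A2) h = 0
    by companion linearization -- the problem MATLAB's polyeig(A0, A1, A2)
    solves. Returns eigenvalues lam and eigenvectors h as columns."""
    n = A0.shape[0]
    # First companion form: with z = [h; lam*h],
    # [0 I; -A0 -A1] z = lam [I 0; 0 A2] z.
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-A0, -A1]])
    B = np.block([[np.eye(n), np.zeros((n, n))],
                  [np.zeros((n, n)), A2]])
    lam, Z = eig(A, B)
    return lam, Z[:n, :]

# For (4) the D_i are N x 15 and D3 is singular, so, as in Section 4,
# one can solve the inverted problem in lam = 1/a after squaring with D1^T:
# lam, H = solve_qep(D1.T @ D3, D1.T @ D2, D1.T @ D1)   # eigenvalues are 1/a
```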
The parameters a, b and the matrix F can thus be computed simultaneously. We recover the parameters of model (3), and thus the angles between the rays and the optical axis, which is equivalent to recovering an essential matrix, and therefore a calibrated camera. We used the angular error, i.e. the angle between a ray and the corresponding epipolar plane [11], to measure the quality of the estimate of the epipolar geometry, instead of the distance of a point from its epipolar line [8].

Knowing that the field of view is circular, that the view angle equals θₘ, and that the radius of the view field circle equals R, parameter a can be expressed from (2) as a = (1 + b R²) θₘ / R. Thus (3) can be linearized to a one-parametric model, and a 9-point RANSAC can be used as a pre-test to detect most of the outliers, as in [6]. To obtain a better estimate, the two-parametric model with the a priori knowledge a₀ = θₘ / R, b₀ = 0 can then be used in a 15-point RANSAC estimation.
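To make the model tangible, here is a small Python sketch of the division model (2) and of the lifting (3) of a centered image point to a unit ray; the function names and the treatment of the optical-axis point are ours.

```python
import numpy as np

def theta_from_r(r, a, b):
    """Division model (2): angle between the ray and the optical axis."""
    return a * r / (1.0 + b * r**2)

def r_from_theta(theta, a, b):
    """Closed-form inverse of (2); assumes b != 0 and theta > 0
    (for b -> 0 the expression tends to r = theta / a)."""
    return (a - np.sqrt(a**2 - 4.0 * b * theta**2)) / (2.0 * b * theta)

def ray_from_point(u, v, a, b):
    """Lift a centered image point (u, v) to a unit 3D ray, eq. (3)."""
    r = np.hypot(u, v)
    if r < 1e-12:
        return np.array([0.0, 0.0, 1.0])   # point on the optical axis
    w = r / np.tan(theta_from_r(r, a, b))
    p = np.array([u, v, w])
    return p / np.linalg.norm(p)
```

The closed-form inverse is the invertibility noted in Section 3.1: points map to rays and back without any iterative method.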
Figure 4: Comparison of various lens models with ground truth data. The proposed model (black dots), the 2nd order polynomial (red circles), and the 3rd order polynomial (blue crosses) are fitted to data measured in an optical laboratory. (a) The angle θ between the 3D vector and the optical axis as a function of the radius of a point in the image plane. (b) Approximation errors Δθ = θ − θ_gt for all models, where θ_gt denotes the ground truth angle.

3 Camera model fitting

In this section we investigate the proposed division model, fit it to ground truth data, compare it with other commonly used models, and observe the prediction error, i.e. how many points are needed, and where they should be located in the image, to fit the model from a minimal subset of points with sufficient accuracy.

3.1 Precision of the division model

We compare our division model (2) with the commonly used polynomial models of the 2nd order (θ = a₁r + a₂r²) and of the 3rd order (θ = a₁r + a₂r² + a₃r³). The constants a_i are the parameters of the models, r is the radius of an image point w.r.t. (u₀, v₀), and θ is the angle between the corresponding 3D vector and the optical axis. As ground truth we used data measured in an optical laboratory; the measurement uncertainty was ±0.5 mm in radius and of similarly small magnitude in angle.

We fit all three models to all ground truth points, see Figure 4a. The angular error Δθ between the angle computed from the fitted model and the ground truth angle is shown in Figure 4b. As can be seen in Figure 4, our proposed two-parametric model reaches a much better fit than the two-parametric 2nd order polynomial model, and a comparable (slightly better) fit than the three-parametric 3rd order polynomial model.

We are interested in the prediction error of the models, i.e. the error on the complete ground truth data for models fitted only from a subset of it, and in how the selection of points affects the final error of the model estimate. We ordered the ground truth points into a sequence by their radius w.r.t. (u₀, v₀). First, we computed all three models from the first three ground truth points in the sequence (a minimal subset to compute the parameters of the models) and then tested the fitted models on all ground truth points, i.e. computed the RMS error. Then we gradually added points from the sequence into the subset from which the models were estimated, computing the RMS error on all ground truth points after each addition, until all points in the sequence were used for model fitting.

Figure 5: Prediction error, i.e. the influence of the position and the number of points used on model fitting. Gaussian noise with σ = 1 pixel was added to the ground truth data and the trials were repeated. Error bars with the mean, 10th, and 90th percentile values are shown. The x-axis represents the number of ground truth points used for model fitting. (a) Points are added to the subset from the center (u₀, v₀) towards the boundary of the view field circle. (b) Points are added from the boundary towards the center. The proposed model (black line, labeled 1), the 2nd order polynomial (red line, labeled 2), and the 3rd order polynomial (blue line, labeled 3) are shown. The graphs for the 2nd and 3rd order polynomials are shifted to the right to show the noise bars.
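The prediction-error experiment is straightforward to reproduce. The following sketch uses synthetic stand-ins for the laboratory data and, for brevity, only the 2nd order polynomial model; the same loop applies to the other models (to the division model via nonlinear least squares). All names are ours.

```python
import numpy as np

def fit_poly2(r, theta):
    """Least-squares fit of the 2nd order model theta = a1*r + a2*r^2."""
    A = np.column_stack([r, r**2])
    coef, *_ = np.linalg.lstsq(A, theta, rcond=None)
    return coef

def rms_poly2(coef, r, theta):
    return np.sqrt(np.mean((coef[0] * r + coef[1] * r**2 - theta) ** 2))

# Synthetic stand-in for the laboratory data, sorted by radius:
r = np.linspace(0.5, 6.0, 30)                  # radii [mm]
theta = 0.4 * r / (1.0 + 0.004 * r**2)         # division-model-like angles
theta += np.random.normal(0.0, 1e-3, r.shape)  # measurement noise

errs_center_first, errs_boundary_first = [], []
for n in range(2, len(r) + 1):
    # fit on the n points closest to the center, test on all points
    errs_center_first.append(rms_poly2(fit_poly2(r[:n], theta[:n]), r, theta))
    # fit on the n points closest to the boundary, test on all points
    errs_boundary_first.append(rms_poly2(fit_poly2(r[-n:], theta[-n:]), r, theta))
```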
Gaussian noise with σ = 1 pixel was added to the ground truth data, and the trials were repeated, in order to see the influence of noise on model fitting. Secondly, we repeated the same procedure, but the ground truth points were added from the end of the sequence (i.e. from the boundary towards the center of the view field circle). Figure 5 shows both experiments.

As can be seen, noise has a smaller effect on model fitting as the number of points from which the model is computed increases. It can be seen from Figure 5a that the RMS error is very high for the minimal set of three points and decreases significantly only when points close to the boundary of the view field circle are included. On the other hand, when points are added from the boundary, see Figure 5b, the RMS error of our model already starts at a low value, and adding more points closer to the center does not change the RMS error dramatically. It is clear that points near the boundary of the view field circle are more important than points near the center. Thus, in order to obtain a good lens model, it is important to use points near the boundary preferentially.

Equations (2) show that our model is easily invertible. This allows us to map image points to their corresponding 3D vectors, and 3D vectors to their corresponding image points, without using any iterative methods.

3.2 Using bucketing in RANSAC

There are outliers and noise in the correspondences. We used RANSAC [7] for robust model estimation and outlier detection, and we propose a strategy for point sampling, similar to bucketing [20], in order to obtain a good estimate in a reasonable time. As described above, the angle between a ray and its corresponding epipolar plane is used as the criterion of estimation quality; call it the angular error. Ideally it should be zero, but we admit some tolerance in real situations.
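The angular error has a simple closed form: under an essential matrix F, the epipolar plane of a ray p has normal Fp, and the error of the corresponding ray p′ is the angle between p′ and that plane. A possible Python rendering (our naming):

```python
import numpy as np

def angular_error(F, p, p_prime):
    """Angle between the ray p_prime and the epipolar plane of the ray p.

    The epipolar plane of p in the second camera has normal n = F @ p;
    the angle between a vector and a plane is the arcsine of their
    normalized inner product."""
    n = F @ p
    c = abs(n @ p_prime) / (np.linalg.norm(n) * np.linalg.norm(p_prime))
    return np.arcsin(np.clip(c, 0.0, 1.0))
```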
The tolerance in the angular error propagates into a tolerance in the camera model parameters, see Figure 6.

Figure 6: Model fitting with a tolerance Δθ. (a) The graph θ = f(r) for the ground truth data (black thick curve) and two models satisfying the tolerance (red and blue curves); the parameters a and b can vary for models satisfying the tolerance. (b) The area between the dashed curves is determined by the error Δθ; all models satisfying the tolerance must lie in this area. (c) The angular error of both models with respect to the ground truth.

Figure 7: Image zones used for correct model estimation based on RANSAC. Points near the center (u₀, v₀), i.e. points with radius smaller than 0.4 r_max, are discarded. The rest of the image is divided into three zones with equal areas, from which points are randomly sampled by RANSAC.

The region in which the models satisfying a certain tolerance lie narrows with increasing radius of points in the image, see Figure 6b. Since f(0) = 1/a [11], points near the center (u₀, v₀) affect only the parameter a, and there is a large tolerance in a because the tolerance region near the center (u₀, v₀) is large. Since RANSAC looks for the model fitting the highest number of points within a certain tolerance, it may fit only the points near the center (u₀, v₀) in order to obtain the highest number of inliers, see Figure 1a. On the other hand, there may exist a model with fewer inliers that suits the points near the center as well as the points near the boundary, see Figure 1b.

As shown before, points near the center (u₀, v₀) make no special contribution to the final model fitting, and the most informative points lie near the boundary of the view field circle. Therefore, to obtain the correct model, it is necessary to reject points near the center (u₀, v₀) a priori. The rest of the image, as Figure 7 shows, is split into three zones with equal areas, from which the same number of points is randomly chosen by RANSAC; a sketch of such zone-based sampling is given below. This helps to avoid degenerate configurations and strongly biased estimates, and it decreases the number of RANSAC iterations.

As mentioned before, our model can be reduced to a one-parametric model using a = (1 + b R²) θₘ / R, where R is the radius corresponding to the maximum view angle θₘ. R can be obtained by fitting a circle to the view field boundary in the image, and θₘ from the information provided by the manufacturer.

Figure 8: Model fitting with a maximum defined error Δθ for the one-parametric model. See Figure 6 for the explanation. Notice that both models end in the same point.

It can be seen from Figure 8 that the a priori known values R and θₘ fix all models with various b to the point [R, θₘ]ᵀ. The resulting model has only one degree of freedom, and thus a smaller possibility to fit outliers. Using the approximate knowledge of a reduces the minimal set to be sampled by RANSAC from 15 to 9 correspondences. It is natural to use a 9-point RANSAC as a pre-test that excludes the most disturbing outliers before the full and more accurate 15-point RANSAC is applied.
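The zone-based sampling of Figure 7 might be sketched as follows; the 0.4·r_max threshold and the three equal-area annuli come from the paper, while the function shape and names are ours.

```python
import numpy as np

def zone_sample(points, center, r_max, per_zone, rng=None):
    """Draw RANSAC samples from three equal-area annular zones (Figure 7),
    discarding points with radius < 0.4 * r_max. Assumes every zone
    contains at least `per_zone` correspondences.

    points: (N, 2) image coordinates, center: (u0, v0)."""
    rng = np.random.default_rng() if rng is None else rng
    r = np.linalg.norm(points - np.asarray(center), axis=1)
    r0 = 0.4 * r_max
    # Boundaries splitting the annulus [r0, r_max] into 3 rings of equal area:
    bounds = np.sqrt(r0**2 + np.arange(4) / 3.0 * (r_max**2 - r0**2))
    chosen = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        idx = np.flatnonzero((r >= lo) & (r <= hi))
        chosen.extend(rng.choice(idx, size=per_zone, replace=False))
    return np.asarray(chosen)   # indices of the sampled correspondences
```

For the 9-point pre-test this would be called with per_zone = 3, and with per_zone = 5 for the 15-point estimation — our reading of the equal-sampling rule rather than an explicit prescription of the paper.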
4 Algorithm

The algorithm for computing the 3D rays and an essential matrix:

1. Find the ellipse corresponding to the field of view of the lens and transform the image so that the ellipse becomes a circle. Find correspondences {u ↔ u′} between the two images. Use only correspondences with radius √(u² + v²) > 0.4 R, where R is the radius of the view field circle.

2. Scale the image points, u := u/s, by a suitable factor s to obtain better numerical stability. Choose a₀ = θₘ / R and b₀ = 0.

3. Create the matrices D₁, D₂, D₃ ∈ R^(N×15), where N is the number of correspondences. Solve equation (4) as an inverted QEP (in 1/a), due to the singularity of D₃ [11]. In MATLAB: [H, a] = polyeig(D₁ᵀD₃, D₁ᵀD₂, D₁ᵀD₁); H is a 15×30 matrix with columns h, and a is a 30-vector with elements 1/a. Six possible solutions of b appear from the last six elements of h.

4. Keep only the real and finite solutions a (the other solutions seem never to be correct); 4 solutions remain. For every a there are 6 solutions of b. Create the 3D rays using a and b, and compute F using a standard method [7]. The set of possible solutions {a_i, b_{i,1...6}, F_{i,1...6}} arises.

5. Compute the angular error for all triples {a, b, F} as the sum of the errors over all correspondences. The triple with the minimal error is the solution for a, b, and the essential matrix F.
5 Real data

In this section the method is applied to real data. Correspondences were obtained by the commercial program boujou [1]. The parameters of the camera models and the camera trajectories (up to the magnitudes of the translation vectors) were estimated. The relative camera rotations and the directions of translation used for the trajectory estimation were computed from the essential matrices [7]. Obtaining the magnitudes would require reconstructing the observed scene, which was not the task of this paper; instead, we assumed unit length of the translation vectors.

Figure 9: The Nikon FC-E8 fish-eye converter mounted on a PULNIX TM digital camera is rotated along a circle. Correspondences between two consecutive images: circles mark points in the first image, lines join them to the matches in the next one. The images are superimposed in the red and green channels.

The first experiment shows a rotating omnidirectional camera, see Figure 9. The camera was mounted on a turntable such that the final trajectory of its optical center was circular. Images were acquired every 10°, 36 images in total. Three approaches to the estimation of the parameters a, b and the essential matrices F were used. The first approach used all correspondences, and the essential matrix F was computed for every pair independently from the a, b estimated for the given pair, see Figure 10a. The second approach estimates one ā and one b̄ as the medians of all the a's and b's computed for every consecutive pair of images in the whole sequence; the matrices F were then computed for each pair using the same ā, b̄, see Figure 10b. The third approach differs from the second one in that a 9-point RANSAC pre-test to detect most of the outliers, followed by a 15-point RANSAC, was performed when computing the parameters a, b for every pair, see Figure 10c.

Figure 10: Motion estimation for the circle sequence. Red depicts the starting position, green the end position. (a) The essential matrix F is computed from the actual estimate of a and b for each pair. (b) F is computed from ā and b̄ determined from the whole sequence. (c) F is computed from ā and b̄ determined from the whole sequence using RANSAC for detecting outliers.

Figure 11: Side motion. The Nikon FC-E8 fish-eye converter with a COOLPIX digital camera was used. (a) On the left hand side a diagram of the camera motion is depicted, and on the right hand side a picture of the real setup is shown; below the diagram the estimated trajectory is shown. (b) The angular error between the direction of motion and the optical axis for each pair, and the 3σ circle.

The next experiment calibrates the omnidirectional camera from a translation in the direction perpendicular to its optical axis, see Figure 11. The estimated trajectory is shown in Figure 11a. The angular differences between the estimated and true motion directions for every pair are depicted in Figure 11b. The average angular error is 0.4°.
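The trajectories above are obtained by decomposing each estimated essential matrix into a relative rotation and a translation direction. The standard SVD-based decomposition [7] can be sketched as follows (our code; the choice among the four candidates is made by checking that triangulated points lie in the viewing direction of both cameras):

```python
import numpy as np

def decompose_essential(E):
    """SVD-based decomposition of an essential matrix into the four
    (R, t) candidates (standard method, cf. [7]); t is a unit vector,
    defined up to scale, hence the assumed unit translation length."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:
        U = -U          # enforce proper rotations (E is defined up to sign)
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
```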
The next experiment shows the calibration from a general planar motion. Figure 12 shows a mobile tripod with an omnidirectional camera, and the correctly estimated, real U-shaped trajectory with right angles.

Figure 12: General motion of the Nikon FC-E8 fish-eye converter with a COOLPIX digital camera. (a) Setup of the experiment. (b) A mobile tripod with the camera. (c) The correctly estimated trajectory.

The last experiment, see Figure 13, applies our model and the model introduced in [6] to an omnidirectional image. It can be seen that the model [6] does not sufficiently capture the lens nonlinearity.

6 Conclusion

The paper presented a robust simultaneous estimation of an omnidirectional camera model and epipolar geometry. As the main contribution, the paper shows how the points should be sampled in RANSAC to avoid degenerate configurations and biased estimates.
Figure 13: Comparison of two camera models applied to an omnidirectional image acquired by the Nikon FC-E8 fish-eye converter. (a) Part of the omnidirectional image is linearized and projected onto a plane; the input image corresponds to a 183° angle of view, and the red dashed circle represents an image with a 160° angle of view. (b) The camera model from [6] is used. (c) Notice that not all lines are straight and parallel: the model does not sufficiently capture the lens nonlinearity. (d) Our proposed model is used. (e) Notice that with our model all lines are straight and parallel.

It was shown that the points near the center of the view field circle can be discarded and the final model computed only from points near the boundary of the view field circle. The suggested technique allows an omnidirectional camera model to be incorporated into a 9-point RANSAC followed by a 15-point RANSAC for camera model and essential matrix estimation and outlier detection. Real experiments suggest that our method is useful for structure from motion, with sufficient accuracy as a starting point for bundle adjustment.

Acknowledgement

This research was supported by the following projects: CTU //97, MSM 33, MŠMT 953, GAČR KONTAKT -3-4, and BeNoGo IST.

References

[1] 2d3 Ltd. Boujou.

[2] Z. Bai, J. Demmel, J. Dongarra, A. Ruhe, and H. van der Vorst, editors. Templates for the Solution of Algebraic Eigenvalue Problems: A Practical Guide. SIAM, Philadelphia, 2000.

[3] H. Bakstein and T. Pajdla. Panoramic mosaicing with a 180° field of view lens. In Proc. of the IEEE Workshop on Omnidirectional Vision, pages 60–67, 2002.

[4] C. Bräuer-Burchardt and K. Voss. A new algorithm to correct fish-eye- and strong wide-angle-lens-distortion from single images. In Proc. ICIP, 2001.

[5] H. Farid and A. C. Popescu. Blind removal of image non-linearities. In Proc. ICCV, volume 1, pages 76–81, 2001.

[6] A. Fitzgibbon. Simultaneous linear estimation of multiple view geometry and lens distortion. In Proc. CVPR, 2001.

[7] R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, Cambridge, UK, 2000.

[8] R. I. Hartley and P. Sturm. Triangulation. Computer Vision and Image Understanding, 68(2):146–157, 1997.

[9] J. Kumler and M. Bauer. Fisheye lens designs and their relative performance. fisheyep.pdf.

[10] J. Matas, O. Chum, M. Urban, and T. Pajdla. Robust wide baseline stereo from maximally stable extremal regions. In P. L. Rosin and D. Marshall, editors, Proc. of the British Machine Vision Conference, volume 1, pages 384–393, UK, September 2002. BMVA.

[11] B. Mičušík and T. Pajdla. Estimation of omnidirectional camera model from epipolar geometry. Research Report CTU–CMP, Center for Machine Perception, K333 FEE Czech Technical University, Prague, Czech Republic, June 2002.

[12] J. Oliensis. Exact two-image structure from motion. PAMI, 2002.

[13] S. Shah and J. K. Aggarwal. Intrinsic parameter calibration procedure for a (high-distortion) fish-eye lens camera with distortion model and accuracy estimation. Pattern Recognition, 29(11):1775–1788, November 1996.

[14] G. P. Stein. Lens distortion calibration using point correspondences. In Proc. CVPR, 1997.

[15] T. Svoboda and T. Pajdla. Epipolar geometry for central catadioptric cameras. International Journal of Computer Vision, 49(1):23–37, August 2002.

[16] R. Swaminathan and S. K. Nayar. Nonmetric calibration of wide-angle lenses and polycameras. PAMI, 22(10):1172–1183, 2000.

[17] F. Tisseur and K. Meerbergen. The quadratic eigenvalue problem. SIAM Review, 43(2):235–286, 2001.
[18] Y. Xiong and K. Turkowski. Creating image-based VR using a self-calibrating fisheye lens. In Proc. CVPR, pages 237–243, 1997.

[19] Z. Zhang. On the epipolar geometry between two images with lens distortion. In Proc. ICPR, pages 407–411, 1996.

[20] Z. Zhang, R. Deriche, O. Faugeras, and Q.-T. Luong. A robust technique for matching two uncalibrated images through the recovery of the unknown epipolar geometry. Artificial Intelligence, 78(1-2):87–119, 1995.