Using Pedestrians Walking on Uneven Terrains for Camera Calibration


Machine Vision and Applications manuscript No. (will be inserted by the editor)

Imran N. Junejo
Department of Computer Science, University of Sharjah, P.O. Box 27272, Sharjah, U.A.E.

Received: date / Revised version: date

Abstract A calibrated camera is essential for computer vision systems, the prime reason being that such a camera acts as an angle-measuring device. Once the camera is calibrated, applications such as 3D reconstruction, metrology, or other applications requiring real-world information from video sequences can be envisioned. Motivated by this, we address the problem of calibrating multiple cameras with an overlapping field of view (FoV) that observe pedestrians walking on an uneven terrain. This problem of calibration on an uneven terrain has so far not been addressed in the vision community. We automatically estimate vertical and horizontal vanishing points by observing pedestrians in each camera, and use the corresponding vanishing points to estimate the infinite homography existing between the different cameras. This homography provides constraints on the intrinsic (or interior) camera parameters while also enabling us to estimate the extrinsic (or exterior) camera parameters. We test the proposed method on real as well as synthetic data, in addition to a motion capture dataset, and compare our results with the state of the art.

1 Introduction

Due to an exponential increase in the computational power of present-day computers, real-time application of computer vision algorithms has not only become possible, but has also acquired a great deal of interest from governments, commercial companies, security agencies, and even the general public. The area that has attracted the most attention is camera surveillance. Most video surveillance systems involve monitoring people (or pedestrians) in a scene.
The system can monitor, for instance, a building entrance, an airport lobby, stairways, or mall escalators, using stationary or rotating cameras. The goal of such a system can be to model the behavior of objects (e.g. cars or pedestrians, depending on the situation), to reconstruct events, or to recognize actions. In this paper, we present a novel method to auto-calibrate each camera in a system of multiple cameras monitoring a particular area of interest by observing only the pedestrians in the scene.

Camera calibration has now become an essential step for any meaningful computer vision based system. For a surveillance system, it is known that, due to the effects of perspective projection, measurements made from the images do not represent metric data. This is evident from a simple observation: objects grow larger and move faster as they approach the camera center, and two objects moving in parallel directions seem to converge at a point in the image. The projective camera thus makes it difficult to characterize objects (in terms of their sizes, motion characteristics, length ratios, and so on) unless more information is available about the camera being used. Another application for such a surveillance system is making measurements in the image plane, i.e. metrology [7]. Measurements like a person's height or true walking speed can be estimated easily once the camera is calibrated [3].

In terms of camera auto-calibration (or self-calibration), these techniques can be classified into two categories:

Reference object-based calibration: A traditional approach to camera calibration is performed by observing a calibration object, typically a calibration rig, whose 3D geometry in space is known in advance with very good precision. The basic idea is to establish correspondences between known points in the world coordinate frame and their respective projections onto the image plane, so that both the internal and the external geometry of the imaging system can be determined (cf. Tsai [24]). Recently, some research has focused on using a planar pattern for accurate calibration as well [22,25,26]. The technique requires an elaborate setup and expensive calibration apparatus.

Auto-calibration: Self-calibration, or auto-calibration, refers to camera calibration from uncalibrated images. Techniques in this category do not require any calibration object and need only image point correspondences. Just by moving a camera in a static scene, the rigidity of the scene is used to provide constraints on the internal parameters. Since the seminal work of Faugeras et al. [8], auto-calibration has appealed to the vision community due to its simplicity and ease of use [9,23]. Reviews and surveys of these methods can be found in [11].

More classically, auto-calibration may be performed from sets of vanishing points corresponding to directions that are orthogonal in the scene. Caprile and Torre [5] describe a method that requires three vanishing points corresponding to orthogonal directions for calibrating a camera with known aspect ratio and skew. Objects in the real world typically contain orthogonal directions, for example buildings, giving this method an edge over the object-based calibration approach. Other works on auto-calibration under various configurations include [6], [17], and [4]. The proposed method belongs to this category of solutions.

In this paper, we propose a novel solution to the problem of camera calibration when the pedestrians are walking on an uneven terrain. The setup includes multiple cameras looking at the area of interest. We do not restrict pedestrians to walk in a certain manner or with a constant velocity. See Fig. 1 for

an example of the scenario. The detected top (head) and bottom (feet) locations of a person, over at least two instances, are used to estimate the vanishing points in each view. These vanishing points are then used to estimate the infinite homography between the cameras. We estimate three camera parameters, i.e. the focal length (f) and the principal point (u_o, v_o). The noise in the data points is minimized by using all the vanishing points obtained from all detected people over all frames. We demonstrate results on synthetic as well as real data.

The rest of the paper is organized as follows: Section 2 lays down the founding principles of the proposed method. The method for estimating the vanishing points by observing pedestrians, and for estimating the camera parameters, is detailed in Section 3. We rigorously test the proposed method on synthetic data, a motion capture dataset, and two real sequences, as described in Section 4, before concluding.

1.1 Related Work

We discuss the related work on camera auto-calibration by observing pedestrians. The first work to deal with camera calibration from pedestrians was that of Lv et al. [18]. They recover the vertical vanishing point by observing the scene as well as walking people. The horizon line is estimated by observing human motion in different directions. However, their formulation does not handle robustness issues. Similarly, Krahnstoever and Mendonça [15,16] proposed a Bayesian approach for auto-calibration by observing pedestrians. A foot-to-head homology is decomposed to extract the vanishing point and the horizon line for calibration. They also incorporate measurement uncertainties and outlier models. However, their method requires prior knowledge about some unknown calibration parameters and about the location of people, and their algorithm is also non-linear. Recently, Junejo and Foroosh [13,14] presented two methods for camera calibration from pedestrians. They decompose the fundamental matrix induced by different instances of a walking pedestrian to impose linear constraints on the camera intrinsics [13]. However, all of these methods apply only to people walking on a smooth horizontal surface.

Often in real-world scenarios, people do not walk on flat surfaces but rather on uneven terrains: for example, people on escalators at airports or shopping malls, people climbing stairs, or people walking from a road onto a sidewalk or onto grass. Some researchers have recently addressed the problem of calibration from observing pedestrians walking on a flat surface (i.e. the ground plane), but they fail to handle this more general behavior of the objects. In contrast, we do not restrict pedestrian movements to any particular direction, nor do we require a constant velocity.

2 Background

The projection of a 3D scene point X = [X Y Z 1]^T onto a point x = [x y 1]^T in the image plane, for a perspective camera, can be modeled by the central projection equation:

x ≃ K [R | −RC̃] X,  with  K = [λf γ u_o; 0 f v_o; 0 0 1],   (1)

where ≃ indicates equality up to a non-zero scale factor and C̃ = [C_x C_y C_z]^T represents the camera center. Here R = [r_1 r_2 r_3] is the rotation matrix and −RC̃ is the relative translation between the world origin and the camera center. The upper-triangular 3×3 matrix K encodes the five intrinsic camera parameters: the focal length f, the aspect ratio λ, the skew γ, and the principal point (u_o, v_o). As argued by [2,20], it is safe to assume λ = 1 and γ = 0.

Fig. 1 Setup: top-bottom locations of two different instances of a walking pedestrian provide a vanishing point in each observing camera. There also exists a homography H_{i,j} that maps vanishing points between the different cameras.

Images of a family of parallel lines pass through a common point in the image, referred to as the vanishing point. Since the proposed method uses only two vanishing points, without loss of generality we refer to them as v for the vertical direction and v′ for the x-direction. Writing P = [p_1 p_2 p_3 p_4], these vanishing points are given as v′ ≃ p_1 and v ≃ p_3. Moreover, two families of parallel lines in mutually orthogonal directions intersect on a line called the horizon line, given by l = p_1 × p_2.

The aim of camera calibration is to determine the calibration matrix K. Instead of determining K directly, it is common practice [11] to compute the symmetric matrix ω = K^{−T} K^{−1}, referred to as the Image of the Absolute Conic (IAC). ω is then decomposed uniquely using the Cholesky decomposition [21] to obtain K.

3 Method

The scenario that we address in this work is one where multiple cameras, having an overlapping FoV, observe pedestrians walking in an area of interest, indoors or outdoors. Although the method applies to any number n of cameras, for the sake of simplicity we consider the case of two cameras i and j from here on.
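As a concrete illustration of the background relations above, the following minimal NumPy sketch builds the projection model of (1) and verifies the IAC relationship ω = K^{−T}K^{−1} and its Cholesky inversion. All numerical values are hypothetical, chosen only for the demonstration.

```python
import numpy as np

# Intrinsics with unit aspect ratio and zero skew, as assumed in the paper.
f, uo, vo = 1000.0, 320.0, 240.0
K = np.array([[f, 0, uo],
              [0, f, vo],
              [0, 0, 1.0]])

# Extrinsics: a rotation about the y-axis and a camera centre C (hypothetical).
theta = np.deg2rad(15)
R = np.array([[ np.cos(theta), 0, np.sin(theta)],
              [ 0,             1, 0            ],
              [-np.sin(theta), 0, np.cos(theta)]])
C = np.array([2.0, 0.5, -10.0])

# Projection matrix P = K [R | -RC] of equation (1).
P = K @ np.hstack([R, (-R @ C).reshape(3, 1)])

# Project a homogeneous 3D point and dehomogenise.
X = np.array([1.0, 1.8, 3.0, 1.0])
x = P @ X
x = x / x[2]

# IAC: omega = K^{-T} K^{-1}; K is recovered from omega by Cholesky.
Kinv = np.linalg.inv(K)
omega = Kinv.T @ Kinv
L = np.linalg.cholesky(omega)       # omega = L L^T, L lower triangular
K_rec = np.linalg.inv(L.T)          # K^{-1} = L^T  =>  K = inv(L^T)
K_rec = K_rec / K_rec[2, 2]
print(np.allclose(K_rec, K))        # True
```

Since ω = (K^{−1})^T K^{−1} and K^{−1} is upper triangular with positive diagonal, the lower-triangular Cholesky factor of ω is exactly (K^{−1})^T, which is why inverting its transpose recovers K uniquely.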
The various steps involved in the proposed method are: (1) foreground object extraction, (2) estimation of the top and bottom locations, (3) estimation of the vanishing points obtained from observing a pedestrian, (4) estimation of the infinite homography between the different cameras, and (5) camera calibration, i.e. estimating the intrinsic and extrinsic camera parameters.

A pedestrian needs to be detected and tracked in the video sequence. Note that we are not solving the background subtraction and tracking problem, and therefore use one of the well-known methods for performing this task [12].

3.1 Estimating Top and Bottom Locations

We do require that the top and bottom locations be correctly detected for the tracked pedestrians. In this regard, we adapt the approach proposed by [13] and [15]. Whereas Lv et al. [18] perform an eigendecomposition of the detected blob to extract the top and bottom locations, we calculate the center of mass and the second-order moment of the lower and upper portions of the bounding box of the foreground region to detect these points for each instance of a walking person seen from each camera (cf. Fig. 1). For a camera i at time instance t, we denote the detected top point of a pedestrian p as T^p_{i,t} and the bottom point as B^p_{i,t}.

3.2 Estimating the Infinite Homography

Under projective transformation, parallel lines in the world intersect at a point in the image plane, called the vanishing point. Consider a pedestrian of height h observed at different time instances in a camera, and assume that h does not change between instances. If we draw a line between the head (top) positions of the pedestrian at two time instances, then this line is parallel to the line obtained by joining the feet (bottom) positions at the same two time instances. This is shown in Fig. 1. The projection of the intersection of these parallel lines in camera i is the vanishing point

v′^p_{i,t} = (T^p_{i,t} × T^p_{i,t+1}) × (B^p_{i,t} × B^p_{i,t+1}),

where t = 1, 2, ... denotes time, i denotes camera i, and p = 1, 2, ..., k denotes the label of a person in the scene. Similarly, at any two time instances during the walk of a person, the line joining the head to the feet at time instance t should be parallel to the line joining the head to the feet at time instance t + 1. These lines intersect at another vanishing point,

v^p_{i,t} = (T^p_{i,t} × B^p_{i,t}) × (T^p_{i,t+1} × B^p_{i,t+1}).

All these vanishing points lie on the plane at infinity.

In the situation where people walk only on a flat surface, as in [13,15] or [18], the obtained vanishing points v and v′ are orthogonal to each other and satisfy the pole-polar relationship with respect to ω. However, this is not the case when the pedestrians walk on an uneven terrain, and hence we cannot use these methods to obtain constraints on the unknown parameters. We overcome this problem by estimating the infinite homography between the different cameras viewing the scene.

Consider two cameras i and j observing walking pedestrians. The mapping of points from camera i to camera j over the plane at infinity π∞ is given by

H_{i,j} = K_j R_{i,j} K_i^{−1},   (2)

where R_{i,j} is the relative rotation between the cameras, and H_{i,j}, the infinite homography, maps points lying on the plane at infinity in camera i to their corresponding points lying on the plane at infinity in camera j. In practice, H_{i,j} is often estimated between two images by matching corresponding points. Since H_{i,j} has eight degrees of freedom, a minimum of four point correspondences are necessary to compute it [11]. However, for a robust solution to H_{i,j}, we use all the estimated vanishing points extracted from the observed pedestrians. Once these vanishing points are estimated, they satisfy

[v^p_{j,t} v′^p_{j,t} ...] = H_{i,j} [v^p_{i,t} v′^p_{i,t} ...],   (3)

which relates the vanishing points obtained in camera i to their corresponding vanishing points in camera j. Once a sufficient number of such vanishing points are obtained, H_{i,j} is estimated by the DLT (Direct Linear Transform) algorithm [11].

3.3 Camera Calibration

Using the property R_{i,j}^{−1} = R_{i,j}^T, we transform (2) to

ω_j = H_{i,j}^{−T} ω_i H_{i,j}^{−1},   (4)

where ω_j is the IAC for camera j, having the form

ω_j ≃ (1/τ) [1 0 −u_o; ∗ λ² −λ²v_o; ∗ ∗ τ],   (5)

where ∗ represents duplicate symmetric values, τ = λ²f² + λ²v_o² + u_o², and f is the focal length of camera j. Due to the symmetry of ω_j, (4) can simply be represented by

[ω_{1,1} ω_{1,2} ω_{1,3}; ∗ ω_{2,2} ω_{2,3}; ∗ ∗ 1]_j = [c_{1,1} c_{1,2} c_{1,3}; ∗ c_{2,2} c_{2,3}; ∗ ∗ 1]_i,   (6)

where ∗ again indicates symmetric values, and the c_{a,b} contain the unknown parameters of the right-hand side. As argued by [2,13,15,18,20], it is safe to assume that the aspect ratio λ = 1 and the skew γ = 0. This gives two linear constraints on the unknown parameters:

c_{1,2} = 0,   (7)
c_{1,1} = c_{2,2},   (8)

corresponding to ω_{1,2} = 0 (from γ = 0) and ω_{1,1} = ω_{2,2} (from λ = 1), respectively. Using (7) and (8), we express ω_{1,3} and ω_{2,3} in terms of ω_{1,1}. Here we introduce a cost function on the algebraic distance of the principal point from the center of the image (I_x, I_y), which gives an extra weak constraint on ω:

[u_o v_o] = arg min (ω_{1,3}/ω_{1,1} + I_x)² + (ω_{2,3}/ω_{2,2} + I_y)².   (9)

We solve this by applying the Levenberg-Marquardt algorithm [21]. By substituting ω_{1,3} and ω_{2,3} obtained from (7) and (8) into (9) and minimizing it, ω_{1,1} can be estimated, which in turn determines ω_{1,3} and ω_{2,3} (note that ω_{1,1} = ω_{2,2}).
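To make the vanishing-point construction and the DLT estimation of Section 3.2 concrete, here is a minimal NumPy sketch with hypothetical point values; the point normalization of [10] is omitted for brevity, and the sanity check uses a known synthetic homography rather than real pedestrian data.

```python
import numpy as np

def vanishing_point(T1, T2, B1, B2):
    """Intersect the line through two head points with the line through
    the corresponding feet points (all homogeneous 3-vectors)."""
    head_line = np.cross(T1, T2)
    feet_line = np.cross(B1, B2)
    v = np.cross(head_line, feet_line)
    return v if abs(v[2]) < 1e-12 else v / v[2]  # keep points at infinity as-is

def dlt_homography(src, dst):
    """Direct Linear Transform: H such that dst ~ H @ src,
    from >= 4 homogeneous point correspondences."""
    A = []
    for x, xp in zip(src, dst):
        A.append(np.concatenate([np.zeros(3), -xp[2] * x, xp[1] * x]))
        A.append(np.concatenate([xp[2] * x, np.zeros(3), -xp[0] * x]))
    _, _, Vt = np.linalg.svd(np.asarray(A))      # null vector = last row of Vt
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Sanity check with a known infinite homography H = K R K^{-1}.
K = np.array([[1000.0, 0, 320], [0, 1000.0, 240], [0, 0, 1]])
th = np.deg2rad(10)
R = np.array([[np.cos(th), -np.sin(th), 0],
              [np.sin(th),  np.cos(th), 0],
              [0, 0, 1.0]])
H_true = K @ R @ np.linalg.inv(K)
src = [np.array([x, y, 1.0]) for x, y in
       [(10, 20), (300, 40), (150, 400), (600, 480), (50, 300)]]
dst = [(H_true @ p) / (H_true @ p)[2] for p in src]
H_est = dlt_homography(src, dst)
print(np.allclose(H_est, H_true))  # True for noise-free correspondences
```

In the method above, the `src`/`dst` lists would be populated with the vanishing points v^p_{i,t} and v′^p_{i,t} from the two cameras rather than arbitrary image points.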
The relative rotation between the two cameras is obtained as R_{i,j} = K_j^{−1} H_{i,j} K_i. Given the two vanishing points v and v′ from each view of a single camera, the absolute rotation of camera i (i.e. its extrinsic parameters) with respect to a common world coordinate system can be computed as

r_3 = ± K_i^{−1} v / ‖K_i^{−1} v‖,  r_1 = ± K_i^{−1} v′ / ‖K_i^{−1} v′‖,  r_2 = (r_3 × r_1) / ‖r_3 × r_1‖,   (10)

where r_1, r_2 and r_3 represent the three columns of the rotation matrix. The sign ambiguity can be resolved by the cheirality constraint [11] or by known world information, such as the maximum rotation possible for the camera.

4 Results

In this section, we show an extensive set of evaluations on both synthetic and real data to test the proposed solution and compare it with the state of the art. To simulate multiple and controlled view settings, in addition to the real datasets, we have used 3D motion capture data from the CMU dataset (mocap.cs.cmu.edu).

Fig. 2 Performance of the auto-calibration method vs. noise level in pixels. (a) and (b) show the relative error of the estimated focal length for camera 1 and camera 2, respectively. (c) depicts the error in computing the principal point (u_o, v_o). The extrinsic parameters are also computed, and (d) depicts the averaged absolute error per noise level for the rotation angles R_x, R_y, R_z. (e) shows the Frobenius norm between the estimated relative orientations and the ground-truth relative orientations.

4.1 Synthetic Data

We performed detailed experiments on the effect of noise on the estimation of the camera parameters. In order to perform this simulation, we randomly generated 50 vertical synthetic objects in a 3D synthetic world. These vertical objects were generated on an uneven terrain, i.e. care was taken so that the z-axis values are not the same but are all randomly generated. Once these objects are constructed, we project the 3D data onto two views and compute the unknown camera parameters. For the first camera, we set the principal point to (u_o, v_o) = (320, 240) for an image size of 640 × 480, the focal length to f_1 = 1000, the skew to γ = 0, and the aspect ratio to λ = 1.
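A simplified version of this synthetic setup can be sketched as follows. The placement values and the choice of the y-axis as the vertical direction are assumptions made for the sketch, not the authors' exact configuration; the Gaussian perturbation mirrors the noise model described below.

```python
import numpy as np

rng = np.random.default_rng(0)

# 50 vertical feet-to-head segments on an uneven terrain. In this sketch
# the y-axis is vertical: each foot gets a random ground elevation, so the
# feet do not lie on a common plane.
n, h = 50, 1.8                                   # number of objects, height
x = rng.uniform(-5.0, 5.0, n)                    # lateral position
z = rng.uniform(10.0, 30.0, n)                   # depth from the camera
elev = rng.uniform(0.0, 1.0, n)                  # uneven ground elevation
feet = np.column_stack([x, elev, z])
heads = feet + np.array([0.0, h, 0.0])           # vertical offset of height h

# Camera at the origin looking down the z-axis (hypothetical parameters).
K = np.array([[1000.0, 0, 320], [0, 1000.0, 240], [0, 0, 1]])
P = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # R = I, C = 0

def project(P, X):
    """Project Nx3 world points with a 3x4 camera matrix; return Nx2 pixels."""
    Xh = np.column_stack([X, np.ones(len(X))])
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:3]

# Noise-free projections, then zero-mean Gaussian pixel noise.
sigma = 1.0                                      # noise level in pixels
feet_px = project(P, feet) + rng.normal(0.0, sigma, (n, 2))
heads_px = project(P, heads) + rng.normal(0.0, sigma, (n, 2))
```

These noisy head/feet pixel pairs are what the vanishing-point stage of Section 3 consumes; repeating the perturbation over many trials per noise level reproduces the style of evaluation reported in Fig. 2.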
We then rotate this camera to a different location around the vertical objects, with relative rotation angles (R_x, R_y, R_z) = (20°, 32°, 82°). The focal length of the second camera was set to f_2 = …

In order to simulate practical situations, where image points are generally affected by noise in the camera, tracking errors, or image sampling, we gradually add Gaussian noise with μ = 0 and σ ≤ 1.5 pixels to the data points making up the vertical lines; 1000 trials per noise level were then performed. Taking two vertical lines at a time, the vertical vanishing point v is obtained by intersecting the two lines that are obtained by joining the head and bottom points of the observations, as described in Section 3. Similarly, v′ is recovered by intersecting the two lines obtained by joining the head locations and the bottom locations of the two observations, respectively. Once enough vanishing points are obtained (at least 4), the infinite homography is estimated by the direct linear transformation algorithm [11]. The image points are first normalized so that their centroid is at the origin and their mean distance from the origin is √2 [10].

The error in the estimated calibration parameters is shown in Fig. 2. The y-axis of Fig. 2(a)-(c) depicts the relative error (in percent) with respect to the focal length, as argued by [23,25]. For a maximum noise level of 1.5 pixels [25], the error rate for f_1 and f_2 is less than 1%. Similarly, the error rate for the estimated principal point is found to be less than 0.4%. The curves show that the relative error increases linearly as a function of increasing noise. Note that the downward trend in the error curves is due to minimizing (9) in the vicinity of the image center. In addition to the intrinsic camera parameters, the relative orientation angles are also estimated, as shown in Fig. 2(d): the red curve shows the absolute error in the estimated relative rotation angle around the x-axis, the green curve the error around the y-axis, and the blue curve the error around the z-axis. In order to measure the accuracy of the computed relative rotation matrix between the different viewpoints of the cameras, we compute the Frobenius norm ‖R_true − R_est‖_F, shown in Fig. 2(e). As can be seen in the figure, this error also increases linearly.

4.2 Motion Capture (mocap) Data

To simulate multiple and controlled view settings, we have used 3D motion capture data from the CMU dataset. Trajectories of three points on the body are projected onto two different views with pre-defined orientations w.r.t. the human body. The intrinsic parameters of the synthetic camera are set to f = 1000, λ = 1, γ = 0, and (u_o, v_o) = (320, 240). The setup for this process is shown in Fig. 3(a). One point is the head, and the other two points are the left and the right foot. From these settings, we obtain the top and bottom points required by our method as follows: we take one observation (head, left foot) when the left foot is in line with the head of the person, and another observation (head, right foot) when the right foot is in line with the head location [19]. All these corresponding points are used collectively in the estimation of the camera parameters.

Fig. 3 Experiments with motion capture data: (a) shows our setup for the experiment, with two synthetic cameras looking at a person walking on an uneven surface. (b)-(e) show different instances from the sequence. (f)-(i) show the synthetic reconstruction of the sequence.

Some sample images from the dataset are shown in Fig. 3(b)-(e), and their synthetic reconstruction is shown in Fig. 3(f)-(i). In the sequence, a person is walking on an uneven surface made from cardboard stacked on the ground. Using the proposed method, the estimated parameters are f = … and (u_o, v_o) = (336.06, …). Even though we do get the exact locations of the head and the feet (as this is mocap data), error arises due to the movement of the head and, at some instances, the non-verticality of the body, as can be seen in Fig. 3(d)-(e). However, even in the presence of such noise, the error is within acceptable limits and very close to the ground-truth values mentioned above.

4.3 Real Data

We test on two sequences of real data. The first dataset was captured with cameras having an image resolution of … We employed two such cameras looking at a scene with a number of people walking on an uneven/slanted ground plane. Fig. 4 shows two synchronized instances from the dataset, which consists of more than 6000 frames; each column of the figure represents one camera. Walking people are extracted by applying the background subtraction method, and the head/feet locations are determined; the top and the bottom points are marked with red circles in the figure. The homography was estimated after computing the vanishing points from the moving pedestrians, as described in Section 3. The extracted intrinsic parameters for camera one are K_1 = …, whereas the focal length of the second camera is found to be f_2 = … Note that the estimated principal point is very close to the center of the image. Also note that f_1 and f_2 are very close to each other, indicating, qualitatively, the correctness of the estimated parameters.

Fig. 4 Instances from the real data sequence: each column represents a unique camera while a number of people walk on an uneven surface. The head/bottom points are estimated from the background subtraction method. See the text for more details.

Another test sequence was obtained from the ETISEO dataset [1]. We test on the sequence ETI-VS2-BE-19. The dataset contains multiple cameras installed at the entrance of a building; some samples from this dataset are shown in Fig. 5, where each row represents a different view of the scene. In the sequence shown, a person comes out of the building holding a package in his hand, descends the stairs, and then continues walking on the road. In addition, a car arrives and parks in the parking lot, and the driver walks out. In summary, this is a very interesting and relevant sequence. The dataset also provides camera parameters estimated by applying the classic camera calibration method of [24]. In comparison to their computed parameters, f = … and (u_o, v_o) = (376.05, …), we obtained f = … and (u_o, v_o) = (388, 284). The results obtained from our method are very close to the given solution.

5 Conclusion

Real-world scenarios involve people walking on uneven terrains. We propose a novel method to obtain the intrinsic and extrinsic camera parameters for such scenarios, when multiple cameras are looking at an area of interest. To the best of our knowledge, no prior work deals with camera calibration for this specific scenario. Thus, this method can be very useful for many of the existing multi-camera video surveillance systems that observe people indoors or outdoors. In the proposed method, tracking each walking pedestrian between frames enables us to obtain the horizontal and vertical vanishing points in each camera view. These corresponding vanishing points are then used to estimate the infinite homography that exists between any two cameras viewing the scene. We present novel constraints to solve for three of the intrinsic as well as the extrinsic camera parameters. We put the proposed method to rigorous tests on synthetic data in the presence of considerable noise, show results and experimentation on a motion capture dataset, and also test the method on two real data sequences of multiple cameras observing pedestrians, comparing our results with a state-of-the-art calibration method. The encouraging results demonstrate the practicality and utility of the proposed method.

Fig. 5 ETISEO dataset: a person walks out of the building, descends the stairs, and continues walking on the road. The sequence is viewed from multiple views, and each of the two rows shows a different view of the scene. A tracked person is shown by a blue bounding rectangle.

References

1. Video understanding evaluation project: ETISEO.
2. L. D. Agapito, E. Hayman, and I. Reid. Self-calibration of rotating and zooming cameras. Int. J. Comput. Vision, 45(2), 2001.
3. X. Cao and H. Foroosh. Metrology from vertical objects. In Proc. of BMVC.
4. X. Cao and H. Foroosh. Simple calibration without metric information using an isosceles trapezoid. In Proc. ICPR.
5. B. Caprile and V. Torre. Using vanishing points for camera calibration. Int. J. Comput. Vision, 4(2), 1990.
6. R. Cipolla, T. Drummond, and D. Robertson. Camera calibration from vanishing points in images of architectural scenes. In Proc. of BMVC, 1999.
7. A. Criminisi, I. Reid, and A. Zisserman. Single view metrology. Int. J. Comput. Vision, 40(2), 2000.
8. O. Faugeras, T. Luong, and S. Maybank. Camera self-calibration: theory and experiments. In Proc. of ECCV, 1992.
9. R. I. Hartley. Self-calibration from multiple views with a rotating camera. In Proc. ECCV, 1994.
10. R. I. Hartley. In defense of the eight-point algorithm. IEEE Trans. Pattern Anal. Mach. Intell., 19(6), 1997.
11. R. I. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, second edition, 2004.
12. O. Javed and M. Shah. Tracking and object classification for automated surveillance. In Proc. ECCV, 2002.
13. I. Junejo and H. Foroosh. Trajectory rectification and path modeling for video surveillance. In Proc. ICCV, 2007.
14. I. Junejo and H. Foroosh. Euclidean path modeling for video surveillance. Elsevier Journal of Image and Vision Computing (IVC), 26(4), 2008.
15. N. Krahnstoever and P. R. S. Mendonça. Bayesian autocalibration for surveillance. In Proc. ICCV, 2005.
16. N. Krahnstoever and P. R. S. Mendonça. Autocalibration from tracks of walking people. In Proc. of BMVC, 2006.
17. D. Liebowitz and A. Zisserman. Combining scene and auto-calibration constraints. In Proc. ICCV, 1999.
18. F. Lv, T. Zhao, and R. Nevatia. Self-calibration of a camera from video of a walking human. In Proc. ICPR, 2002.
19. F. Lv, T. Zhao, and R. Nevatia. Camera calibration from video of a walking human. IEEE Trans. Pattern Anal. Mach. Intell., 28(9), 2006.
20. M. Pollefeys, R. Koch, and L. Van Gool. Self-calibration and metric reconstruction in spite of varying and unknown internal camera parameters. Int. J. Comput. Vision, 32(1):7-25, 1999.
21. W. Press, B. Flannery, S. Teukolsky, and W. Vetterling. Numerical Recipes in C. Cambridge University Press, 1992.
22. P. Sturm. Critical motion sequences for the self-calibration of cameras and stereo systems with variable focal length. In Proc. of BMVC, Nottingham, England, pages 63-72, Sep. 1999.
23. B. Triggs. Autocalibration from planar scenes. In Proc. ECCV, 1998.
24. R. Tsai. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J. of Robotics and Automation, 3(4), 1987.
25. Z. Zhang. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell., 22(11), 2000.
26. Z. Zhang. Camera calibration with one-dimensional objects. IEEE Trans. Pattern Anal. Mach. Intell., 26(7), 2004.


More information

Plane-based Calibration Algorithm for Multi-camera Systems via Factorization of Homography Matrices

Plane-based Calibration Algorithm for Multi-camera Systems via Factorization of Homography Matrices Plane-based Calibration Algorithm for Multi-camera Systems via Factorization of Homography Matrices Toshio Ueshiba Fumiaki Tomita National Institute of Advanced Industrial Science and Technology (AIST)

More information

Euclidean Reconstruction Independent on Camera Intrinsic Parameters

Euclidean Reconstruction Independent on Camera Intrinsic Parameters Euclidean Reconstruction Independent on Camera Intrinsic Parameters Ezio MALIS I.N.R.I.A. Sophia-Antipolis, FRANCE Adrien BARTOLI INRIA Rhone-Alpes, FRANCE Abstract bundle adjustment techniques for Euclidean

More information

calibrated coordinates Linear transformation pixel coordinates

calibrated coordinates Linear transformation pixel coordinates 1 calibrated coordinates Linear transformation pixel coordinates 2 Calibration with a rig Uncalibrated epipolar geometry Ambiguities in image formation Stratified reconstruction Autocalibration with partial

More information

Compositing a bird's eye view mosaic

Compositing a bird's eye view mosaic Compositing a bird's eye view mosaic Robert Laganiere School of Information Technology and Engineering University of Ottawa Ottawa, Ont KN 6N Abstract This paper describes a method that allows the composition

More information

Camera Calibration and 3D Reconstruction from Single Images Using Parallelepipeds

Camera Calibration and 3D Reconstruction from Single Images Using Parallelepipeds Camera Calibration and 3D Reconstruction from Single Images Using Parallelepipeds Marta Wilczkowiak Edmond Boyer Peter Sturm Movi Gravir Inria Rhône-Alpes, 655 Avenue de l Europe, 3833 Montbonnot, France

More information

Ground Plane Rectification by Tracking Moving Objects

Ground Plane Rectification by Tracking Moving Objects Ground Plane Rectification by Tracking Moving Objects Biswajit Bose and Eric Grimson Computer Science and Artificial Intelligence Laboratory Massachusetts Institute of Technology Cambridge, MA 2139, USA

More information

Coplanar circles, quasi-affine invariance and calibration

Coplanar circles, quasi-affine invariance and calibration Image and Vision Computing 24 (2006) 319 326 www.elsevier.com/locate/imavis Coplanar circles, quasi-affine invariance and calibration Yihong Wu *, Xinju Li, Fuchao Wu, Zhanyi Hu National Laboratory of

More information

A Factorization Method for Structure from Planar Motion

A Factorization Method for Structure from Planar Motion A Factorization Method for Structure from Planar Motion Jian Li and Rama Chellappa Center for Automation Research (CfAR) and Department of Electrical and Computer Engineering University of Maryland, College

More information

Linear Auto-Calibration for Ground Plane Motion

Linear Auto-Calibration for Ground Plane Motion Linear Auto-Calibration for Ground Plane Motion Joss Knight, Andrew Zisserman, and Ian Reid Department of Engineering Science, University of Oxford Parks Road, Oxford OX1 3PJ, UK [joss,az,ian]@robots.ox.ac.uk

More information

Mei Han Takeo Kanade. January Carnegie Mellon University. Pittsburgh, PA Abstract

Mei Han Takeo Kanade. January Carnegie Mellon University. Pittsburgh, PA Abstract Scene Reconstruction from Multiple Uncalibrated Views Mei Han Takeo Kanade January 000 CMU-RI-TR-00-09 The Robotics Institute Carnegie Mellon University Pittsburgh, PA 1513 Abstract We describe a factorization-based

More information

Feature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies

Feature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies Feature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies M. Lourakis, S. Tzurbakis, A. Argyros, S. Orphanoudakis Computer Vision and Robotics Lab (CVRL) Institute of

More information

Metric Rectification for Perspective Images of Planes

Metric Rectification for Perspective Images of Planes 789139-3 University of California Santa Barbara Department of Electrical and Computer Engineering CS290I Multiple View Geometry in Computer Vision and Computer Graphics Spring 2006 Metric Rectification

More information

Automatic Calibration of Stationary Surveillance Cameras in the Wild

Automatic Calibration of Stationary Surveillance Cameras in the Wild Automatic Calibration of Stationary Surveillance Cameras in the Wild Guido M.Y.E. Brouwers 1, Matthijs H. Zwemer 1,2, Rob G.J. Wijnhoven 1 and Peter H.N. de With 2 1 ViNotion B.V., The Netherlands, 2 Eindhoven

More information

COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION

COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION Mr.V.SRINIVASA RAO 1 Prof.A.SATYA KALYAN 2 DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING PRASAD V POTLURI SIDDHARTHA

More information

Multi-Camera Calibration with One-Dimensional Object under General Motions

Multi-Camera Calibration with One-Dimensional Object under General Motions Multi-Camera Calibration with One-Dimensional Obect under General Motions L. Wang, F. C. Wu and Z. Y. Hu National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences,

More information

Multiple View Geometry in Computer Vision Second Edition

Multiple View Geometry in Computer Vision Second Edition Multiple View Geometry in Computer Vision Second Edition Richard Hartley Australian National University, Canberra, Australia Andrew Zisserman University of Oxford, UK CAMBRIDGE UNIVERSITY PRESS Contents

More information

Structure from Motion. Prof. Marco Marcon

Structure from Motion. Prof. Marco Marcon Structure from Motion Prof. Marco Marcon Summing-up 2 Stereo is the most powerful clue for determining the structure of a scene Another important clue is the relative motion between the scene and (mono)

More information

Stereo Image Rectification for Simple Panoramic Image Generation

Stereo Image Rectification for Simple Panoramic Image Generation Stereo Image Rectification for Simple Panoramic Image Generation Yun-Suk Kang and Yo-Sung Ho Gwangju Institute of Science and Technology (GIST) 261 Cheomdan-gwagiro, Buk-gu, Gwangju 500-712 Korea Email:{yunsuk,

More information

Robust Camera Calibration from Images and Rotation Data

Robust Camera Calibration from Images and Rotation Data Robust Camera Calibration from Images and Rotation Data Jan-Michael Frahm and Reinhard Koch Institute of Computer Science and Applied Mathematics Christian Albrechts University Kiel Herman-Rodewald-Str.

More information

On Plane-Based Camera Calibration: A General Algorithm, Singularities, Applications

On Plane-Based Camera Calibration: A General Algorithm, Singularities, Applications ACCEPTED FOR CVPR 99. VERSION OF NOVEMBER 18, 2015. On Plane-Based Camera Calibration: A General Algorithm, Singularities, Applications Peter F. Sturm and Stephen J. Maybank Computational Vision Group,

More information

Camera Calibration and 3D Scene Reconstruction from image sequence and rotation sensor data

Camera Calibration and 3D Scene Reconstruction from image sequence and rotation sensor data Camera Calibration and 3D Scene Reconstruction from image sequence and rotation sensor data Jan-Michael Frahm and Reinhard Koch Christian Albrechts University Kiel Multimedia Information Processing Hermann-Rodewald-Str.

More information

Visualization 2D-to-3D Photo Rendering for 3D Displays

Visualization 2D-to-3D Photo Rendering for 3D Displays Visualization 2D-to-3D Photo Rendering for 3D Displays Sumit K Chauhan 1, Divyesh R Bajpai 2, Vatsal H Shah 3 1 Information Technology, Birla Vishvakarma mahavidhyalaya,sumitskc51@gmail.com 2 Information

More information

An Overview of Matchmoving using Structure from Motion Methods

An Overview of Matchmoving using Structure from Motion Methods An Overview of Matchmoving using Structure from Motion Methods Kamyar Haji Allahverdi Pour Department of Computer Engineering Sharif University of Technology Tehran, Iran Email: allahverdi@ce.sharif.edu

More information

A Case Against Kruppa s Equations for Camera Self-Calibration

A Case Against Kruppa s Equations for Camera Self-Calibration EXTENDED VERSION OF: ICIP - IEEE INTERNATIONAL CONFERENCE ON IMAGE PRO- CESSING, CHICAGO, ILLINOIS, PP. 172-175, OCTOBER 1998. A Case Against Kruppa s Equations for Camera Self-Calibration Peter Sturm

More information

A simple method for interactive 3D reconstruction and camera calibration from a single view

A simple method for interactive 3D reconstruction and camera calibration from a single view A simple method for interactive 3D reconstruction and camera calibration from a single view Akash M Kushal Vikas Bansal Subhashis Banerjee Department of Computer Science and Engineering Indian Institute

More information

Recovery of Intrinsic and Extrinsic Camera Parameters Using Perspective Views of Rectangles

Recovery of Intrinsic and Extrinsic Camera Parameters Using Perspective Views of Rectangles 177 Recovery of Intrinsic and Extrinsic Camera Parameters Using Perspective Views of Rectangles T. N. Tan, G. D. Sullivan and K. D. Baker Department of Computer Science The University of Reading, Berkshire

More information

Computer Vision I - Algorithms and Applications: Multi-View 3D reconstruction

Computer Vision I - Algorithms and Applications: Multi-View 3D reconstruction Computer Vision I - Algorithms and Applications: Multi-View 3D reconstruction Carsten Rother 09/12/2013 Computer Vision I: Multi-View 3D reconstruction Roadmap this lecture Computer Vision I: Multi-View

More information

Camera Calibration With One-Dimensional Objects

Camera Calibration With One-Dimensional Objects Camera Calibration With One-Dimensional Objects Zhengyou Zhang December 2001 Technical Report MSR-TR-2001-120 Camera calibration has been studied extensively in computer vision and photogrammetry, and

More information

Self-Calibration of a Rotating Camera with Varying Intrinsic Parameters

Self-Calibration of a Rotating Camera with Varying Intrinsic Parameters Self-Calibration of a Rotating Camera with Varying Intrinsic Parameters L. de Agapito, E. Hayman and I. Reid Department of Engineering Science, Oxford University Parks Road, Oxford, OX1 3PJ, UK [lourdes

More information

1D camera geometry and Its application to circular motion estimation. Creative Commons: Attribution 3.0 Hong Kong License

1D camera geometry and Its application to circular motion estimation. Creative Commons: Attribution 3.0 Hong Kong License Title D camera geometry and Its application to circular motion estimation Author(s Zhang, G; Zhang, H; Wong, KKY Citation The 7th British Machine Vision Conference (BMVC, Edinburgh, U.K., 4-7 September

More information

A linear algorithm for Camera Self-Calibration, Motion and Structure Recovery for Multi-Planar Scenes from Two Perspective Images

A linear algorithm for Camera Self-Calibration, Motion and Structure Recovery for Multi-Planar Scenes from Two Perspective Images A linear algorithm for Camera Self-Calibration, Motion and Structure Recovery for Multi-Planar Scenes from Two Perspective Images Gang Xu, Jun-ichi Terai and Heung-Yeung Shum Microsoft Research China 49

More information

Camera Calibration by a Single Image of Balls: From Conics to the Absolute Conic

Camera Calibration by a Single Image of Balls: From Conics to the Absolute Conic ACCV2002: The 5th Asian Conference on Computer Vision, 23 25 January 2002, Melbourne, Australia 1 Camera Calibration by a Single Image of Balls: From Conics to the Absolute Conic Hirohisa Teramoto and

More information

Camera Registration in a 3D City Model. Min Ding CS294-6 Final Presentation Dec 13, 2006

Camera Registration in a 3D City Model. Min Ding CS294-6 Final Presentation Dec 13, 2006 Camera Registration in a 3D City Model Min Ding CS294-6 Final Presentation Dec 13, 2006 Goal: Reconstruct 3D city model usable for virtual walk- and fly-throughs Virtual reality Urban planning Simulation

More information

Camera Calibration with a Simulated Three Dimensional Calibration Object

Camera Calibration with a Simulated Three Dimensional Calibration Object Czech Pattern Recognition Workshop, Tomáš Svoboda (Ed.) Peršlák, Czech Republic, February 4, Czech Pattern Recognition Society Camera Calibration with a Simulated Three Dimensional Calibration Object Hynek

More information

3D reconstruction class 11

3D reconstruction class 11 3D reconstruction class 11 Multiple View Geometry Comp 290-089 Marc Pollefeys Multiple View Geometry course schedule (subject to change) Jan. 7, 9 Intro & motivation Projective 2D Geometry Jan. 14, 16

More information

Creating 3D Models with Uncalibrated Cameras

Creating 3D Models with Uncalibrated Cameras Creating D Models with Uncalibrated Cameras Mei Han Takeo Kanade Robotics Institute, Carnegie Mellon University meihan,tk@cs.cmu.edu Abstract We describe a factorization-based method to recover D models

More information

Unit 3 Multiple View Geometry

Unit 3 Multiple View Geometry Unit 3 Multiple View Geometry Relations between images of a scene Recovering the cameras Recovering the scene structure http://www.robots.ox.ac.uk/~vgg/hzbook/hzbook1.html 3D structure from images Recover

More information

Camera Calibration Using Symmetric Objects

Camera Calibration Using Symmetric Objects 3614 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 15, NO. 11, NOVEMBER 2006 [3] M. Belkin and P. Niyogi, Laplacian eigenmaps and spectral techniques for embedding and clustering, in Advances in Neural Information

More information

Simultaneous Vanishing Point Detection and Camera Calibration from Single Images

Simultaneous Vanishing Point Detection and Camera Calibration from Single Images Simultaneous Vanishing Point Detection and Camera Calibration from Single Images Bo Li, Kun Peng, Xianghua Ying, and Hongbin Zha The Key Lab of Machine Perception (Ministry of Education), Peking University,

More information

Structure from motion

Structure from motion Structure from motion Structure from motion Given a set of corresponding points in two or more images, compute the camera parameters and the 3D point coordinates?? R 1,t 1 R 2,t R 2 3,t 3 Camera 1 Camera

More information

Camera calibration with spheres: Linear approaches

Camera calibration with spheres: Linear approaches Title Camera calibration with spheres: Linear approaches Author(s) Zhang, H; Zhang, G; Wong, KYK Citation The IEEE International Conference on Image Processing (ICIP) 2005, Genoa, Italy, 11-14 September

More information

arxiv: v1 [cs.cv] 28 Sep 2018

arxiv: v1 [cs.cv] 28 Sep 2018 Camera Pose Estimation from Sequence of Calibrated Images arxiv:1809.11066v1 [cs.cv] 28 Sep 2018 Jacek Komorowski 1 and Przemyslaw Rokita 2 1 Maria Curie-Sklodowska University, Institute of Computer Science,

More information

Plane-Based Calibration for Linear Cameras

Plane-Based Calibration for Linear Cameras Plane-Based Calibration for Linear Cameras Jamil Drareni, Peter Sturm, Sébastien Roy To cite this version: Jamil Drareni, Peter Sturm, Sébastien Roy Plane-Based Calibration for Linear Cameras OMNIVIS 28-8th

More information

Critical Motion Sequences for the Self-Calibration of Cameras and Stereo Systems with Variable Focal Length

Critical Motion Sequences for the Self-Calibration of Cameras and Stereo Systems with Variable Focal Length Critical Motion Sequences for the Self-Calibration of Cameras and Stereo Systems with Variable Focal Length Peter F Sturm Computational Vision Group, Department of Computer Science The University of Reading,

More information

Camera Geometry II. COS 429 Princeton University

Camera Geometry II. COS 429 Princeton University Camera Geometry II COS 429 Princeton University Outline Projective geometry Vanishing points Application: camera calibration Application: single-view metrology Epipolar geometry Application: stereo correspondence

More information

Camera Calibration Using Line Correspondences

Camera Calibration Using Line Correspondences Camera Calibration Using Line Correspondences Richard I. Hartley G.E. CRD, Schenectady, NY, 12301. Ph: (518)-387-7333 Fax: (518)-387-6845 Email : hartley@crd.ge.com Abstract In this paper, a method of

More information

A Desktop 3D Scanner Exploiting Rotation and Visual Rectification of Laser Profiles

A Desktop 3D Scanner Exploiting Rotation and Visual Rectification of Laser Profiles A Desktop 3D Scanner Exploiting Rotation and Visual Rectification of Laser Profiles Carlo Colombo, Dario Comanducci, and Alberto Del Bimbo Dipartimento di Sistemi ed Informatica Via S. Marta 3, I-5139

More information

Chapter 12 3D Localisation and High-Level Processing

Chapter 12 3D Localisation and High-Level Processing Chapter 12 3D Localisation and High-Level Processing This chapter describes how the results obtained from the moving object tracking phase are used for estimating the 3D location of objects, based on the

More information

Stereo and Epipolar geometry

Stereo and Epipolar geometry Previously Image Primitives (feature points, lines, contours) Today: Stereo and Epipolar geometry How to match primitives between two (multiple) views) Goals: 3D reconstruction, recognition Jana Kosecka

More information

Euclidean Reconstruction and Auto-Calibration from Continuous Motion

Euclidean Reconstruction and Auto-Calibration from Continuous Motion Euclidean Reconstruction and Auto-Calibration from Continuous Motion Fredrik Kahl and Anders Heyden Λ Centre for Mathematical Sciences Lund University Box 8, SE- Lund, Sweden {fredrik, andersp}@maths.lth.se

More information

View-Invariant Action Recognition Using Fundamental Ratios

View-Invariant Action Recognition Using Fundamental Ratios View-Invariant Action Recognition Using Fundamental Ratios Yuping Shen and Hassan Foroosh Computational Imaging Lab., University of Central Florida, Orlando, FL 3286 http://cil.cs.ucf.edu/ Abstract A moving

More information

Today s lecture. Structure from Motion. Today s lecture. Parameterizing rotations

Today s lecture. Structure from Motion. Today s lecture. Parameterizing rotations Today s lecture Structure from Motion Computer Vision CSE576, Spring 2008 Richard Szeliski Geometric camera calibration camera matrix (Direct Linear Transform) non-linear least squares separating intrinsics

More information

Two-view geometry Computer Vision Spring 2018, Lecture 10

Two-view geometry Computer Vision Spring 2018, Lecture 10 Two-view geometry http://www.cs.cmu.edu/~16385/ 16-385 Computer Vision Spring 2018, Lecture 10 Course announcements Homework 2 is due on February 23 rd. - Any questions about the homework? - How many of

More information

Pin-hole Modelled Camera Calibration from a Single Image

Pin-hole Modelled Camera Calibration from a Single Image Pin-hole Modelled Camera Calibration from a Single Image Zhuo Wang University of Windsor wang112k@uwindsor.ca August 10, 2009 Camera calibration from a single image is of importance in computer vision.

More information

Visual Recognition: Image Formation

Visual Recognition: Image Formation Visual Recognition: Image Formation Raquel Urtasun TTI Chicago Jan 5, 2012 Raquel Urtasun (TTI-C) Visual Recognition Jan 5, 2012 1 / 61 Today s lecture... Fundamentals of image formation You should know

More information

BIL Computer Vision Apr 16, 2014

BIL Computer Vision Apr 16, 2014 BIL 719 - Computer Vision Apr 16, 2014 Binocular Stereo (cont d.), Structure from Motion Aykut Erdem Dept. of Computer Engineering Hacettepe University Slide credit: S. Lazebnik Basic stereo matching algorithm

More information

Computer Vision Projective Geometry and Calibration. Pinhole cameras

Computer Vision Projective Geometry and Calibration. Pinhole cameras Computer Vision Projective Geometry and Calibration Professor Hager http://www.cs.jhu.edu/~hager Jason Corso http://www.cs.jhu.edu/~jcorso. Pinhole cameras Abstract camera model - box with a small hole

More information

Quasi-Euclidean Uncalibrated Epipolar Rectification

Quasi-Euclidean Uncalibrated Epipolar Rectification Dipartimento di Informatica Università degli Studi di Verona Rapporto di ricerca Research report September 2006 RR 43/2006 Quasi-Euclidean Uncalibrated Epipolar Rectification L. Irsara A. Fusiello Questo

More information

Calibration of a Multi-Camera Rig From Non-Overlapping Views

Calibration of a Multi-Camera Rig From Non-Overlapping Views Calibration of a Multi-Camera Rig From Non-Overlapping Views Sandro Esquivel, Felix Woelk, and Reinhard Koch Christian-Albrechts-University, 48 Kiel, Germany Abstract. A simple, stable and generic approach

More information

Measurement of Pedestrian Groups Using Subtraction Stereo

Measurement of Pedestrian Groups Using Subtraction Stereo Measurement of Pedestrian Groups Using Subtraction Stereo Kenji Terabayashi, Yuki Hashimoto, and Kazunori Umeda Chuo University / CREST, JST, 1-13-27 Kasuga, Bunkyo-ku, Tokyo 112-8551, Japan terabayashi@mech.chuo-u.ac.jp

More information

Motion Tracking and Event Understanding in Video Sequences

Motion Tracking and Event Understanding in Video Sequences Motion Tracking and Event Understanding in Video Sequences Isaac Cohen Elaine Kang, Jinman Kang Institute for Robotics and Intelligent Systems University of Southern California Los Angeles, CA Objectives!

More information

Multiple View Geometry in computer vision

Multiple View Geometry in computer vision Multiple View Geometry in computer vision Chapter 8: More Single View Geometry Olaf Booij Intelligent Systems Lab Amsterdam University of Amsterdam, The Netherlands HZClub 29-02-2008 Overview clubje Part

More information

Partial Calibration and Mirror Shape Recovery for Non-Central Catadioptric Systems

Partial Calibration and Mirror Shape Recovery for Non-Central Catadioptric Systems Partial Calibration and Mirror Shape Recovery for Non-Central Catadioptric Systems Abstract In this paper we present a method for mirror shape recovery and partial calibration for non-central catadioptric

More information

More on single-view geometry class 10

More on single-view geometry class 10 More on single-view geometry class 10 Multiple View Geometry Comp 290-089 Marc Pollefeys Multiple View Geometry course schedule (subject to change) Jan. 7, 9 Intro & motivation Projective 2D Geometry Jan.

More information

Lecture 9: Epipolar Geometry

Lecture 9: Epipolar Geometry Lecture 9: Epipolar Geometry Professor Fei Fei Li Stanford Vision Lab 1 What we will learn today? Why is stereo useful? Epipolar constraints Essential and fundamental matrix Estimating F (Problem Set 2

More information

CIS 580, Machine Perception, Spring 2016 Homework 2 Due: :59AM

CIS 580, Machine Perception, Spring 2016 Homework 2 Due: :59AM CIS 580, Machine Perception, Spring 2016 Homework 2 Due: 2015.02.24. 11:59AM Instructions. Submit your answers in PDF form to Canvas. This is an individual assignment. 1 Recover camera orientation By observing

More information

Camera Calibration from Surfaces of Revolution

Camera Calibration from Surfaces of Revolution IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. XX, NO. Y, MONTH YEAR 1 Camera Calibration from Surfaces of Revolution Kwan-Yee K. Wong, Paulo R. S. Mendonça and Roberto Cipolla Kwan-Yee

More information

Flexible Calibration of a Portable Structured Light System through Surface Plane

Flexible Calibration of a Portable Structured Light System through Surface Plane Vol. 34, No. 11 ACTA AUTOMATICA SINICA November, 2008 Flexible Calibration of a Portable Structured Light System through Surface Plane GAO Wei 1 WANG Liang 1 HU Zhan-Yi 1 Abstract For a portable structured

More information

Camera Calibration and Shape Recovery from videos of Two Mirrors

Camera Calibration and Shape Recovery from videos of Two Mirrors Camera Calibration and Shape Recovery from videos of Two Mirrors Quanxin Chen and Hui Zhang Dept. of Computer Science, United International College, 28, Jinfeng Road, Tangjiawan, Zhuhai, Guangdong, China.

More information

Detecting vanishing points by segment clustering on the projective plane for single-view photogrammetry

Detecting vanishing points by segment clustering on the projective plane for single-view photogrammetry Detecting vanishing points by segment clustering on the projective plane for single-view photogrammetry Fernanda A. Andaló 1, Gabriel Taubin 2, Siome Goldenstein 1 1 Institute of Computing, University

More information

Camera Self-calibration with Parallel Screw Axis Motion by Intersecting Imaged Horopters

Camera Self-calibration with Parallel Screw Axis Motion by Intersecting Imaged Horopters Camera Self-calibration with Parallel Screw Axis Motion by Intersecting Imaged Horopters Ferran Espuny 1, Joan Aranda 2,andJosé I. Burgos Gil 3 1 Dépt. Images et Signal, GIPSA-Lab, Grenoble-INP Ferran.Espuny@gipsa-lab.grenoble-inp.fr

More information

Midterm Exam Solutions

Midterm Exam Solutions Midterm Exam Solutions Computer Vision (J. Košecká) October 27, 2009 HONOR SYSTEM: This examination is strictly individual. You are not allowed to talk, discuss, exchange solutions, etc., with other fellow

More information

A Stratified Approach for Camera Calibration Using Spheres

A Stratified Approach for Camera Calibration Using Spheres IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. XX, NO. Y, MONTH YEAR 1 A Stratified Approach for Camera Calibration Using Spheres Kwan-Yee K. Wong, Member, IEEE, Guoqiang Zhang, Student-Member, IEEE and Zhihu

More information

A General Expression of the Fundamental Matrix for Both Perspective and Affine Cameras

A General Expression of the Fundamental Matrix for Both Perspective and Affine Cameras A General Expression of the Fundamental Matrix for Both Perspective and Affine Cameras Zhengyou Zhang* ATR Human Information Processing Res. Lab. 2-2 Hikari-dai, Seika-cho, Soraku-gun Kyoto 619-02 Japan

More information

An LMI Approach for Reliable PTZ Camera Self-Calibration

An LMI Approach for Reliable PTZ Camera Self-Calibration An LMI Approach for Reliable PTZ Camera Self-Calibration Hongdong Li 1, Chunhua Shen 2 RSISE, The Australian National University 1 ViSTA, National ICT Australia, Canberra Labs 2. Abstract PTZ (Pan-Tilt-Zoom)

More information

On Plane-Based Camera Calibration: A General Algorithm, Singularities, Applications

On Plane-Based Camera Calibration: A General Algorithm, Singularities, Applications On Plane-Based Camera Calibration: A General Algorithm, Singularities, Applications Peter Sturm, Steve Maybank To cite this version: Peter Sturm, Steve Maybank. On Plane-Based Camera Calibration: A General

More information

Refining Single View Calibration With the Aid of Metric Scene Properties

Refining Single View Calibration With the Aid of Metric Scene Properties Goack Refining Single View Calibration With the Aid of Metric Scene roperties Manolis I.A. Lourakis lourakis@ics.forth.gr Antonis A. Argyros argyros@ics.forth.gr Institute of Computer Science Foundation

More information

CS231M Mobile Computer Vision Structure from motion

CS231M Mobile Computer Vision Structure from motion CS231M Mobile Computer Vision Structure from motion - Cameras - Epipolar geometry - Structure from motion Pinhole camera Pinhole perspective projection f o f = focal length o = center of the camera z y

More information

Camera calibration with two arbitrary coaxial circles

Camera calibration with two arbitrary coaxial circles Camera calibration with two arbitrary coaxial circles Carlo Colombo, Dario Comanducci, and Alberto Del Bimbo Dipartimento di Sistemi e Informatica Via S. Marta 3, 50139 Firenze, Italy {colombo,comandu,delbimbo}@dsi.unifi.it

More information

Structure and motion in 3D and 2D from hybrid matching constraints

Structure and motion in 3D and 2D from hybrid matching constraints Structure and motion in 3D and 2D from hybrid matching constraints Anders Heyden, Fredrik Nyberg and Ola Dahl Applied Mathematics Group Malmo University, Sweden {heyden,fredrik.nyberg,ola.dahl}@ts.mah.se

More information

CS231A Course Notes 4: Stereo Systems and Structure from Motion

CS231A Course Notes 4: Stereo Systems and Structure from Motion CS231A Course Notes 4: Stereo Systems and Structure from Motion Kenji Hata and Silvio Savarese 1 Introduction In the previous notes, we covered how adding additional viewpoints of a scene can greatly enhance

More information

Projective Rectification from the Fundamental Matrix

Projective Rectification from the Fundamental Matrix Projective Rectification from the Fundamental Matrix John Mallon Paul F. Whelan Vision Systems Group, Dublin City University, Dublin 9, Ireland Abstract This paper describes a direct, self-contained method

More information

Stereo Vision. MAN-522 Computer Vision

Stereo Vision. MAN-522 Computer Vision Stereo Vision MAN-522 Computer Vision What is the goal of stereo vision? The recovery of the 3D structure of a scene using two or more images of the 3D scene, each acquired from a different viewpoint in

More information

Perception and Action using Multilinear Forms

Perception and Action using Multilinear Forms Perception and Action using Multilinear Forms Anders Heyden, Gunnar Sparr, Kalle Åström Dept of Mathematics, Lund University Box 118, S-221 00 Lund, Sweden email: {heyden,gunnar,kalle}@maths.lth.se Abstract

More information

A Method for Interactive 3D Reconstruction of Piecewise Planar Objects from Single Images

A Method for Interactive 3D Reconstruction of Piecewise Planar Objects from Single Images A Method for Interactive 3D Reconstruction of Piecewise Planar Objects from Single Images Peter Sturm Steve Maybank To cite this version: Peter Sturm Steve Maybank A Method for Interactive 3D Reconstruction

More information

Hand-Eye Calibration from Image Derivatives

Hand-Eye Calibration from Image Derivatives Hand-Eye Calibration from Image Derivatives Abstract In this paper it is shown how to perform hand-eye calibration using only the normal flow field and knowledge about the motion of the hand. The proposed

More information

Projective geometry for Computer Vision

Projective geometry for Computer Vision Department of Computer Science and Engineering IIT Delhi NIT, Rourkela March 27, 2010 Overview Pin-hole camera Why projective geometry? Reconstruction Computer vision geometry: main problems Correspondence

More information

ECE 470: Homework 5. Due Tuesday, October 27 in Seth Hutchinson. Luke A. Wendt

ECE 470: Homework 5. Due Tuesday, October 27 in Seth Hutchinson. Luke A. Wendt ECE 47: Homework 5 Due Tuesday, October 7 in class @:3pm Seth Hutchinson Luke A Wendt ECE 47 : Homework 5 Consider a camera with focal length λ = Suppose the optical axis of the camera is aligned with

More information

Globally Optimal Algorithms for Stratified Autocalibration

Globally Optimal Algorithms for Stratified Autocalibration Globally Optimal Algorithms for Stratified Autocalibration By Manmohan Chandraker, Sameer Agarwal, David Kriegman, Serge Belongie Presented by Andrew Dunford and Adithya Seshasayee What is Camera Calibration?

More information

Single View Metrology

Single View Metrology International Journal of Computer Vision 40(2), 123 148, 2000 c 2000 Kluwer Academic Publishers. Manufactured in The Netherlands. Single View Metrology A. CRIMINISI, I. REID AND A. ZISSERMAN Department

More information