Robust Image Mosaicing of Soccer Videos using Self-Calibration and Line Tracking

Pattern Analysis & Applications (2001) 4. Springer-Verlag London Limited

Robust Image Mosaicing of Soccer Videos using Self-Calibration and Line Tracking

Hyunwoo Kim and Ki Sang Hong
Department of Electronic and Electrical Engineering, Pohang University of Science and Technology, Pohang, Republic of Korea

Abstract: In this paper we propose an accurate and robust image mosaicing method for soccer video taken from a rotating and zooming camera, using line tracking and self-calibration. The mosaicing of soccer videos is not easy, because the playing fields are low textured and moving players are included in the fields. Our approach is to track line features on the playing field. The line features are detected and tracked using a self-calibration technique for a rotating and zooming camera. To track line features efficiently, we propose a new line tracking algorithm, called camera parameter guided line tracking, which works even when the camera motion undergoes sudden changes. Since we do not need to know any scene model beforehand, the proposed algorithm can be easily extended to other video sources, as well as other sports videos. Experimental results show the accuracy and robustness of the algorithm. An application of mosaicing is also presented.

Keywords: Inter-image homography; Line tracking; Rotating and zooming camera; Self-calibration; Soccer videos; Video mosaicing

1. INTRODUCTION

The image mosaicing technique is one of the most important elements of video analysis. In earlier work, we analysed soccer video using image mosaicing results [1,2]. A known model of the playing field was used, and the mosaicing results were not accurate. However, our work showed a good application of image mosaics to soccer video analysis. The applications include determining the trajectories of players on the field models, as well as the 3D location of a soccer ball.
In Reid and Zisserman [3], an accurate measurement for a soccer ball was introduced based on accurate image mosaicing results, but lines were matched manually. Irani et al [4,5] also introduced new applications, like video compression and indexing. The mosaicing of soccer videos is not easy, because the playing fields are low textured and moving players are included in the fields. Traditional mosaicing techniques [6-8] are not appropriate for these cases, because they work on the moving players, not on the playing field. A reasonable approach is to track the line features of the playing field, and construct mosaics using the tracked lines. Harris [9] utilised information on known 3D objects for tracking. He tracked the 3D position of objects and cameras using a Kalman filter, but he calibrated the cameras using a specific pattern, and assumed that the focal length of the cameras was fixed. The work of Clarke et al [10] extended Harris' work to the estimation of unknown 3D objects and uncalibrated cameras. They used only the structure of line objects; they did not use camera information such as focal length or relative rotation angles. In this paper, we extend their work to deal with more complex camera motion. Other related work is as follows. Szeliski and Shum [8] introduced an image mosaicing method for image-based modelling by recovering 3D camera rotations. They assumed rotating cameras with a fixed focal length, which is approximately known. Morimoto and Chellappa [11] developed a fast electronic image stabilisation system that compensates for 3D rotation, but this method did not handle camera zooming either. In contrast with those methods, our tracking method can handle camera zooming and focusing, and provides an efficient tracking algorithm, called Camera Parameter guided (CP-guided) tracking, owing to the self-calibration technique.

Received: 5 January 2000. Received in revised form: 2 May 2000. Accepted: 26 June 2000.
Since we do not need to know any model for the scenes beforehand, the proposed algorithm can be easily extended to other video sources, as well as other sports videos. Experimental results show the accuracy and robustness of the algorithm, and we present an application of mosaics to calculate 3D ball trajectories from an unsynchronised stereo camera. This paper is organised as follows. The algorithm is outlined in Section 2, and the self-calibration method is explained as a preliminary in Section 3. In Section 4, the CP-guided tracking algorithm is proposed. The initialisation and tracking stages are described in Sections 5 and 6, respectively. Experimental results and more applications are given in Sections 7 and 8, respectively. Finally, concluding remarks are given in Section 9.

2. OUTLINE

This section gives a brief outline of our algorithm. We suppose that an image sequence S = {I_0,...,I_N} is captured by a pan-tilt camera (without z-axis rotation) with varying internal parameters. A flowchart of our algorithm is shown in Fig. 1. The algorithm consists of two stages: the initialisation stage and the tracking stage.

The first stage of the algorithm is the initialisation stage. Initialisation consists of initial line matching and estimation of the camera parameters. First, line features l_0 and l_1 are extracted and matched between the 0th frame I_0 and the first frame I_1 using an initial line matching method, which will be explained in Section 4. Camera parameters and the inter-image homography are estimated from the matching lines using the nonlinear self-calibration method, which will be described in Section 3. In this step, false matches are rejected, and then the initial mosaic is constructed by the homography. Finally, lines l_1 are transferred to the reference frame I_0, and the reference lines l_0 are updated by registering them. The set of reference lines is called the line model, and it is updated during tracking.
The line model is the set of lines registered to the reference frame, and it helps us perform image mosaicing even when there is no overlapping region between the reference image and any other frame.

Fig. 1. Flowchart of our algorithm.

Next, the tracking stage follows the initialisation stage. Image mosaicing is sequentially performed in this stage. To carry out sequential image mosaicing at frame I_k, the line features l_k are extracted, and l_0 and l_k are matched using our CP-guided tracking method, which will be introduced in Section 4. The positions of the line model l_0 in image I_k are predicted using the CP-guided prediction, and the line features are matched using a proximity rule; then, the inter-image homography between I_0 and I_k is computed using the least median of squares (LMedS) method. To estimate the current camera parameters and refine the homography, we use the nonlinear self-calibration algorithm. An image mosaic is constructed from the result, and the line model is updated. These steps are repeated for the subsequent (k+1)th frames until we reach the last frame.

3. SELF-CALIBRATION

In this paper, we construct mosaics of images captured by a pan-tilt camera with varying focal lengths, and this section explains the self-calibration method. Details can be found in Kim and Hong [12]. In contrast to other algorithms [13-16], the algorithm works well even when the camera motion is almost all zooming with very little rotation.

3.1. Camera Modelling

We consider a pan-tilt camera with projection matrices P_k = K_k [R_k | 0], where R_k denotes the rotation of the kth camera with respect to the reference (0th) camera, and K_k is the camera matrix defined by

    K_k = diag(f_k, f_k, 1)    (1)

where f_k is the focal length of the kth camera. Note that we assume the principal point of the camera is at the image centre, the skew is zero and the aspect ratio can be approximated by 1.
These assumptions are reasonable, because mislocating the principal points and zero-skew modelling do not seem to affect the self-calibration at the practical level. The effects of the assumptions are analysed in Seo and Hong [16]. For this camera model, there is a 2D projective transformation H_k, which transfers image points u_0 on the reference frame to their matching points u_k on the kth frame, whose matrix is of the form

    H_k = K_k R_k K_0^{-1}    (2)

The matrix is called an inter-image homography, and it satisfies the relationship u_k = H_k u_0, where u_k and u_0 are matching points. For matching lines l_k and l_0, the relationship l_0 = H_k^T l_k is satisfied. For pan-tilt cameras, with R_k = R_y(β_k) R_x(α_k), Eq. (2) can be written as

    H_k(f_0, f_k, α_k, β_k) =
    [ (f_k/f_0) cos β_k      (f_k/f_0) sin α_k sin β_k    f_k cos α_k sin β_k ]
    [ 0                      (f_k/f_0) cos α_k            -f_k sin α_k        ]
    [ -(1/f_0) sin β_k       (1/f_0) sin α_k cos β_k      cos α_k cos β_k     ]    (3)
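As a concrete sketch, the homography of Eq. (2) can be assembled numerically from the four parameters. The composition order R_k = R_y(β_k) R_x(α_k) and the rotation sign conventions below are our assumptions, not fixed by the original text:

```python
import numpy as np

def rot_x(a):
    """Rotation by angle a (radians) about the x-axis (tilt)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(b):
    """Rotation by angle b (radians) about the y-axis (pan)."""
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def homography(f0, fk, alpha, beta):
    """Inter-image homography H_k = K_k R_k K_0^{-1} of Eq. (2)."""
    K0_inv = np.diag([1.0 / f0, 1.0 / f0, 1.0])
    Kk = np.diag([fk, fk, 1.0])
    return Kk @ rot_y(beta) @ rot_x(alpha) @ K0_inv

# Zoom-only motion reduces to diag(f_k/f_0, f_k/f_0, 1), as noted in Section 7
H = homography(1000.0, 1500.0, 0.0, 0.0)
print(np.allclose(H, np.diag([1.5, 1.5, 1.0])))  # True
```

A point u_0 on the reference frame is then transferred as u_k = H u_0 in homogeneous coordinates.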

where α_k and β_k are the rotation angles around the x-axis and the y-axis of the reference camera coordinate system, respectively. We assume that the rotation angle around the z-axis is negligibly small. In this case, the number of unknowns is four: two for rotation and two for focal length. One inter-image homography H_k, from which we have eight equations, is sufficient to compute the unknown parameters.

3.2. Linear Algorithm

From Eq. (3), we get the following equations:

    λ h_11 = cos β_k                        (4)
    λ h_12 = sin α_k sin β_k                (5)
    λ h_13 = f_0 cos α_k sin β_k            (6)
    λ h_22 = cos α_k                        (7)
    λ h_23 = -f_0 sin α_k                   (8)
    λ h_31 = -sin β_k / f_k                 (9)
    λ h_32 = sin α_k cos β_k / f_k          (10)
    λ h_33 = f_0 cos α_k cos β_k / f_k      (11)

where λ is an arbitrary nonzero scale. To eliminate f_0, we use Eqs (7), (8), (10) and (11), and get the relation tan² α_k = -h_23 h_32 / (h_22 h_33). After considering the sign of the tangent function, we have Eq. (12) for calculating α_k. In a similar way, we can get β_k:

    α_k = -(h_23 h_33 / |h_23 h_33|) tan⁻¹ √(-h_23 h_32 / (h_22 h_33))
    β_k =  (h_13 h_33 / |h_13 h_33|) tan⁻¹ √(-h_13 h_31 / (h_11 h_33))    (12)

The focal lengths f_0 and f_k can also be computed directly from the inter-image homography:

    f_0² = -h_13 h_33 / (h_11 h_31 + h_12 h_32),   if |β_k| ≥ |α_k|
    f_0² = -h_23 h_33 / (h_21 h_31 + h_22 h_32),   otherwise    (13)

    f_k² = (1/2) [ f_0²(h_11² + h_12²) + h_13² + f_0²(h_21² + h_22²) + h_23² ] / [ f_0²(h_31² + h_32²) + h_33² ]    (14)

In contrast to other self-calibration methods, which do not work when the camera motion is almost all zooming with very little rotation [13-16], this linear algorithm works for any camera motion. In addition, there is a nonlinear algorithm that adjusts not only the camera parameters, but also the inter-image homography, so that more accurate image registration is made possible.

3.3. Nonlinear Algorithm and Improvement of Inter-Image Homography

All previous algorithms use the following steps for self-calibration. First, the inter-image homography is computed from matching points (or matching lines), and then the camera parameters are calculated from the estimated homography.
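A numerical sketch of Eqs (12)-(14), under the same R_y R_x sign convention as above (indices here are zero-based, so h[0,2] is h_13); `build_h` only generates test data:

```python
import numpy as np

def build_h(f0, fk, a, b):
    """H = K_k R_y(b) R_x(a) K_0^{-1}, used to generate a test homography."""
    Rx = np.array([[1, 0, 0], [0, np.cos(a), -np.sin(a)], [0, np.sin(a), np.cos(a)]])
    Ry = np.array([[np.cos(b), 0, np.sin(b)], [0, 1, 0], [-np.sin(b), 0, np.cos(b)]])
    return np.diag([fk, fk, 1.0]) @ Ry @ Rx @ np.diag([1 / f0, 1 / f0, 1.0])

def linear_calibration(h):
    """Eqs (12)-(14): recover (f0, fk, alpha, beta) from H, given up to scale."""
    alpha = -np.sign(h[1, 2] * h[2, 2]) * np.arctan(
        np.sqrt(-h[1, 2] * h[2, 1] / (h[1, 1] * h[2, 2])))       # Eq. (12)
    beta = np.sign(h[0, 2] * h[2, 2]) * np.arctan(
        np.sqrt(-h[0, 2] * h[2, 0] / (h[0, 0] * h[2, 2])))
    if abs(beta) >= abs(alpha):                                   # Eq. (13)
        f0 = np.sqrt(-h[0, 2] * h[2, 2] / (h[0, 0] * h[2, 0] + h[0, 1] * h[2, 1]))
    else:
        f0 = np.sqrt(-h[1, 2] * h[2, 2] / (h[1, 0] * h[2, 0] + h[1, 1] * h[2, 1]))
    num = (f0**2 * (h[0, 0]**2 + h[0, 1]**2) + h[0, 2]**2
           + f0**2 * (h[1, 0]**2 + h[1, 1]**2) + h[1, 2]**2)
    den = f0**2 * (h[2, 0]**2 + h[2, 1]**2) + h[2, 2]**2
    fk = np.sqrt(0.5 * num / den)                                 # Eq. (14)
    return f0, fk, alpha, beta

# The arbitrary scale 3.7 cancels, since every formula is a ratio of quadratics
f0, fk, alpha, beta = linear_calibration(3.7 * build_h(1000.0, 1400.0, 0.02, 0.05))
```

On this noise-free homography the routine recovers the generating parameters; with a homography estimated from real matches the result is only an initial value for the nonlinear refinement described next in the text.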
Therefore, the accuracy of self-calibration methods depends upon the inter-image homography estimation. That is, previous algorithms, including our proposed linear self-calibration algorithm, are sensitive to the homography estimation. To improve the performance on real images, we merge the two steps into one using a nonlinear optimisation. The nonlinear algorithm can improve the inter-image homography due to the parameterisation by camera parameters. Our approach is to estimate the camera parameters directly from matching points, not from the result of an inter-image homography estimation. The relationship between M matching points, u_k = {u_k^1,...,u_k^M} and u_0 = {u_0^1,...,u_0^M}, is u_k^i = H_k u_0^i. Remember that the inter-image homography is parameterised by the camera parameters f_0, f_k, α_k and β_k (Eq. (3)). To solve for the camera parameters and the inter-image homography simultaneously, we minimise the following error function with respect to the camera parameters:

    E(f_0, f_k, α_k, β_k) = (1/M) Σ_{i=1}^{M} || u_k^i - H_k(f_0, f_k, α_k, β_k) u_0^i ||²    (15)

That is, we minimise the Euclidean distance between corresponding points. For matching lines l_0 = {l_0^1,...,l_0^M} and l_k = {l_k^1,...,l_k^M}, the error function has the form

    E = (1/M) Σ_{i=1}^{M} [ d(l_k^i, H_k^{-T} l_0^i)² + d(l_0^i, H_k^T l_k^i)² ] / 2    (16)

where d(l_a, l_b) denotes the distance between two lines l_a and l_b. We call it the matching error. When the two endpoints of l_a are e_a^1 and e_a^2, and when l_b has the form (l_{b,1}, l_{b,2}, l_{b,3})^T in homogeneous coordinates, the distance is defined by

    d(l_a, l_b) = √( [ (l_b^T e_a^1)² + (l_b^T e_a^2)² ] / (l_{b,1}² + l_{b,2}²) )    (17)

The above distance measure between matching lines is only an example, and other distance measures can be used [17]. The linear solution (Section 3.2) is used for the initial values, and then Eq. (15) or (16) is optimised using the Levenberg-Marquardt method [18].
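Eq. (17) simply combines the perpendicular distances of the two endpoints of l_a from the infinite line l_b; a direct transcription (endpoints in homogeneous coordinates):

```python
import numpy as np

def line_distance(e1, e2, lb):
    """Eq. (17): distance of segment a (endpoints e1, e2, homogeneous)
    from the line lb = (lb1, lb2, lb3)."""
    num = (lb @ e1) ** 2 + (lb @ e2) ** 2
    return np.sqrt(num / (lb[0] ** 2 + lb[1] ** 2))

# A horizontal segment at y = 3 against the x-axis (0, 1, 0): both endpoints
# are 3 pixels away, so d = sqrt(3^2 + 3^2)
d = line_distance(np.array([0.0, 3.0, 1.0]), np.array([4.0, 3.0, 1.0]),
                  np.array([0.0, 1.0, 0.0]))
print(d)  # 4.242640687119285
```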
This nonlinear method gives stable self-calibration results for real images, and refines the inter-image homography and the linear solution simultaneously.
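A minimal sketch of the one-step estimation of Eq. (15), using SciPy's Levenberg-Marquardt solver. The matches are synthetic and every numeric value below is an invented test quantity; the fixed starting point stands in for the linear solution:

```python
import numpy as np
from scipy.optimize import least_squares

def H_of(p):
    """Homography of Eq. (3), parameterised by (f0, fk, alpha, beta)."""
    f0, fk, a, b = p
    Rx = np.array([[1, 0, 0], [0, np.cos(a), -np.sin(a)], [0, np.sin(a), np.cos(a)]])
    Ry = np.array([[np.cos(b), 0, np.sin(b)], [0, 1, 0], [-np.sin(b), 0, np.cos(b)]])
    return np.diag([fk, fk, 1.0]) @ Ry @ Rx @ np.diag([1 / f0, 1 / f0, 1.0])

def residuals(p, u0, uk):
    """Per-point Euclidean transfer error, as in Eq. (15)."""
    q = (H_of(p) @ u0.T).T
    return (q[:, :2] / q[:, 2:] - uk).ravel()

rng = np.random.default_rng(0)
true = np.array([1000.0, 1300.0, 0.01, 0.04])        # synthetic ground truth
u0 = np.column_stack([rng.uniform(-200, 200, 30),
                      rng.uniform(-150, 150, 30), np.ones(30)])
q = (H_of(true) @ u0.T).T
uk = q[:, :2] / q[:, 2:] + rng.normal(0, 0.5, (30, 2))  # 0.5-pixel noise

init = np.array([900.0, 1200.0, 0.0, 0.0])           # stands in for the linear solution
sol = least_squares(residuals, init, args=(u0, uk), method='lm')
```

With half-pixel noise the refined ratio f_k/f_0 stays close to the true 1.3, even though f_0 and f_k are individually less well constrained when the rotation is small, consistent with the behaviour reported in Section 7.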

Fig. 2. Line matching.

4. CP-GUIDED LINE TRACKING

CP-guided tracking consists of two steps: CP-guided prediction and robust line matching. The locations of the lines are predicted in the current frame using the CP-guided prediction algorithm; then the line model and the lines extracted in the current frame are matched through robust line matching. Normally, to predict the locations of line features, a Kalman filter is used for each line feature, but it fails when the camera motion undergoes sudden changes, e.g. when a camera stops suddenly [19]. We apply our proposed prediction algorithm to the problem and overcome it. The basic idea of our algorithm is to predict the line locations by hierarchically searching the camera parameter space for the parameter set with the maximum number of matchings. Suppose that the camera parameters of the (k-1)th and the kth frames have already been calculated. (f_k, α_k and β_k denote the focal length, tilt angle and pan angle at the kth frame, respectively.) We want to predict the camera parameters f̂_{k+1}, α̂_{k+1} and β̂_{k+1} at the (k+1)th frame, and match lines based on them. The algorithm is performed hierarchically. First, we compute the deviation of each parameter between the previous two frames (f_k - f_{k-1}, α_k - α_{k-1} and β_k - β_{k-1}), and then we quantise the camera parameter space with them. The quantised parameters are used as candidates. The search range of the camera parameter space is determined depending on the speed of the camera. For example, if the camera can be assumed to have constant velocity, there will be just one candidate (f̂_{k+1} = f_k + (f_k - f_{k-1}), α̂_{k+1} = α_k + (α_k - α_{k-1}) and β̂_{k+1} = β_k + (β_k - β_{k-1})). Among the candidates, we select the parameter set with the maximum number of matching lines as the solution. Next, we reduce the quantisation step, and then search for the best parameter set as before. The procedure is repeated until the matching number no longer increases by more than a user-given threshold. The details of the algorithm are as follows.

1. Set i = 0, Δ = 1.0, f(0) = f_k, α(0) = α_k and β(0) = β_k.
2. Select candidates using the following equations:

    f̂(l) = f(i) + lΔ|f_k - f_{k-1}|,   l = l_min,...,l_max    (18)
    α̂(m) = α(i) + mΔ|α_k - α_{k-1}|,   m = m_min,...,m_max    (19)
    β̂(n) = β(i) + nΔ|β_k - β_{k-1}|,   n = n_min,...,n_max    (20)

3. For all the candidates, calculate the number of matching lines using a proximity rule, which is explained in Section 4.1.
4. Choose the parameter set with the maximum number N_match(i) of matching lines, and store its corresponding camera parameters as f̂_min, α̂_min and β̂_min. Then increase i.
5. If N_match(i) - N_match(i-1) ≤ N_thresh, a user-specified value, then go to Step 7. Otherwise, go to the next step.
6. Replace f(i), α(i) and β(i) with f̂_min, α̂_min and β̂_min, respectively. Set l, m, n ∈ {-1, 0, 1} and Δ = 2^{-i}, and go to Step 2.
7. Select f̂_min, α̂_min and β̂_min as f̂_{k+1}, α̂_{k+1} and β̂_{k+1}, respectively.
8. Match the previously tracked lines with the predicted lines using our proximity rule. Refine the matching lines using the LMedS method [20].

Steps 1-7 correspond to the CP-guided prediction algorithm, and Step 8 corresponds to the robust line matching. The prediction algorithm gives a set of matching lines. The matching by the proximity rule matches the lines on the playing field, not those on players, due to the predicted camera parameters.

Fig. 3. Original soccer video. (a) The reference image, (b) the other images. The order is from top left to bottom right.
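Steps 1-7 can be sketched as a generic hierarchical search. `count_matches` stands in for the proximity-rule matching of Step 3; the peaked score function at the end is a purely synthetic stand-in for it, and all numeric values are invented:

```python
def cp_guided_predict(prev, cur, count_matches, n_thresh=1, max_iter=10):
    """CP-guided prediction (Steps 1-7): hierarchically search the camera
    parameter space for the candidate with the most matched lines.
    prev, cur: (f, alpha, beta) at frames k-1 and k."""
    dev = [abs(c - p) for p, c in zip(prev, cur)]   # per-parameter deviation
    centre, step, best_n = cur, 1.0, None
    for i in range(max_iter):
        # Step 2: quantised candidates around the current centre
        best = max(
            ((count_matches(centre[0] + l * step * dev[0],
                            centre[1] + m * step * dev[1],
                            centre[2] + n * step * dev[2]),
              (centre[0] + l * step * dev[0],
               centre[1] + m * step * dev[1],
               centre[2] + n * step * dev[2]))
             for l in (-1, 0, 1) for m in (-1, 0, 1) for n in (-1, 0, 1)),
            key=lambda t: t[0])
        # Step 5: stop once the match count no longer improves enough
        if best_n is not None and best[0] - best_n <= n_thresh:
            break
        best_n, centre = best                       # Step 6: recentre
        step *= 0.5                                 # refine the quantisation
    return centre                                   # Step 7: prediction

# Synthetic score peaked at f = 1045, alpha = 0.022, beta = 0.038
score = lambda f, a, b: 100.0 - (abs(f - 1045) / 2 + 1000 * abs(a - 0.022)
                                 + 1000 * abs(b - 0.038))
pred = cp_guided_predict((1000.0, 0.0, 0.0), (1020.0, 0.01, 0.02), score)
```

Starting from the frame-k parameters, the first pass lands on the constant-velocity extrapolation (1040, 0.02, 0.04) and later passes refine around it, which is exactly why the method survives sudden motion changes: the grid always contains the "camera stopped" candidate (l = m = n = 0) as well.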
Then the LMedS method refines the matching lines by removing false matches. The threshold value N_thresh should be set depending on the outliers, like moving players in a soccer sequence, and on the proximity parameters specified by users for line matching (Section 4.1). Fortunately, we found that the number of iterations is not very sensitive to the value of N_thresh. In our case, we set the value to 0.1 times the number of currently matched lines for the sequences used in the experiments.

4.1. Proximity Rule

Let us explain our proximity rule for line matching. Suppose that l_0 = {l_0^1,...,l_0^M} and l_1 = {l_1^1,...,l_1^N} are two different sets of line segments. We want to match each l_0^i with the best matching line that satisfies the proximity rule and has the minimum matching distance among l_1. Remember that the matching distance is given in Eq. (16). First, we check the following conditions between each matching pair l_0^i and l_1^j:

    |θ_0 - θ_1| ≤ θ_th
    |d_0 - d_1| ≤ d_th
    max(ol_x, ol_y) ≥ ol_th
    g_0^T g_1 ≥ 0    (21)

where θ_k, d_k and g_k denote the slope angle, the distance from the image origin, and the average intensity gradient of line segment l_k, respectively. ol_x and ol_y are the lengths of the overlapping region projected onto the x-axis and y-axis, respectively, and g_0^T g_1 ≥ 0 means that the gradients should not be in opposite directions, which resolves matching ambiguities between line segments on linear bands (e.g. thick lines). θ_th, d_th and ol_th are user-specified threshold values. (Figure 2 shows the described variables.) Among the matching candidates that satisfy all the proximity conditions, the line segment with the minimum matching distance is selected as the matching pair for each line segment.

Fig. 4. Predicted feature position. (a) 3rd frame, (b) 6th frame, (c) 9th frame, (d) 12th frame.

5. THE INITIALISATION STAGE

As previously mentioned, in the initialisation stage the initial line matching and the camera parameter estimation are performed for the line segments extracted in the first two frames. In this section, we describe the details of the components of this stage.

5.1. Initial Line Matching

We extract line features l_0 and l_1, respectively, in the first two frames I_0 and I_1 using the standard Hough transform [21], and then we match the line segments as follows. Because we do not know the camera parameters in the initialisation stage, we exhaustively search the camera parameter space using a modified version of the CP-guided prediction algorithm. Steps 1 and 2 in the prediction algorithm are replaced with the following steps. In Step 1, we set f_0 to several typical values, such as 500, 1000 and 2000 pixels, f_1 - f_0 = 20 pixels, α_1 - α_0 = 1° and β_1 - β_0 = 1°.
In Step 2, candidates are selected as follows:

    f̂(l) = f(i) + l|f_1 - f_0|,   l = -l_max,...,l_max    (22)
    α̂(m) = α(i) + m|α_1 - α_0|,   m = -m_max,...,m_max    (23)
    β̂(n) = β(i) + n|β_1 - β_0|,   n = -n_max,...,n_max    (24)

The number of candidates above is huge, but the candidates are only used once, in the initialisation stage, and the search can be implemented hierarchically, as in the CP-guided tracking algorithm. The camera parameter set with the minimum matching error is selected as the solution. Based on this, we match lines using the proximity rule. From the matching lines, the inter-image homography is estimated using the LMedS method, and false matches and outliers are rejected [20]. Then, the camera parameters are estimated and an accurate image mosaic is constructed using the nonlinear self-calibration method. Finally, lines l_1 are transferred into the reference frame I_0, and l_0 is updated by adding them. In this paper, the updated line set l_0 is called the line model. The line model is updated during tracking, and all lines are registered to it. Therefore, image mosaicing with respect to the reference image can be performed even when there is no overlapping region between the reference image and the current frame. Practically, when the zooming and rotation angles of the camera are small at the beginning of a soccer video, a translation-only alignment can be used [22]. In the translation-only alignment, the matching errors are computed between translated versions of I_1 and I_0, and the translation with the minimum error is selected as the solution.

Fig. 5. Estimated camera parameters with respect to the reference image. (a) The rotation angles, (b) the focal lengths, (c) the focal length ratios.

6. THE TRACKING STAGE

The tracking stage carries out sequential image mosaicing using self-calibration and line tracking. Details are described in the following subsections.

6.1. CP-Guided Line Tracking

First, line features l_k are extracted in the current kth frame I_k. Then we predict the positions l̂_k of the line model in the current frame in order to match them with the extracted line features. The prediction is performed using the previously estimated camera parameters. We predict the camera parameters f̂_k, α̂_k and β̂_k using our CP-guided prediction algorithm (Steps 1-7 in the tracking algorithm). Using the predicted camera parameters, the lines l̂_k and l_k are matched by our proximity rule. Then the matching lines are refined by a robust line matching procedure.

In the robust line matching procedure, we use the LMedS method, and its algorithm is described as follows (refer to Zhang [20] for details of the LMedS method):

1. Randomly select five pairs from the matching lines.
2. From them, compute an inter-image homography using the singular value decomposition [18].
3. Measure the homography quality for all matching lines. (Each matching line in the reference frame is transferred to the current frame by the homography. The distance between the transferred line and the corresponding line in the current frame is calculated using Eq. (17). We define the median of these values as the quality of the homography.)
4. Repeat Steps 1-3 until a sufficient number of samplings, which can be theoretically specified as in the paper by Zhang [20], is reached.
5. Select the matching pairs and the homography with the best quality, i.e. the one with the smallest median value, as the solution.
6. Reject the outliers of the selected homography, and select the inliers as the final matching lines.

Fig. 6. Image mosaics. (a) The reference frame, (b) 3rd frame, (c) 6th frame, (d) 9th frame, (e) the final 12th frame.

6.2. Self-Calibration and Homography Estimation

From the homography with the best quality, the camera parameters are estimated using the linear self-calibration method. Next, the nonlinear algorithm refines the linear solution and improves the inter-image homography.

7. EXPERIMENTAL RESULTS

In this section, we apply our image mosaicing algorithm to three video sources. One is captured from a single viewpoint, and the others are stereo videos captured from two different viewpoints. The first soccer video is shown in Fig. 3. The images are captured from every fourth frame. The camera zooms in and rotates simultaneously, then it continues to zoom in with little rotation. Figure 3(a) is the reference image to which the other images are to be registered. For each frame, line features are extracted using the Hough transform. Using our CP-guided prediction algorithm and robust line matching, the line segments are matched and tracked. In Fig. 4, the predicted positions of the tracked line segments are overlaid on the frames (3rd, 6th, 9th and 12th frames). We can see that our prediction algorithm places the predicted lines near the line segments in the current frame, so that our robust line matching algorithm can work. Figure 5 shows the self-calibration results after 30 iterations. Figures 5(a), (b) and (c) show the rotation angles, the focal lengths and the focal length ratio, respectively. Since the camera parameters are estimated from homographies, as presented in Section 3, good estimation of the camera parameters means that the homographies, which are given as our mosaicing result, are accurately estimated. We estimate the camera parameters with respect to the reference frame, so f_0 should be constant for all frames.
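The LMedS loop of the robust line matching procedure can be sketched as follows. Lines map between frames by the inverse transpose of the point homography, so the five sampled pairs are fed to a point-style DLT acting on line coordinates; for brevity the residual here is an algebraic distance between unit-normalised line vectors rather than Eq. (17), and the test homography and outlier fraction are invented:

```python
import numpy as np

def dlt(src, dst):
    """Fit G with dst ~ G src (rows are homogeneous 3-vectors) via SVD."""
    rows = []
    for s, d in zip(src, dst):
        rows.append(np.concatenate([np.zeros(3), -d[2] * s, d[1] * s]))
        rows.append(np.concatenate([d[2] * s, np.zeros(3), -d[0] * s]))
    return np.linalg.svd(np.array(rows))[2][-1].reshape(3, 3)

def lmeds_line_homography(l0, lk, n_iter=200, seed=1):
    """Steps 1-6: sample five line pairs, fit, score by the median residual."""
    rng = np.random.default_rng(seed)
    unit = lambda v: v / np.linalg.norm(v, axis=1, keepdims=True)
    ref = unit(lk)
    best = None
    for _ in range(n_iter):
        idx = rng.choice(len(l0), 5, replace=False)   # Step 1
        G = dlt(l0[idx], lk[idx])                     # Step 2: G = H^{-T}
        pred = unit((G @ l0.T).T)                     # Step 3: transfer all lines
        res = np.minimum(np.linalg.norm(pred - ref, axis=1),
                         np.linalg.norm(pred + ref, axis=1))
        med = np.median(res)
        if best is None or med < best[0]:             # Step 5: smallest median
            best = (med, G, res)
    med, G, res = best
    inliers = res < max(10 * med, 1e-6)               # Step 6: reject outliers
    H = np.linalg.inv(G).T
    return H / H[2, 2], inliers

# Synthetic check: 16 clean line matches plus 4 gross outliers
H_true = np.array([[1.2, 0.01, 30.0], [-0.01, 1.2, -20.0], [1e-4, 0.0, 1.0]])
rng = np.random.default_rng(0)
l0 = rng.normal(size=(20, 3))
lk = (np.linalg.inv(H_true).T @ l0.T).T
lk[:4] = rng.normal(size=(4, 3))                      # false matches
H_est, inliers = lmeds_line_homography(l0, lk)
```

Because the median is taken over all residuals, up to half the matches may be wrong and the sampled homography from an all-inlier draw still wins, which is the property the text relies on for scenes full of moving players.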
However, up to the third frame, where the rotation angles are small, the focal lengths are unstable, but their ratio is stable, as shown in Figs 5(b) and (c). Nevertheless, our algorithm works well, as shown in Fig. 4(a). This is because, when the rotation angles are small, the displacement of features depends only upon the ratio of the focal lengths. In this case, Eq. (3) reduces to H_k = K_k K_0^{-1} = diag(f_k/f_0, f_k/f_0, 1). After the fourth frame, the focal lengths seem to be correctly estimated. For all frames, the focal length ratio and the rotation angles seem to be correctly estimated, and based on them, we can track lines. The mosaicing results of the video are shown in Fig. 6. Each image mosaic is simply merged by averaging the mosaic of the previous frames and the registered current frame. You can see that the lines on the playing ground, the advertising boards and the auditorium are sharply registered in all the frames.

Fig. 7. Stereo soccer video. The two top rows of video are captured by the left camera and the two bottom rows are captured by the right camera. The order is from top left to bottom right.

Stereo videos are shown in Fig. 7. As can be seen, the video pairs are not synchronised. The video captured by the left camera is played in slow motion. The motion of the two stereo cameras is almost all zooming, with little rotation. Figure 8 shows the estimated camera parameters of the stereo cameras. Since the rotation angles are small and the images are somewhat blurred, some frames give incorrect results, for example, the 8th frame captured by the left camera. Figure 9 shows the last frames and their image mosaics. The overlaid curves and lines are explained in the next section.

8. APPLICATIONS

When we construct accurate image mosaics, we can extract more information from the video. In Fig. 9, ball trajectories are shown in the stereo image mosaics. Since the stereo videos are not synchronised, stereo matching of the ball is not possible, and therefore direct determination of the 3D ball trajectory is also not possible. However, after registering the other frames to the reference images, we can obtain the trajectory of the ball in each mosaic of the stereo pair. Therefore, although we cannot match the ball between each corresponding image pair, we can match the trajectories of the ball between the stereo image mosaics. This means that we can treat a dynamic object (the ball) as a static structure (the ball trajectory), thanks to accurate image mosaicing. First, we manually point out the positions of the ball in the video sequences, and transfer the positions to the image mosaics using the computed inter-image homographies. Assuming that the ball trajectories are smooth, the transferred positions are interpolated using a cubic spline [18]. The trajectories can be seen in Fig. 9.
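The transfer-and-interpolate step, and the intersection of a mosaic trajectory with a line (such as an epipolar line), might be sketched as follows. The homography, the clicked ball positions and the line are all invented for illustration:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def to_mosaic(H_k, pts):
    """Transfer per-frame points (Nx2) into the reference mosaic via H_k^{-1},
    since u_k = H_k u_0."""
    q = np.column_stack([pts, np.ones(len(pts))]) @ np.linalg.inv(H_k).T
    return q[:, :2] / q[:, 2:]

# Hypothetical zoom-only homography and manually clicked ball positions
H_k = np.diag([1.25, 1.25, 1.0])
frames = np.arange(5.0)
ball = np.column_stack([40 * frames, 100 - 4.9 * frames**2])  # sample parabola
traj = to_mosaic(H_k, ball)

# Smooth trajectory in the mosaic: cubic spline over the frame index
spline = CubicSpline(frames, traj)
dense = spline(np.linspace(0.0, 4.0, 401))

# Intersect with a line l = (a, b, c): take the closest sampled point
l = np.array([1.0, 0.0, -64.0])                      # the vertical line x = 64
dist = np.abs(dense @ l[:2] + l[2]) / np.hypot(l[0], l[1])
match = dense[np.argmin(dist)]
```

Densely sampling the spline and taking the point nearest the line is a blunt but robust substitute for solving the cubic intersection analytically; with the values above the line passes through the transferred frame-2 position, so `match` lands there.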
Next, we compute the fundamental matrix between the stereo image mosaics, and then, based on it, the epipolar line of each point on the right trajectory can be determined in the left image mosaic (see the two epipolar lines on the left image). As a result, the intersection point between the epipolar line and the ball trajectory in the left image is the matching point. In Fig. 9, the matching pairs of two points are shown. From the matching positions, we can compute the 3D positions of the ball, and thus the 3D ball trajectory as well. The information can be used for the synthesis of new video from the viewpoint of the soccer ball.

Fig. 8. Estimated camera parameters. The result of the left camera is shown in the left column and the result of the right camera is shown in the right column. (a) and (b) are the focal lengths, (c) and (d) are the ratios of the focal lengths, (e) and (f) are the rotation angles.

Fig. 9. Image mosaics and ball trajectories. (a) Image mosaic of the left camera, (b) image mosaic of the right camera.

9. CONCLUDING REMARKS

In this paper, we have proposed an accurate and robust image mosaicing method for soccer video using line tracking and self-calibration. Our approach is to track line features on the playing field. The line features are detected, and they are tracked using self-calibration. Experimental results show the accuracy and robustness of the algorithm. We have also presented an application of mosaics to calculate 3D ball trajectories from an unsynchronised stereo camera.

References

1. Kim T, Seo Y, Hong KS. Physics-based 3D position analysis of a soccer ball from monocular image sequences. Proc Int Conf on Computer Vision, 1998
2. Seo Y, Choi S, Kim H, Hong KS. Where are the ball and players? Soccer game analysis with color-based tracking and image mosaick. Proc Int Conf on Image Analysis and Processing, September
3. Reid I, Zisserman A. Goal-directed video metrology. Proc Euro Conf on Computer Vision 1996; II
4. Irani M, Anandan P, Bergen J, Kumar R, Hsu S. Efficient representations of video sequences and their applications. Signal Processing: Image Communication 1996; 8
5. Irani M, Anandan P. Video indexing based on mosaic representations. Proc IEEE 1998; 86(5)
6. Irani M, Rousso B, Peleg S. Computing occluding and transparent motions. Int J Computer Vision 1994; 12(1)
7. Sawhney HS, Ayer S. Compact representations of videos through dominant and multiple motion estimation. IEEE Trans Pattern Analysis and Machine Intelligence 1996; 18(8)
8. Szeliski R, Shum HY. Creating full view panoramic image mosaics and environment maps. Proc SIGGRAPH 1997
9. Harris C. Tracking with rigid models. In: Active Vision, A Blake, A Yuille (eds). MIT Press
10. Clarke JC, Carlsson S, Zisserman A. Detecting and tracking linear features efficiently. Proc British Machine Vision Conference
11. Morimoto C, Chellappa R. Fast 3D stabilization and mosaic construction. Proc Int Conf on Computer Vision and Pattern Recognition 1997

12. Kim H, Hong KS. A practical self-calibration method of pan-tilt cameras. Proc Int Conf on Pattern Recognition 2000 (also available as POSTECH Technical Report TR-9901, Pohang University of Science and Technology, October 1999)
13. de Agapito L, Hayman E, Reid I. Self-calibration of a rotating camera with varying intrinsic parameters. Proc British Machine Vision Conf 1998
14. de Agapito L, Hartley RI, Hayman E. Linear calibration of a rotating and zooming camera. Proc Int Conf on Computer Vision and Pattern Recognition 1999; I
15. Seo Y, Hong KS. Auto-calibration of a rotating and zooming camera. Proc IAPR Workshop on Machine Vision Applications 1998
16. Seo Y, Hong KS. About the self-calibration of a rotating and zooming camera: theory and practice. Proc Int Conf on Computer Vision 1999
17. Hartley RI. Projective reconstruction from line correspondences. Proc Int Conf on Computer Vision and Pattern Recognition 1994
18. Press WH, Teukolsky SA, Vetterling WT, Flannery BP. Numerical Recipes in C: The Art of Scientific Computing. Cambridge University Press
19. Faugeras O. Three-Dimensional Computer Vision. MIT Press
20. Zhang Z. Parameter Estimation Techniques: A Tutorial with Application to Conic Fitting. INRIA Technical Report RR-2676, October
21. Pitas I. Digital Image Processing Algorithms. Prentice Hall, 1993
22. Peleg S, Herman J. Panoramic mosaics by manifold projection. Proc Int Conf on Computer Vision and Pattern Recognition 1997

Hyunwoo Kim received a BS degree from Hanyang University, Seoul, Korea, in 1994, and an MS degree from POSTECH, Pohang, Korea. He is currently a PhD candidate in the Department of Electronic and Electrical Engineering, POSTECH, Pohang, Korea. His current research interests include computer vision, virtual reality, augmented reality and computer graphics.
Ki-Sang Hong received a BS degree in Electronic Engineering from Seoul National University, Korea, in 1977, and MS and PhD degrees in Electrical & Electronic Engineering from KAIST, Korea, in 1979 and 1984, respectively. He was a researcher at the Korea Atomic Energy Research Institute, and in 1986 he joined POSTECH, Korea, where he is currently an associate professor of Electrical & Electronic Engineering. He has also worked in the Robotics Institute at Carnegie Mellon University, Pittsburgh, PA, as a visiting professor. His current research interests include computer vision, augmented reality and pattern recognition.

Correspondence and offprint requests to: K. S. Hong, Department of Electronic and Electrical Engineering, Pohang University of Science and Technology, Pohang, Korea. E-mail: hongks@postech.ac.kr
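The inter-image homography that the concluding remarks refer to follows from the standard model of a rotating and zooming camera: two frames taken from the same optical centre are related by H = K2 R K1^(-1), where K1 and K2 are the intrinsic matrices before and after the zoom and R is the inter-frame rotation. The sketch below is an illustrative reconstruction of that relation, not code from the paper; the focal lengths, principal point and rotation are hypothetical values chosen only to make the mapping concrete.

```python
# Sketch of the inter-image homography H = K2 * R * inv(K1) for a camera
# that rotates and zooms about a fixed optical centre.  All numeric values
# here are illustrative assumptions, not parameters from the paper.

def mat_mul(a, b):
    """Multiply two 3x3 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def K(f, cx=0.0, cy=0.0):
    """Simple intrinsic matrix: focal length f, principal point (cx, cy)."""
    return [[f, 0.0, cx],
            [0.0, f, cy],
            [0.0, 0.0, 1.0]]

def K_inv(f, cx=0.0, cy=0.0):
    """Closed-form inverse of the intrinsic matrix above."""
    return [[1.0 / f, 0.0, -cx / f],
            [0.0, 1.0 / f, -cy / f],
            [0.0, 0.0, 1.0]]

def homography(f1, f2, R, cx=0.0, cy=0.0):
    """Inter-image homography H = K2 R K1^{-1} for a rotating/zooming camera."""
    return mat_mul(K(f2, cx, cy), mat_mul(R, K_inv(f1, cx, cy)))

def warp_point(H, x, y):
    """Apply H to an image point (x, y) and dehomogenise."""
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return u / w, v / w

if __name__ == "__main__":
    I = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    # Pure zoom (no rotation): doubling the focal length scales image
    # coordinates about the principal point by a factor of two.
    H = homography(f1=500.0, f2=1000.0, R=I)
    print(warp_point(H, 10.0, 20.0))   # approximately (20.0, 40.0)
```

In a mosaicing pipeline such homographies, one per frame relative to a reference frame, are what the line-tracking and self-calibration stages estimate; each frame is then warped into the reference coordinate system to composite the mosaic.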

Compositing a bird's eye view mosaic

Compositing a bird's eye view mosaic Compositing a bird's eye view mosaic Robert Laganiere School of Information Technology and Engineering University of Ottawa Ottawa, Ont KN 6N Abstract This paper describes a method that allows the composition

More information

1st frame Figure 1: Ball Trajectory, shadow trajectory and a reference player 48th frame the points S and E is a straight line and the plane formed by

1st frame Figure 1: Ball Trajectory, shadow trajectory and a reference player 48th frame the points S and E is a straight line and the plane formed by Physics-based 3D Position Analysis of a Soccer Ball from Monocular Image Sequences Taeone Kim, Yongduek Seo, Ki-Sang Hong Dept. of EE, POSTECH San 31 Hyoja Dong, Pohang, 790-784, Republic of Korea Abstract

More information

AUTOMATIC RECTIFICATION OF LONG IMAGE SEQUENCES. Kenji Okuma, James J. Little, David G. Lowe

AUTOMATIC RECTIFICATION OF LONG IMAGE SEQUENCES. Kenji Okuma, James J. Little, David G. Lowe AUTOMATIC RECTIFICATION OF LONG IMAGE SEQUENCES Kenji Okuma, James J. Little, David G. Lowe The Laboratory of Computational Intelligence The University of British Columbia Vancouver, British Columbia,

More information

Robust Camera Calibration from Images and Rotation Data

Robust Camera Calibration from Images and Rotation Data Robust Camera Calibration from Images and Rotation Data Jan-Michael Frahm and Reinhard Koch Institute of Computer Science and Applied Mathematics Christian Albrechts University Kiel Herman-Rodewald-Str.

More information

Stereo and Epipolar geometry

Stereo and Epipolar geometry Previously Image Primitives (feature points, lines, contours) Today: Stereo and Epipolar geometry How to match primitives between two (multiple) views) Goals: 3D reconstruction, recognition Jana Kosecka

More information

calibrated coordinates Linear transformation pixel coordinates

calibrated coordinates Linear transformation pixel coordinates 1 calibrated coordinates Linear transformation pixel coordinates 2 Calibration with a rig Uncalibrated epipolar geometry Ambiguities in image formation Stratified reconstruction Autocalibration with partial

More information

Step-by-Step Model Buidling

Step-by-Step Model Buidling Step-by-Step Model Buidling Review Feature selection Feature selection Feature correspondence Camera Calibration Euclidean Reconstruction Landing Augmented Reality Vision Based Control Sparse Structure

More information

1-2 Feature-Based Image Mosaicing

1-2 Feature-Based Image Mosaicing MVA'98 IAPR Workshop on Machine Vision Applications, Nov. 17-19, 1998, Makuhari, Chibq Japan 1-2 Feature-Based Image Mosaicing Naoki Chiba, Hiroshi Kano, Minoru Higashihara, Masashi Yasuda, and Masato

More information

A Summary of Projective Geometry

A Summary of Projective Geometry A Summary of Projective Geometry Copyright 22 Acuity Technologies Inc. In the last years a unified approach to creating D models from multiple images has been developed by Beardsley[],Hartley[4,5,9],Torr[,6]

More information

Model Refinement from Planar Parallax

Model Refinement from Planar Parallax Model Refinement from Planar Parallax A. R. Dick R. Cipolla Department of Engineering, University of Cambridge, Cambridge, UK {ard28,cipolla}@eng.cam.ac.uk Abstract This paper presents a system for refining

More information

Week 2: Two-View Geometry. Padua Summer 08 Frank Dellaert

Week 2: Two-View Geometry. Padua Summer 08 Frank Dellaert Week 2: Two-View Geometry Padua Summer 08 Frank Dellaert Mosaicking Outline 2D Transformation Hierarchy RANSAC Triangulation of 3D Points Cameras Triangulation via SVD Automatic Correspondence Essential

More information

Self-calibration of a pair of stereo cameras in general position

Self-calibration of a pair of stereo cameras in general position Self-calibration of a pair of stereo cameras in general position Raúl Rojas Institut für Informatik Freie Universität Berlin Takustr. 9, 14195 Berlin, Germany Abstract. This paper shows that it is possible

More information

Lucas-Kanade Image Registration Using Camera Parameters

Lucas-Kanade Image Registration Using Camera Parameters Lucas-Kanade Image Registration Using Camera Parameters Sunghyun Cho a, Hojin Cho a, Yu-Wing Tai b, Young Su Moon c, Junguk Cho c, Shihwa Lee c, and Seungyong Lee a a POSTECH, Pohang, Korea b KAIST, Daejeon,

More information

Feature-Based Image Mosaicing

Feature-Based Image Mosaicing Systems and Computers in Japan, Vol. 31, No. 7, 2000 Translated from Denshi Joho Tsushin Gakkai Ronbunshi, Vol. J82-D-II, No. 10, October 1999, pp. 1581 1589 Feature-Based Image Mosaicing Naoki Chiba and

More information

A COMPREHENSIVE TOOL FOR RECOVERING 3D MODELS FROM 2D PHOTOS WITH WIDE BASELINES

A COMPREHENSIVE TOOL FOR RECOVERING 3D MODELS FROM 2D PHOTOS WITH WIDE BASELINES A COMPREHENSIVE TOOL FOR RECOVERING 3D MODELS FROM 2D PHOTOS WITH WIDE BASELINES Yuzhu Lu Shana Smith Virtual Reality Applications Center, Human Computer Interaction Program, Iowa State University, Ames,

More information

Estimation of common groundplane based on co-motion statistics

Estimation of common groundplane based on co-motion statistics Estimation of common groundplane based on co-motion statistics Zoltan Szlavik, Laszlo Havasi 2, Tamas Sziranyi Analogical and Neural Computing Laboratory, Computer and Automation Research Institute of

More information

A Robust Two Feature Points Based Depth Estimation Method 1)

A Robust Two Feature Points Based Depth Estimation Method 1) Vol.31, No.5 ACTA AUTOMATICA SINICA September, 2005 A Robust Two Feature Points Based Depth Estimation Method 1) ZHONG Zhi-Guang YI Jian-Qiang ZHAO Dong-Bin (Laboratory of Complex Systems and Intelligence

More information

A linear algorithm for Camera Self-Calibration, Motion and Structure Recovery for Multi-Planar Scenes from Two Perspective Images

A linear algorithm for Camera Self-Calibration, Motion and Structure Recovery for Multi-Planar Scenes from Two Perspective Images A linear algorithm for Camera Self-Calibration, Motion and Structure Recovery for Multi-Planar Scenes from Two Perspective Images Gang Xu, Jun-ichi Terai and Heung-Yeung Shum Microsoft Research China 49

More information

arxiv: v1 [cs.cv] 28 Sep 2018

arxiv: v1 [cs.cv] 28 Sep 2018 Camera Pose Estimation from Sequence of Calibrated Images arxiv:1809.11066v1 [cs.cv] 28 Sep 2018 Jacek Komorowski 1 and Przemyslaw Rokita 2 1 Maria Curie-Sklodowska University, Institute of Computer Science,

More information

Object Recognition with Invariant Features

Object Recognition with Invariant Features Object Recognition with Invariant Features Definition: Identify objects or scenes and determine their pose and model parameters Applications Industrial automation and inspection Mobile robots, toys, user

More information

Multiple Motion Scene Reconstruction from Uncalibrated Views

Multiple Motion Scene Reconstruction from Uncalibrated Views Multiple Motion Scene Reconstruction from Uncalibrated Views Mei Han C & C Research Laboratories NEC USA, Inc. meihan@ccrl.sj.nec.com Takeo Kanade Robotics Institute Carnegie Mellon University tk@cs.cmu.edu

More information

Video Alignment. Literature Survey. Spring 2005 Prof. Brian Evans Multidimensional Digital Signal Processing Project The University of Texas at Austin

Video Alignment. Literature Survey. Spring 2005 Prof. Brian Evans Multidimensional Digital Signal Processing Project The University of Texas at Austin Literature Survey Spring 2005 Prof. Brian Evans Multidimensional Digital Signal Processing Project The University of Texas at Austin Omer Shakil Abstract This literature survey compares various methods

More information

Textureless Layers CMU-RI-TR Qifa Ke, Simon Baker, and Takeo Kanade

Textureless Layers CMU-RI-TR Qifa Ke, Simon Baker, and Takeo Kanade Textureless Layers CMU-RI-TR-04-17 Qifa Ke, Simon Baker, and Takeo Kanade The Robotics Institute Carnegie Mellon University 5000 Forbes Avenue Pittsburgh, PA 15213 Abstract Layers are one of the most well

More information

Factorization Method Using Interpolated Feature Tracking via Projective Geometry

Factorization Method Using Interpolated Feature Tracking via Projective Geometry Factorization Method Using Interpolated Feature Tracking via Projective Geometry Hideo Saito, Shigeharu Kamijima Department of Information and Computer Science, Keio University Yokohama-City, 223-8522,

More information

Auto-calibration Kruppa's equations and the intrinsic parameters of a camera

Auto-calibration Kruppa's equations and the intrinsic parameters of a camera Auto-calibration Kruppa's equations and the intrinsic parameters of a camera S.D. Hippisley-Cox & J. Porrill AI Vision Research Unit University of Sheffield e-mail: [S.D.Hippisley-Cox,J.Porrill]@aivru.sheffield.ac.uk

More information

A Novel Stereo Camera System by a Biprism

A Novel Stereo Camera System by a Biprism 528 IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, VOL. 16, NO. 5, OCTOBER 2000 A Novel Stereo Camera System by a Biprism DooHyun Lee and InSo Kweon, Member, IEEE Abstract In this paper, we propose a novel

More information

Stereo Image Rectification for Simple Panoramic Image Generation

Stereo Image Rectification for Simple Panoramic Image Generation Stereo Image Rectification for Simple Panoramic Image Generation Yun-Suk Kang and Yo-Sung Ho Gwangju Institute of Science and Technology (GIST) 261 Cheomdan-gwagiro, Buk-gu, Gwangju 500-712 Korea Email:{yunsuk,

More information

Precise Omnidirectional Camera Calibration

Precise Omnidirectional Camera Calibration Precise Omnidirectional Camera Calibration Dennis Strelow, Jeffrey Mishler, David Koes, and Sanjiv Singh Carnegie Mellon University {dstrelow, jmishler, dkoes, ssingh}@cs.cmu.edu Abstract Recent omnidirectional

More information

A Factorization Method for Structure from Planar Motion

A Factorization Method for Structure from Planar Motion A Factorization Method for Structure from Planar Motion Jian Li and Rama Chellappa Center for Automation Research (CfAR) and Department of Electrical and Computer Engineering University of Maryland, College

More information

Rectification and Distortion Correction

Rectification and Distortion Correction Rectification and Distortion Correction Hagen Spies March 12, 2003 Computer Vision Laboratory Department of Electrical Engineering Linköping University, Sweden Contents Distortion Correction Rectification

More information

Efficient Stereo Image Rectification Method Using Horizontal Baseline

Efficient Stereo Image Rectification Method Using Horizontal Baseline Efficient Stereo Image Rectification Method Using Horizontal Baseline Yun-Suk Kang and Yo-Sung Ho School of Information and Communicatitions Gwangju Institute of Science and Technology (GIST) 261 Cheomdan-gwagiro,

More information

Performance Evaluation Metrics and Statistics for Positional Tracker Evaluation

Performance Evaluation Metrics and Statistics for Positional Tracker Evaluation Performance Evaluation Metrics and Statistics for Positional Tracker Evaluation Chris J. Needham and Roger D. Boyle School of Computing, The University of Leeds, Leeds, LS2 9JT, UK {chrisn,roger}@comp.leeds.ac.uk

More information

An Algorithm for Seamless Image Stitching and Its Application

An Algorithm for Seamless Image Stitching and Its Application An Algorithm for Seamless Image Stitching and Its Application Jing Xing, Zhenjiang Miao, and Jing Chen Institute of Information Science, Beijing JiaoTong University, Beijing 100044, P.R. China Abstract.

More information

Coplanar circles, quasi-affine invariance and calibration

Coplanar circles, quasi-affine invariance and calibration Image and Vision Computing 24 (2006) 319 326 www.elsevier.com/locate/imavis Coplanar circles, quasi-affine invariance and calibration Yihong Wu *, Xinju Li, Fuchao Wu, Zhanyi Hu National Laboratory of

More information

FLY THROUGH VIEW VIDEO GENERATION OF SOCCER SCENE

FLY THROUGH VIEW VIDEO GENERATION OF SOCCER SCENE FLY THROUGH VIEW VIDEO GENERATION OF SOCCER SCENE Naho INAMOTO and Hideo SAITO Keio University, Yokohama, Japan {nahotty,saito}@ozawa.ics.keio.ac.jp Abstract Recently there has been great deal of interest

More information

ROBUST LINE-BASED CALIBRATION OF LENS DISTORTION FROM A SINGLE VIEW

ROBUST LINE-BASED CALIBRATION OF LENS DISTORTION FROM A SINGLE VIEW ROBUST LINE-BASED CALIBRATION OF LENS DISTORTION FROM A SINGLE VIEW Thorsten Thormählen, Hellward Broszio, Ingolf Wassermann thormae@tnt.uni-hannover.de University of Hannover, Information Technology Laboratory,

More information

Measurement of Pedestrian Groups Using Subtraction Stereo

Measurement of Pedestrian Groups Using Subtraction Stereo Measurement of Pedestrian Groups Using Subtraction Stereo Kenji Terabayashi, Yuki Hashimoto, and Kazunori Umeda Chuo University / CREST, JST, 1-13-27 Kasuga, Bunkyo-ku, Tokyo 112-8551, Japan terabayashi@mech.chuo-u.ac.jp

More information

3D Motion from Image Derivatives Using the Least Trimmed Square Regression

3D Motion from Image Derivatives Using the Least Trimmed Square Regression 3D Motion from Image Derivatives Using the Least Trimmed Square Regression Fadi Dornaika and Angel D. Sappa Computer Vision Center Edifici O, Campus UAB 08193 Bellaterra, Barcelona, Spain {dornaika, sappa}@cvc.uab.es

More information

IMPACT OF SUBPIXEL PARADIGM ON DETERMINATION OF 3D POSITION FROM 2D IMAGE PAIR Lukas Sroba, Rudolf Ravas

IMPACT OF SUBPIXEL PARADIGM ON DETERMINATION OF 3D POSITION FROM 2D IMAGE PAIR Lukas Sroba, Rudolf Ravas 162 International Journal "Information Content and Processing", Volume 1, Number 2, 2014 IMPACT OF SUBPIXEL PARADIGM ON DETERMINATION OF 3D POSITION FROM 2D IMAGE PAIR Lukas Sroba, Rudolf Ravas Abstract:

More information

Fast Outlier Rejection by Using Parallax-Based Rigidity Constraint for Epipolar Geometry Estimation

Fast Outlier Rejection by Using Parallax-Based Rigidity Constraint for Epipolar Geometry Estimation Fast Outlier Rejection by Using Parallax-Based Rigidity Constraint for Epipolar Geometry Estimation Engin Tola 1 and A. Aydın Alatan 2 1 Computer Vision Laboratory, Ecóle Polytechnique Fédéral de Lausanne

More information

COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION

COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION Mr.V.SRINIVASA RAO 1 Prof.A.SATYA KALYAN 2 DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING PRASAD V POTLURI SIDDHARTHA

More information

Particle Tracking. For Bulk Material Handling Systems Using DEM Models. By: Jordan Pease

Particle Tracking. For Bulk Material Handling Systems Using DEM Models. By: Jordan Pease Particle Tracking For Bulk Material Handling Systems Using DEM Models By: Jordan Pease Introduction Motivation for project Particle Tracking Application to DEM models Experimental Results Future Work References

More information

Flexible Calibration of a Portable Structured Light System through Surface Plane

Flexible Calibration of a Portable Structured Light System through Surface Plane Vol. 34, No. 11 ACTA AUTOMATICA SINICA November, 2008 Flexible Calibration of a Portable Structured Light System through Surface Plane GAO Wei 1 WANG Liang 1 HU Zhan-Yi 1 Abstract For a portable structured

More information

External camera calibration for synchronized multi-video systems

External camera calibration for synchronized multi-video systems External camera calibration for synchronized multi-video systems Ivo Ihrke Lukas Ahrenberg Marcus Magnor Max-Planck-Institut für Informatik D-66123 Saarbrücken ihrke@mpi-sb.mpg.de ahrenberg@mpi-sb.mpg.de

More information

Multiview Stereo COSC450. Lecture 8

Multiview Stereo COSC450. Lecture 8 Multiview Stereo COSC450 Lecture 8 Stereo Vision So Far Stereo and epipolar geometry Fundamental matrix captures geometry 8-point algorithm Essential matrix with calibrated cameras 5-point algorithm Intersect

More information

Visualization 2D-to-3D Photo Rendering for 3D Displays

Visualization 2D-to-3D Photo Rendering for 3D Displays Visualization 2D-to-3D Photo Rendering for 3D Displays Sumit K Chauhan 1, Divyesh R Bajpai 2, Vatsal H Shah 3 1 Information Technology, Birla Vishvakarma mahavidhyalaya,sumitskc51@gmail.com 2 Information

More information

Structure from motion

Structure from motion Structure from motion Structure from motion Given a set of corresponding points in two or more images, compute the camera parameters and the 3D point coordinates?? R 1,t 1 R 2,t R 2 3,t 3 Camera 1 Camera

More information

Simultaneous surface texture classification and illumination tilt angle prediction

Simultaneous surface texture classification and illumination tilt angle prediction Simultaneous surface texture classification and illumination tilt angle prediction X. Lladó, A. Oliver, M. Petrou, J. Freixenet, and J. Martí Computer Vision and Robotics Group - IIiA. University of Girona

More information

Passive 3D Photography

Passive 3D Photography SIGGRAPH 2000 Course on 3D Photography Passive 3D Photography Steve Seitz Carnegie Mellon University University of Washington http://www.cs cs.cmu.edu/~ /~seitz Visual Cues Shading Merle Norman Cosmetics,

More information

Face Cyclographs for Recognition

Face Cyclographs for Recognition Face Cyclographs for Recognition Guodong Guo Department of Computer Science North Carolina Central University E-mail: gdguo@nccu.edu Charles R. Dyer Computer Sciences Department University of Wisconsin-Madison

More information

3D FACE RECONSTRUCTION BASED ON EPIPOLAR GEOMETRY

3D FACE RECONSTRUCTION BASED ON EPIPOLAR GEOMETRY IJDW Volume 4 Number January-June 202 pp. 45-50 3D FACE RECONSRUCION BASED ON EPIPOLAR GEOMERY aher Khadhraoui, Faouzi Benzarti 2 and Hamid Amiri 3,2,3 Signal, Image Processing and Patterns Recognition

More information

Chapter 3 Image Registration. Chapter 3 Image Registration

Chapter 3 Image Registration. Chapter 3 Image Registration Chapter 3 Image Registration Distributed Algorithms for Introduction (1) Definition: Image Registration Input: 2 images of the same scene but taken from different perspectives Goal: Identify transformation

More information

Video Mosaics for Virtual Environments, R. Szeliski. Review by: Christopher Rasmussen

Video Mosaics for Virtual Environments, R. Szeliski. Review by: Christopher Rasmussen Video Mosaics for Virtual Environments, R. Szeliski Review by: Christopher Rasmussen September 19, 2002 Announcements Homework due by midnight Next homework will be assigned Tuesday, due following Tuesday.

More information

Towards the completion of assignment 1

Towards the completion of assignment 1 Towards the completion of assignment 1 What to do for calibration What to do for point matching What to do for tracking What to do for GUI COMPSCI 773 Feature Point Detection Why study feature point detection?

More information

1D camera geometry and Its application to circular motion estimation. Creative Commons: Attribution 3.0 Hong Kong License

1D camera geometry and Its application to circular motion estimation. Creative Commons: Attribution 3.0 Hong Kong License Title D camera geometry and Its application to circular motion estimation Author(s Zhang, G; Zhang, H; Wong, KKY Citation The 7th British Machine Vision Conference (BMVC, Edinburgh, U.K., 4-7 September

More information

Object and Motion Recognition using Plane Plus Parallax Displacement of Conics

Object and Motion Recognition using Plane Plus Parallax Displacement of Conics Object and Motion Recognition using Plane Plus Parallax Displacement of Conics Douglas R. Heisterkamp University of South Alabama Mobile, AL 6688-0002, USA dheister@jaguar1.usouthal.edu Prabir Bhattacharya

More information

Lecture 3: Camera Calibration, DLT, SVD

Lecture 3: Camera Calibration, DLT, SVD Computer Vision Lecture 3 23--28 Lecture 3: Camera Calibration, DL, SVD he Inner Parameters In this section we will introduce the inner parameters of the cameras Recall from the camera equations λx = P

More information

Feature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies

Feature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies Feature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies M. Lourakis, S. Tzurbakis, A. Argyros, S. Orphanoudakis Computer Vision and Robotics Lab (CVRL) Institute of

More information

Plane-based Calibration Algorithm for Multi-camera Systems via Factorization of Homography Matrices

Plane-based Calibration Algorithm for Multi-camera Systems via Factorization of Homography Matrices Plane-based Calibration Algorithm for Multi-camera Systems via Factorization of Homography Matrices Toshio Ueshiba Fumiaki Tomita National Institute of Advanced Industrial Science and Technology (AIST)

More information

Multi-stable Perception. Necker Cube

Multi-stable Perception. Necker Cube Multi-stable Perception Necker Cube Spinning dancer illusion, Nobuyuki Kayahara Multiple view geometry Stereo vision Epipolar geometry Lowe Hartley and Zisserman Depth map extraction Essential matrix

More information

Structure from Motion. Prof. Marco Marcon

Structure from Motion. Prof. Marco Marcon Structure from Motion Prof. Marco Marcon Summing-up 2 Stereo is the most powerful clue for determining the structure of a scene Another important clue is the relative motion between the scene and (mono)

More information

Camera Calibration with a Simulated Three Dimensional Calibration Object

Camera Calibration with a Simulated Three Dimensional Calibration Object Czech Pattern Recognition Workshop, Tomáš Svoboda (Ed.) Peršlák, Czech Republic, February 4, Czech Pattern Recognition Society Camera Calibration with a Simulated Three Dimensional Calibration Object Hynek

More information

CS6670: Computer Vision

CS6670: Computer Vision CS6670: Computer Vision Noah Snavely Lecture 7: Image Alignment and Panoramas What s inside your fridge? http://www.cs.washington.edu/education/courses/cse590ss/01wi/ Projection matrix intrinsics projection

More information

Epipolar Geometry in Stereo, Motion and Object Recognition

Epipolar Geometry in Stereo, Motion and Object Recognition Epipolar Geometry in Stereo, Motion and Object Recognition A Unified Approach by GangXu Department of Computer Science, Ritsumeikan University, Kusatsu, Japan and Zhengyou Zhang INRIA Sophia-Antipolis,

More information

Video Alignment. Final Report. Spring 2005 Prof. Brian Evans Multidimensional Digital Signal Processing Project The University of Texas at Austin

Video Alignment. Final Report. Spring 2005 Prof. Brian Evans Multidimensional Digital Signal Processing Project The University of Texas at Austin Final Report Spring 2005 Prof. Brian Evans Multidimensional Digital Signal Processing Project The University of Texas at Austin Omer Shakil Abstract This report describes a method to align two videos.

More information

MERGING POINT CLOUDS FROM MULTIPLE KINECTS. Nishant Rai 13th July, 2016 CARIS Lab University of British Columbia

MERGING POINT CLOUDS FROM MULTIPLE KINECTS. Nishant Rai 13th July, 2016 CARIS Lab University of British Columbia MERGING POINT CLOUDS FROM MULTIPLE KINECTS Nishant Rai 13th July, 2016 CARIS Lab University of British Columbia Introduction What do we want to do? : Use information (point clouds) from multiple (2+) Kinects

More information

Multiple View Geometry in Computer Vision Second Edition

Multiple View Geometry in Computer Vision Second Edition Multiple View Geometry in Computer Vision Second Edition Richard Hartley Australian National University, Canberra, Australia Andrew Zisserman University of Oxford, UK CAMBRIDGE UNIVERSITY PRESS Contents

More information

Two-view geometry Computer Vision Spring 2018, Lecture 10

Two-view geometry Computer Vision Spring 2018, Lecture 10 Two-view geometry http://www.cs.cmu.edu/~16385/ 16-385 Computer Vision Spring 2018, Lecture 10 Course announcements Homework 2 is due on February 23 rd. - Any questions about the homework? - How many of

More information

A MOTION MODEL BASED VIDEO STABILISATION ALGORITHM

A MOTION MODEL BASED VIDEO STABILISATION ALGORITHM A MOTION MODEL BASED VIDEO STABILISATION ALGORITHM N. A. Tsoligkas, D. Xu, I. French and Y. Luo School of Science and Technology, University of Teesside, Middlesbrough, TS1 3BA, UK E-mails: tsoligas@teihal.gr,

More information

Vision Review: Image Formation. Course web page:

Vision Review: Image Formation. Course web page: Vision Review: Image Formation Course web page: www.cis.udel.edu/~cer/arv September 10, 2002 Announcements Lecture on Thursday will be about Matlab; next Tuesday will be Image Processing The dates some

More information

Multiple Views Geometry

Multiple Views Geometry Multiple Views Geometry Subhashis Banerjee Dept. Computer Science and Engineering IIT Delhi email: suban@cse.iitd.ac.in January 2, 28 Epipolar geometry Fundamental geometric relationship between two perspective

More information

Recovering structure from a single view Pinhole perspective projection

Recovering structure from a single view Pinhole perspective projection EPIPOLAR GEOMETRY The slides are from several sources through James Hays (Brown); Silvio Savarese (U. of Michigan); Svetlana Lazebnik (U. Illinois); Bill Freeman and Antonio Torralba (MIT), including their

More information

Hand-Eye Calibration from Image Derivatives

Hand-Eye Calibration from Image Derivatives Hand-Eye Calibration from Image Derivatives Abstract In this paper it is shown how to perform hand-eye calibration using only the normal flow field and knowledge about the motion of the hand. The proposed

More information

Structure from Motion. Introduction to Computer Vision CSE 152 Lecture 10

Structure from Motion. Introduction to Computer Vision CSE 152 Lecture 10 Structure from Motion CSE 152 Lecture 10 Announcements Homework 3 is due May 9, 11:59 PM Reading: Chapter 8: Structure from Motion Optional: Multiple View Geometry in Computer Vision, 2nd edition, Hartley

More information

Recovering light directions and camera poses from a single sphere.

Recovering light directions and camera poses from a single sphere. Title Recovering light directions and camera poses from a single sphere Author(s) Wong, KYK; Schnieders, D; Li, S Citation The 10th European Conference on Computer Vision (ECCV 2008), Marseille, France,

More information

Euclidean Reconstruction Independent on Camera Intrinsic Parameters

Euclidean Reconstruction Independent on Camera Intrinsic Parameters Euclidean Reconstruction Independent on Camera Intrinsic Parameters Ezio MALIS I.N.R.I.A. Sophia-Antipolis, FRANCE Adrien BARTOLI INRIA Rhone-Alpes, FRANCE Abstract bundle adjustment techniques for Euclidean

More information

Image correspondences and structure from motion

Image correspondences and structure from motion Image correspondences and structure from motion http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2017, Lecture 20 Course announcements Homework 5 posted.

More information

Stereo Vision. MAN-522 Computer Vision

Stereo Vision. MAN-522 Computer Vision Stereo Vision MAN-522 Computer Vision What is the goal of stereo vision? The recovery of the 3D structure of a scene using two or more images of the 3D scene, each acquired from a different viewpoint in

More information

Self-Calibration of a Rotating Camera with Varying Intrinsic Parameters

Self-Calibration of a Rotating Camera with Varying Intrinsic Parameters Self-Calibration of a Rotating Camera with Varying Intrinsic Parameters L. de Agapito, E. Hayman and I. Reid Department of Engineering Science, Oxford University Parks Road, Oxford, OX1 3PJ, UK [lourdes

More information

Local Image Registration: An Adaptive Filtering Framework

Local Image Registration: An Adaptive Filtering Framework Local Image Registration: An Adaptive Filtering Framework Gulcin Caner a,a.murattekalp a,b, Gaurav Sharma a and Wendi Heinzelman a a Electrical and Computer Engineering Dept.,University of Rochester, Rochester,

More information

Dense 3-D Reconstruction of an Outdoor Scene by Hundreds-baseline Stereo Using a Hand-held Video Camera

Dense 3-D Reconstruction of an Outdoor Scene by Hundreds-baseline Stereo Using a Hand-held Video Camera Dense 3-D Reconstruction of an Outdoor Scene by Hundreds-baseline Stereo Using a Hand-held Video Camera Tomokazu Satoy, Masayuki Kanbaray, Naokazu Yokoyay and Haruo Takemuraz ygraduate School of Information

More information

A General Expression of the Fundamental Matrix for Both Perspective and Affine Cameras

A General Expression of the Fundamental Matrix for Both Perspective and Affine Cameras A General Expression of the Fundamental Matrix for Both Perspective and Affine Cameras Zhengyou Zhang* ATR Human Information Processing Res. Lab. 2-2 Hikari-dai, Seika-cho, Soraku-gun Kyoto 619-02 Japan

More information

Camera Geometry II. COS 429 Princeton University

Camera Geometry II. COS 429 Princeton University Camera Geometry II COS 429 Princeton University Outline Projective geometry Vanishing points Application: camera calibration Application: single-view metrology Epipolar geometry Application: stereo correspondence

More information

Self-Calibration from Multiple Views with a Rotating Camera

Self-Calibration from Multiple Views with a Rotating Camera Self-Calibration from Multiple Views with a Rotating Camera Richard I. Hartley G.E. CRD, Schenectady, NY, 12301. Email : hartley@crd.ge.com Abstract. A newpractical method is given for the self-calibration

More information

Center for Automation Research, University of Maryland. The independence measure is the residual normal

Center for Automation Research, University of Maryland. The independence measure is the residual normal Independent Motion: The Importance of History Robert Pless, Tomas Brodsky, and Yiannis Aloimonos Center for Automation Research, University of Maryland College Park, MD, 74-375 Abstract We consider a problem

More information

Rectification for Any Epipolar Geometry

Rectification for Any Epipolar Geometry Rectification for Any Epipolar Geometry Daniel Oram Advanced Interfaces Group Department of Computer Science University of Manchester Mancester, M13, UK oramd@cs.man.ac.uk Abstract This paper proposes

More information

An Overview of Matchmoving using Structure from Motion Methods

An Overview of Matchmoving using Structure from Motion Methods An Overview of Matchmoving using Structure from Motion Methods Kamyar Haji Allahverdi Pour Department of Computer Engineering Sharif University of Technology Tehran, Iran Email: allahverdi@ce.sharif.edu

More information

Direct Obstacle Detection and Motion from Spatio-Temporal Derivatives. Proceedings of the 6th Int. Conf. on Computer Analysis of Images and Patterns (CAIP'95), pp. 874-879, Prague, Czech Republic, Sep 1995.

Octree-Based Obstacle Representation and Registration for Real-Time. Jaewoong Kim, Daesik Kim, Junghyun Seo, Sukhan Lee and Yeonchool Park*. Intelligent System Research Center (ISRC) & Nano and Intelligent…

Self-Calibration of a Rotating Camera With a Translational Offset. Qiang Ji and Songtao Dai. IEEE Transactions on Robotics and Automation, Vol. 20, No. 1, February. Abstract: Camera self-calibration, based…

Global Flow Estimation. Lecture 9. Motion Models: image transformations to relate two images; 3D rigid motion; perspective & orthographic transformation; planar scene assumption. Transformations: translation, rotation, rigid, affine, homography, pseudo…

Camera Registration in a 3D City Model. Min Ding. CS294-6 Final Presentation, Dec 13, 2006. Goal: Reconstruct a 3D city model usable for virtual walk- and fly-throughs (virtual reality, urban planning, simulation…)

3D Computer Vision: Structure from Motion. Prof. Didier Stricker, Kaiserslautern University, http://ags.cs.uni-kl.de/. DFKI Deutsches Forschungszentrum für Künstliche Intelligenz, http://av.dfki.de

Recent Trend for Visual Media Synthesis and Analysis. AR Display for Observing Sports Events based on Camera Tracking Using Pattern of Ground. Akihito Enomoto, Hideo Saito (saito@hvrl.ics.keio.ac.jp, www.hvrl.ics.keio.ac.jp). HVRL: Hyper Vision Research Lab.

Partial Calibration and Mirror Shape Recovery for Non-Central Catadioptric Systems. Abstract: In this paper we present a method for mirror shape recovery and partial calibration for non-central catadioptric…

Computer Vision Lecture 17: Epipolar Geometry & Stereo Basics. 13.01.2015. Bastian Leibe, RWTH Aachen. http://www.vision.rwth-aachen.de, leibe@vision.rwth-aachen.de. Announcements: Seminar in the summer semester…

Perception and Action using Multilinear Forms. Anders Heyden, Gunnar Sparr, Kalle Åström. Dept of Mathematics, Lund University, Box 118, S-221 00 Lund, Sweden. Email: {heyden,gunnar,kalle}@maths.lth.se

3D Visualization through Planar Pattern Based Augmented Reality. National Technical University of Athens, School of Rural and Surveying Engineers, Department of Topography, Laboratory of Photogrammetry. Dr.…

Visual Hulls from Single Uncalibrated Snapshots Using Two Planar Mirrors. Keith Forbes¹, Anthon Voigt², Ndimi Bodika². ¹ Digital Image Processing Group, ² Automation and Informatics Group, Department of Electrical…