Camera Calibration from Dynamic Silhouettes Using Motion Barcodes
Camera Calibration from Dynamic Silhouettes Using Motion Barcodes

Gil Ben-Artzi    Yoni Kasten    Shmuel Peleg    Michael Werman
School of Computer Science and Engineering
The Hebrew University of Jerusalem, Israel

Abstract

Computing the epipolar geometry between cameras with very different viewpoints is often problematic, as matching points are hard to find. In these cases, it has been proposed to use information from dynamic objects in the scene to suggest point and line correspondences. We propose a speedup of about two orders of magnitude, as well as an increase in robustness and accuracy, for methods computing epipolar geometry from dynamic silhouettes. This improvement is based on a new temporal signature: the motion barcode for lines. The motion barcode of a line is a binary temporal sequence, indicating for each frame the existence of at least one foreground pixel on that line. The motion barcodes of two corresponding epipolar lines are very similar, so the search for corresponding epipolar lines can be limited to lines having similar barcodes. The use of motion barcodes leads to increased speed, accuracy, and robustness in computing the epipolar geometry.

1. Introduction

Calibration of multi-camera systems is normally computed by finding corresponding feature points between images taken by these cameras. When not enough feature points can be found, e.g. when the camera viewpoints vary greatly, the epipolar geometry can be computed from silhouettes of moving objects that are visible in videos captured by the two cameras. The silhouettes at one time instance are used to suggest matching epipolar lines, which are used to propose a fundamental matrix that is verified over all frames. The best methods for computing the fundamental matrix use tangents to the dynamic silhouette as candidates for epipolar lines [27]. Our approach presents a speedup of about two orders of magnitude for these methods, and significantly improves accuracy and robustness.
This speedup is obtained by requiring candidates for matching epipolar lines to share a temporal signature, the motion barcode. Motion barcodes were first introduced for points in [5]. The motion barcode of a line is a binary temporal sequence, indicating for each frame the existence of at least one foreground pixel on that line. We show that the correlation between the motion barcodes of corresponding epipolar lines is high. By testing as possible matches only pairs of lines whose motion barcode correlation is high, a speedup of about two orders of magnitude is obtained. Figure 1 shows matching epipolar lines extracted using our approach. Following [27], we use a RANSAC approach to test possible matching pairs of epipolar lines and compute the epipolar geometry.

Figure 1. When two cameras have very different viewpoints, as in this example, appearance cannot be used for calibration. Instead, calibration is possible from matching pairs of epipolar lines that can be extracted efficiently from moving silhouettes. The yellow lines are the epipolar lines proposed by our method, while the red lines are the ground-truth epipolar lines. The corresponding silhouettes are displayed at the bottom.

This paper is organized as follows. Section 1.1 describes relevant prior work. Section 2 introduces the theoretical background. Section 3 presents the motion barcodes of lines. Section 4 shows how to match epipolar lines based on dynamic silhouettes. Section 5 presents an iterative computation of the fundamental matrix based on the motion barcode. Section 6 shows our results on both synthetic and real sequences.
1.1. Prior Work

Extracting geometrical information from the motion of silhouettes includes shape-from-silhouettes [7, 14, 3] and camera calibration [18, 27, 6, 25, 33]. In shape-from-silhouettes, the goal is to recover the visual hull [20, 23] of the object. If the cameras are calibrated, this task is relatively straightforward, as each individual viewing cone [15] can be back-projected, and the visual hull is the intersection of these cones.

The case of uncalibrated cameras has also been investigated, where the goal is to recover the epipolar geometry. The first step is to establish correspondences between special points on the silhouette boundaries, called frontier points [9], across the different views. These points are images of object points that are tangent to an epipolar plane. Given corresponding frontier points, spatial constraints resulting from matching epipolar tangents [26] are used to recover the epipolar geometry. Corresponding frontier points and silhouette tangents can be found using robust estimation procedures such as RANSAC [13]: matching frontier points, or the directions of four epipolar tangent lines, are initially guessed. Furukawa et al. [16], assuming orthographic projection, match frontier points using RANSAC. They used the distances between parallel tangent lines on the silhouettes as a geometric measure for matching. Given the epipoles and an accurate tangent envelope of the silhouettes, frontier points can easily be matched using the two outermost epipolar tangents. This property was deployed by Wong and Cipolla [30] for turntable motion. The most relevant previous work is Sinha and Pollefeys [27], addressing projective projection. They propose a RANSAC-based search of possible epipoles, where a proposed epipole in each of the two corresponding images is generated from the intersection of two lines randomly selected from the tangent envelope.

Calibration without explicitly matching tangent epipolar lines has also been considered.
[6] used constraints based on the back-projection of silhouette boundaries in multiple views. [32] jointly optimized the 3D positions of frontier points and the camera parameters in a bundle adjustment. However, both methods require a good initialization of silhouette boundaries and camera parameters. Hernandez [18] also proposed constraints based on the back-projection of silhouettes for maximizing silhouette coherence, but his method is limited to turntable motion.

Binary temporal signatures of pixels, based on the motion of the objects in the scene, have been introduced before. Ermis et al. [12] deploy such features to find accurate correspondences between pixels across distributed cameras, under the assumption of a distant, almost planar scene. Drouin et al. [11] matched 2D points between a video projector and a digital camera. They require a planar surface and the same ordering of pixels across views. Ben-Artzi et al. [5] introduced a method to match events across different views even in the case of significant parallax and occlusions. However, that approach cannot be applied to match pixels for camera calibration, as its localization is very inaccurate, as explained in Section 3. Using a temporal histogram as a temporal feature of a pixel was introduced in [19], but it is effective only for objects that are static for substantial periods.

Figure 2. The geometry of two views with a tangent epipolar plane and a frontier point. Π is the tangent epipolar plane, and l, l' are tangent epipolar lines in Π. The epipolar plane is tangent to the object at the frontier point P.

2. Theoretical Background

The geometric relation between corresponding silhouettes across views is based on frontier points and epipolar tangencies [22, 8, 21]. The geometry of two views containing silhouettes is presented in Fig. 2. For the rest of the paper, let candidate points be image points that are on the boundary of the silhouette as well as on the boundary of its convex hull.
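As a concrete illustration of the candidate-point definition above, the sketch below keeps only the silhouette boundary points that are vertices of the boundary's convex hull. This is not the authors' code: `convex_hull` and `candidate_points` are hypothetical helper names, and keeping only hull *vertices* (rather than all points lying on hull edges) is a simplifying assumption.

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull; points: iterable of (x, y) tuples."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for seq, out in ((pts, lower), (reversed(pts), upper)):
        for p in seq:
            while len(out) >= 2 and cross(out[-2], out[-1], p) <= 0:
                out.pop()
            out.append(p)
    # Concatenate lower and upper hulls, dropping the duplicated endpoints.
    return lower[:-1] + upper[:-1]

def candidate_points(boundary):
    """Boundary points that are also vertices of the convex hull boundary."""
    hull = set(convex_hull(boundary))
    return [p for p in boundary if p in hull]
```

For a square silhouette boundary with one concave notch point, only the four corners survive as candidate points.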
C and C' are the contours of the object in 3D. These contours project to the silhouette boundaries S and S'. The two contours intersect at the frontier point P. The projections of P onto the two views are the candidate points p and p'. The three points P, p, p' span a tangent epipolar plane Π between the two views. The points p and p' must lie on the corresponding tangent lines l and l', and P is the location where the tangent epipolar plane Π is tangent to the surface; e, e' are the epipoles. The frontier points are the only true corresponding points between the boundaries S and S'. If we have accurate tangents to the silhouettes and the location of the epipole is known, then the epipolar tangent lines give the corresponding points p and p'. This idea was traditionally used as a spatial cost function between corresponding tangential epipolar lines and points; see for example [16, 27]. Finding frontier points without the epipole locations is difficult [24]. For a video sequence, when the location of at
least two frontier points is known, their tangent lines are epipolar lines. They can be used to calculate the epipole and the locations of the other frontier points in all the other frames. Alternatively, if the epipoles are known, we can use the tangent lines to the silhouettes to locate the frontier points. It follows that either the frontier points or the epipoles are needed in order to extract the epipolar lines. Here we introduce a different approach, which requires prior knowledge of neither frontier points nor epipoles in order to extract matching epipolar lines. We directly compute epipolar line correspondences using the motion observed simultaneously by the lines.

Figure 3. In dynamic scenes, the geometrical relation between pixels is characterized only up to corresponding epipolar lines. Each pixel in one video can correspond to different pixels at different times in the other video. For stationary cameras, the different pixels in the second view will always reside on the epipolar line.

Figure 4. The motion barcodes of two corresponding epipolar lines, l and l', in a video of a moving person at three time instances. If a point on an epipolar line is a projection of a foreground point at time t, then there exists a point on its corresponding epipolar line which is also a projection of a foreground point at the same time. In the figure, at times t = 2, 3 the two corresponding epipolar lines contain a point from a silhouette. This can be a different 3D point due to viewpoint differences, e.g. P1, P2. The motion barcode of both epipolar lines in this figure, b_l and b_l', is [0,1,1].

Figure 5. Uniform sampling of lines from the tangent envelope. (a) Lines sampled every 1°. (b) Lines sampled every 4°.

3. Motion Barcode: Temporal Signature of Lines

Given two frames captured at the same time from different viewpoints, two corresponding pixels view a single 3D point.
However, in a dynamic scene, a pixel in one view corresponds to different pixels in the other view at different times, all located on the corresponding epipolar line. Fig. 3 illustrates a typical case. At time t = 1, a single pixel in the right view corresponds to some pixel in the left view. At time t = 2, due to the motion of the object in the scene, the same pixel corresponds to a different pixel in the other view. For video sequences captured by stationary cameras, the corresponding pixels will always reside on corresponding epipolar lines.

It follows that if an epipolar line contains at least one silhouette pixel at time t, then its corresponding epipolar line should contain such a pixel at the same time. This is illustrated in Fig. 4: at times t = 2, 3 there are points from objects that project onto the corresponding epipolar lines. If a point on line l is part of a silhouette, this point, or another silhouette point occluding it, will be seen on line l'. Alternatively, the silhouette could be blocked by a background object or be out of the frame.

The motion barcode of a line l, b_l(t), indicates for each frame t the existence of at least one foreground pixel on that line: b_l(t) = 1 if the line intersects a silhouette at frame t, and b_l(t) = 0 otherwise. The motion barcodes of two corresponding epipolar lines are very similar; differences occur only in cases of occlusion. The temporal similarity between two lines l and l' is defined as the correlation between their motion barcodes:

d_t(l, l') = corr(b_l, b_l')    (1)

4. Epipolar Geometry by Matching Lines

The epipolar geometry can be computed from 3 pairs of corresponding epipolar lines [17]. The search space for matching epipolar lines across views is very large if we consider all possible pairs of lines. This search can be reduced to fewer lines by using only candidate lines, lines tangent to
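As a concrete illustration of the definitions above, the following sketch computes a line's motion barcode from a stack of silhouette masks and evaluates Eq. (1) as a Pearson correlation. This is not the authors' code: `motion_barcode` and `temporal_similarity` are hypothetical helper names, and the line is assumed to be given as a list of sampled integer pixel coordinates.

```python
import numpy as np

def motion_barcode(masks, line_pts):
    """Binary temporal signature of an image line (Section 3).

    masks: (T, H, W) boolean foreground-silhouette masks of a video.
    line_pts: (K, 2) integer (row, col) pixel coordinates sampled along the line.
    Returns b: (T,) array with b[t] = 1 iff the line meets a silhouette at frame t.
    """
    rows, cols = line_pts[:, 0], line_pts[:, 1]
    return masks[:, rows, cols].any(axis=1).astype(float)

def temporal_similarity(b1, b2):
    """d_t(l, l') of Eq. (1): Pearson correlation of two motion barcodes."""
    b1 = b1 - b1.mean()
    b2 = b2 - b2.mean()
    denom = np.linalg.norm(b1) * np.linalg.norm(b2)
    return float(b1 @ b2 / denom) if denom > 0 else 0.0
```

For the three-frame example of Fig. 4, a line intersecting the silhouette at frames 2 and 3 would yield the barcode [0, 1, 1], and two identical barcodes have similarity 1.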
the tangent envelope of the silhouette. The tangent envelope consists of points that are on the silhouette boundary as well as on the boundary of its convex hull. We follow the work of [27], checking for possible correspondence only lines on the tangent envelope. Using only candidate lines is justified, as the projection of a frontier point is on the tangent envelope.

We select several corresponding pairs of frames from the video sequences, so that the pairs are sufficiently different from each other. For each pair of frames, we sample K candidate lines from the tangent envelope of its silhouettes. We compute the correlation between the motion barcodes for all pairs of candidate lines from the two corresponding images, resulting in K² correlations per pair of frames. From every pair of frames we select the single pair of epipolar lines with the highest barcode correlation. Fig. 5 shows all candidate lines, and Fig. 6 shows the pairs of candidate lines having the highest barcode correlation.

Figure 6. Finding corresponding epipolar lines by their motion barcode. Every pair of frames contributes one possible match. The dashed lines are the true epipolar lines, and the green lines are the candidate pairs having the highest barcode correlation.

We compute the epipolar geometry from three matching pairs based on [17]. The computation is by RANSAC, similarly to [27]. The fundamental matrix is then fully optimized as described in Section 5. The matching is carried out in two phases: an offline phase, in which the motion barcodes of the tangent lines are computed, and an online phase, in which the actual matching is carried out by computing the correlation between motion barcodes of pairs of lines and computing the epipolar geometry using RANSAC. The overall efficiency depends mainly on the accuracy of the candidate matches, as it affects the number of required iterations in the RANSAC phase.
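The per-frame-pair selection described above can be sketched as follows, assuming the K candidate lines of each view are given as rows of a barcode matrix; `best_line_pair` is an illustrative name, not from the paper.

```python
import numpy as np

def best_line_pair(B1, B2):
    """Pick the candidate epipolar-line match for one pair of frames.

    B1: (K, T) motion barcodes of the K candidate lines in view 1.
    B2: (K, T) motion barcodes of the K candidate lines in view 2.
    Returns (i, j, c): the two line indices with the highest barcode
    correlation c among all K^2 pairs.
    """
    # Z-score the rows so a dot product equals the Pearson correlation.
    Z1 = B1 - B1.mean(axis=1, keepdims=True)
    Z2 = B2 - B2.mean(axis=1, keepdims=True)
    Z1 /= np.linalg.norm(Z1, axis=1, keepdims=True) + 1e-12
    Z2 /= np.linalg.norm(Z2, axis=1, keepdims=True) + 1e-12
    C = Z1 @ Z2.T                                  # (K, K) correlation matrix
    i, j = np.unravel_index(np.argmax(C), C.shape)
    return int(i), int(j), float(C[i, j])
```

Repeating this for every selected pair of frames yields one candidate epipolar-line match per frame pair, from which RANSAC then samples triplets.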
Details are in Section 6.3.

5. Temporal Optimization of the Fundamental Matrix

Existing optimization techniques for computing the fundamental matrix are based on minimizing a spatial cost function, without taking the temporal dimension into account. We present a technique based on both spatial and temporal cost functions (Eq. 1). We assume a set of corresponding points, presumably projections of frontier points, {(x_i, x'_i)} for i = 1..M, a set of corresponding epipolar lines {(l_i, l'_i)} for i = 1..N, and an initial estimate of the fundamental matrix F.

The optimization is iterative. In the first step we optimize the point correspondences based on the lines, using the geometric reprojection error [17] as the spatial cost function. In the second step we optimize the epipolar line correspondences based on the given points, using the temporal cost function (Eq. 1). We optimize the directions of the epipolar lines for each pair of corresponding points. Based on the lines matched in this step, we estimate the epipoles and an epipolar line homography. We then evaluate a set of corresponding points and obtain an estimate of the fundamental matrix. The process is as follows:

Step one:

1. Minimize the reprojection error based on {(x_i, x'_i)}:

   min Σ_i d(x_i, x̂_i)² + d(x'_i, x̂'_i)²   s.t.   x̂'_iᵀ F̂ x̂_i = 0 for all i

   This minimization uses the Levenberg-Marquardt procedure and yields a new set of points and a fundamental matrix.

2. Set l'_i = F̂ x̂_i and l_i = F̂ᵀ x̂'_i.

Step two:

1. For each pair of lines, minimize

   C_l(l̂, l̂') = d_s(l, l̂) + d_s(l', l̂') + d_t(l̂, l̂')   (2)

   where d_s measures the angular deviation between lines, and d_t is the barcode correlation (Eq. 1). d_s ensures the lines are within an angle difference of no more than Θ; the choice of Θ is discussed below. l̂, l̂' are sampled uniformly from [−Θ, Θ] around l, l'. We take the maximal match; if there is more than one maximum, the one with the minimal angle difference is selected.

2.
Estimate new epipoles e, e' and an epipolar line homography from the matched lines {(l̂_i, l̂'_i)}.

3. Set {(x_i, x'_i)} by projecting onto the nearest lines l, l'. Estimate F from the epipoles and the epipolar line homography.

The process terminates when the deviation of the estimated epipoles is small enough or a maximum number of iterations is exceeded.

The choice of the angular tolerance Θ defines the region in which we look for the newly estimated lines. It depends on the epipolar envelope and the required probability for
locating the line [29]. Direct modeling of the epipolar envelope is difficult, and it is therefore evaluated empirically; see [10, 31, 17]. In our implementation we set Θ to 0.2, which results in an accurate estimation. This reflects our assumption that the distortion is low; the specific choice can be adjusted according to the needs.

6. Experiments

Our approach was validated on synthetic and real sequences. We compared our method with the state-of-the-art method [27], where the fundamental matrix is computed by RANSAC-based sampling of epipolar lines. The evaluation used the following datasets: Kung-Fu Girl [2], Boxer [4], Street Dancer [28], and Dancing Girl [1]. Fig. 7 shows images from the datasets, and Table 1 gives their details.

Figure 7. The datasets used in the experiments. (a) The synthetic Kung-Fu Girl dataset. (b) The Boxer dataset. (c) The Street Dancer dataset. (d) The Dancing Girl dataset.

Table 2. The expected number of non-linear optimizations required to reach a given accuracy of the fundamental matrix, for our method and for Sinha [27], on each dataset. Accuracy is measured using the symmetric epipolar distance with respect to ground-truth points. The best hypothesis is selected every 1000 RANSAC iterations and is further optimized using a non-linear (LM) method. In each dataset, the number of optimizations is averaged over all camera pairs. Empty cells indicate that the required accuracy was not attained.

We compared the accuracy and efficiency of the two methods. The accuracy of the fundamental matrix in all experiments is measured by the symmetric epipolar distance (error) [17] using ground-truth matching points. The symmetric epipolar distance is the distance between each point and the epipolar line corresponding to the other point. The acquisition of the ground-truth points is discussed in Subsection 6.3.
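The symmetric epipolar distance used throughout the evaluation is a standard measure [17] and can be computed as below; `symmetric_epipolar_distance` is an illustrative helper name, and F is assumed to map view-1 points to epipolar lines in view 2.

```python
import numpy as np

def symmetric_epipolar_distance(F, x1, x2):
    """Mean symmetric epipolar distance of point matches under F.

    F: (3, 3) fundamental matrix with x2^T F x1 = 0 for true matches.
    x1, x2: (N, 2) corresponding points in views 1 and 2.
    For each match, the distance from the point to the epipolar line of
    its partner is measured in both images; the two distances are summed
    and the result is averaged over the N matches.
    """
    h1 = np.hstack([x1, np.ones((len(x1), 1))])   # homogeneous coordinates
    h2 = np.hstack([x2, np.ones((len(x2), 1))])
    l2 = h1 @ F.T                                  # epipolar lines in view 2
    l1 = h2 @ F                                    # epipolar lines in view 1
    d2 = np.abs(np.sum(l2 * h2, axis=1)) / np.linalg.norm(l2[:, :2], axis=1)
    d1 = np.abs(np.sum(l1 * h1, axis=1)) / np.linalg.norm(l1[:, :2], axis=1)
    return float(np.mean(d1 + d2))
```

For a rectified stereo pair, where corresponding points share a scanline, a one-pixel vertical offset in one image produces a symmetric distance of two pixels (one pixel per image).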
The efficiency of the methods is evaluated as follows. In both methods the fundamental matrix is computed using RANSAC sampling of epipolar lines. In each iteration, the symmetric epipolar distance of each hypothesis is evaluated. Every 1000 RANSAC iterations the best hypothesis is selected and optimized using the non-linear Levenberg-Marquardt (LM) procedure, as in [27]. The efficiency is measured by the number of non-linear optimization procedures required to reach a given accuracy (error): the fewer non-linear optimizations required, the more efficient the method. A detailed description is in Subsection 6.3.

There is a difference between the error used during RANSAC and the error we use for the final evaluation. During RANSAC, the quality of a hypothesis is evaluated based on inliers, as the ground truth is unknown. This error is usually lower than the error over ground-truth points. We used ground-truth points for an unbiased evaluation.

Table 1. Dataset properties. The Kung-Fu Girl dataset is synthetic; Boxer, Street Dancer, and Dancing Girl are real.

Figure 8. The ratio between our method and Sinha [27] of the number of non-linear optimization procedures required to reach a given fundamental matrix accuracy. The horizontal axis is the accuracy in terms of the desired symmetric epipolar distance of ground-truth points.

Efficiency. The expected number of non-linear LM optimization procedures required to reach a fundamental matrix with better accuracy than a predefined level is shown in Table 2. For each pair of cameras, we executed 500K RANSAC iterations, resulting in 500K hypotheses. Every 1000 RANSAC iterations the best hypothesis is selected and optimized non-linearly. The accuracy of the optimized fundamental matrix, in terms of the symmetric epipolar distance of ground-truth points, is recorded. The accuracy values after all non-linear optimization procedures from all camera pairs in the dataset form our samples. For example, in the Kung-Fu dataset we executed 500K RANSAC iterations for each of its 300 camera pairs, performed the corresponding LM optimizations, and collected 150,000 samples. We build the cumulative distribution function (cdf) of the error from all camera pairs; given the cdf, the expected number of samples is extracted. It can be seen that our method quickly converged to sub-pixel accuracy.

Fig. 8 shows the ratio between the required number of non-linear optimization procedures in our method and in Sinha's [27]. The horizontal dashed lines are at ratios of 10, 30, and 100. For an accuracy of 0.8 pixels, the median of the ratios between the required numbers of non-linear optimization procedures is 38; for an accuracy of 1.5 pixels, the median of the ratios is 17.

Table 3. Accuracy reached by each method for a fixed number of RANSAC samples (1K, 2K, 5K, 10K, 20K, and 100K hypotheses). Accuracy is after a non-linear optimization phase, measured with respect to ground-truth points.

Table 4. The best accuracy reached by each method on all camera pairs in each dataset after 500K RANSAC hypotheses. The accuracy is the median over all camera pairs of the best symmetric epipolar distance reached after the non-linear optimization phase.
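One plausible reading of "given the cdf, the expected number of samples is extracted" is a geometric model of independent optimization attempts; the sketch below makes this reading concrete. It is an assumption about the evaluation, not the paper's stated procedure, and `expected_optimizations` is an illustrative name.

```python
import numpy as np

def expected_optimizations(errors, target):
    """Expected number of non-linear optimizations to reach `target` accuracy.

    errors: 1-D sequence of symmetric epipolar distances, one per optimized
    hypothesis (the samples collected every 1000 RANSAC iterations).
    Treats each optimization as an independent draw from the empirical
    error distribution: if a fraction p of draws attains the target error
    (the empirical cdf at `target`), the expected number of draws until
    the first success is 1 / p.  Returns inf if no sample attains it.
    """
    p = np.mean(np.asarray(errors, dtype=float) <= target)
    return float(1.0 / p) if p > 0 else float("inf")
```

Under this model, an accuracy level attained by half the optimized hypotheses needs two optimizations on average, matching the "empty cell" convention for levels no sample reaches.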
Accuracy. We evaluated the best accuracy (minimal error) reached for a given number of RANSAC-generated hypotheses. For each pair of cameras in the dataset, we generated 500K RANSAC hypotheses with each method. We subdivided the hypotheses into equal-sized groups. From each group we selected the best hypothesis (lowest symmetric epipolar distance) with respect to the ground-truth points. We then applied non-linear optimization and measured the accuracy of the resulting fundamental matrix. The accuracy is the median over all optimized fundamental matrices. For example, in the Kung-Fu dataset we have 150,000 hypotheses. To evaluate the highest accuracy reached by 5K hypotheses, we divided them into 30 equal-size groups, optimized the best hypothesis from each group, and evaluated the median over the symmetric epipolar distances.

Figure 9. The fraction of camera pairs whose fundamental matrices reached a given symmetric epipolar distance. The accuracy is evaluated over 500K RANSAC iterations. The x-axis is the given accuracy; the y-axis is the fraction of camera pairs that reached this accuracy. The blue bars are our method and the red bars are Sinha's method. (a) The Kung-Fu dataset. (b) The Boxer dataset. (c) The Street Dancer dataset. (d) The Dancing Girl dataset.

Table 3 shows the results. It can be seen that for the Kung-Fu dataset, our method requires approximately 2K RANSAC iterations followed by one non-linear optimization procedure to reach sub-pixel accuracy. Using our approach, in less than 5K RANSAC iterations all datasets reached sub-pixel accuracy. Table 4 shows the best median accuracy reached by each method. As expected, the synthetic dataset has the best accuracy, 0.26, while the worst accuracy, 0.41, was on the Dancing Girl dataset, which has many errors in its silhouettes. We also evaluated the fraction of camera pairs whose fundamental matrices reached a given accuracy using all the samples, after the non-linear phase. The results are shown in Fig. 9.
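The grouping procedure above can be sketched as follows (the non-linear refinement of each group's best hypothesis is omitted here, and `best_per_budget` is an illustrative name):

```python
import numpy as np

def best_per_budget(errors, n_groups):
    """Median best error attainable with a fixed hypothesis budget.

    errors: per-hypothesis errors, in generation order.
    Splits the hypotheses into n_groups equal-sized groups, takes each
    group's minimum error (the hypothesis that would be selected for
    non-linear refinement), and returns the median of those minima.
    """
    groups = np.array_split(np.asarray(errors, dtype=float), n_groups)
    return float(np.median([g.min() for g in groups]))
```

With 150,000 hypotheses and a 5K-hypothesis budget, this corresponds to calling the function with n_groups = 30, as in the Kung-Fu example above.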
For the Kung-Fu dataset, 298 out of 300 camera pairs reached an accuracy of 1.5, including pairs where the cameras face each other. This is discussed in the next subsection. On average, the number of camera pairs for which a given accuracy was reached using our method is higher by a factor of 1.8 than the number reached with Sinha's method. The average is calculated over all camera pairs over all datasets.
Figure 10. When the epipole is at the center of the image, e.g. when two cameras are facing one another, it may not be possible to find epipolar lines. In this case the epipole is often inside the convex hull. In this example the convex hull is marked in blue, and the yellow point is the epipole. The red line is a ground-truth epipolar line. The green line is a hypothesized epipolar line.

6.1. Frames Lacking Frontier Points

Frames that lack frontier points are problematic for most tangent-based methods. This happens when the epipoles are inside the convex hull of the dynamic objects, a common case when the two cameras face each other. An example is illustrated in Fig. 10. Using our method, even when the pairs of cameras are facing each other, the fundamental matrix can still be recovered. This is possible as the object is moving, and there are often a few frames where the epipole is outside the convex hull. These few frames are enough for the calibration, due to the accuracy of the selected candidates for epipolar lines. For example, it can be seen in Fig. 9 that in the Kung-Fu dataset, for an accuracy of 1.5, our method fails for only two camera pairs, whereas Sinha's method fails on 78 camera pairs.

6.2. Ground-Truth Error vs. Inlier Error

The symmetric epipolar distance [17] is a quality measure for fundamental matrices, and is defined over a set of pairs of corresponding points across two images. In ordinary computations of the fundamental matrix, when no ground-truth data is known, the symmetric epipolar distance is calculated based on hypothesized inlier points. Since some inliers are often wrong correspondences, there is a significant difference between the error computed on inlier points and the error computed on ground-truth points (when available). As we have access to the ground truth in our datasets, we used the ground-truth points to measure the symmetric epipolar distance and evaluate our experiments.

6.3. Implementation Details

Precomputation of Motion Barcodes.
The motion barcodes were computed for points on the silhouette boundaries which are also on the convex hull boundary, called candidate points. 180 angles are sampled, every 2°; each angle defines a tangent line to the silhouette through one of the candidate points. This results in 180 candidate lines per frame, and a motion barcode is computed for each of these lines. In a video with N frames, each motion barcode is a binary sequence of length N. A barcode matrix is defined for each frame, having 180 rows and N columns: each row represents a tangent line, each column represents a frame, and each row is the motion barcode of the corresponding candidate line. Given a pair of corresponding frames from the two cameras, the distance between all possible pairs of candidate lines is computed by multiplying their motion barcode matrices, resulting in an affinity matrix of candidate lines. Given the N pairs of frames of the two cameras, we extract for each frame pair the single pair of candidate lines having the highest barcode correlation. This results in N pairs of possible matching epipolar lines, each having high barcode correlation.

RANSAC Sampling. The efficiency of the fundamental matrix computation can be broken into the initialization cost, the number of hypotheses that need to be generated, the cost of generating a hypothesis, and the cost of hypothesis verification. In both methods the cost of the model verification phase is identical, as it is independent of the model generation. The comparison is therefore over the number of RANSAC hypotheses required by each method. In the following we provide a detailed description.

The hypothesized model is generated as follows. In our method, three matching pairs with high barcode correlation are randomly selected from the pre-computed table of barcode correlations between all pairs of candidate lines. For the Sinha method, as described in [27], two hypothesized matching lines are extracted by sampling the directions of the tangents in one frame.
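The tangent-line sampling described above can be sketched as a support-point computation on the convex hull. This is a sketch under assumptions, not the authors' code: `tangent_lines` is an illustrative name, and returning lines in homogeneous (a, b, c) form with ax + by + c = 0 is an assumed representation.

```python
import numpy as np

def tangent_lines(hull_pts, n_angles=180):
    """Sample tangent lines to a silhouette's convex hull.

    hull_pts: (M, 2) vertices of the convex hull of the silhouette boundary
    (the candidate points).  Returns an (n_angles, 3) array of homogeneous
    lines (a, b, c); line k has its normal at angle k * (360 / n_angles)
    degrees and touches the hull at the supporting candidate point in that
    direction, so the default samples 180 tangents every 2 degrees.
    """
    angles = np.deg2rad(np.arange(n_angles) * 360.0 / n_angles)
    normals = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # (n, 2)
    # Supporting point: the hull vertex with maximal projection onto the normal.
    proj = normals @ hull_pts.T                                    # (n, M)
    support = hull_pts[np.argmax(proj, axis=1)]                    # (n, 2)
    c = -np.sum(normals * support, axis=1)
    return np.concatenate([normals, c[:, None]], axis=1)
```

Each returned line passes through one candidate point, so its motion barcode can be filled in by rasterizing the line in every frame.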
The third matching pair of lines was computed using the epipole generated by the first two lines and a tangent to a silhouette in another frame. Given three proposals for corresponding epipolar lines, the fundamental matrix was computed using the method described in [27]. The computation of the third matching pair of lines from the generated epipoles could be applied in our approach as well, requiring the selection of only two matching lines instead of three. This could improve the accuracy of the method. On the other hand, it requires additional computations for finding the exact tangents in each RANSAC iteration. We found empirically that sampling three lines is faster than sampling two lines together with the additional tangent computations.

The cost of each RANSAC iteration depends on (a) generating the line matches and (b) computing the fundamental matrix from the epipolar line homography and the epipoles. For the motion barcode method, the first part is instantaneous, as it involves only index selection, since matching pairs of lines are computed beforehand. For the baseline method, each iteration introduces the computation of six tangents, where the computation of the last pair of tangents
involves finding the frontier points with respect to the hypothesized epipoles. The second part is the same for all the methods and introduces the major cost of each iteration. We assume that the first part is instantaneous also in the baseline method, and consider the cost of each iteration to be the cost of the second part. Computing the motion barcode distance between all pairs of candidate lines adds computational effort to our method. This computational cost was equivalent to 35 iterations of RANSAC, which we added to the cost of our method.

Ground-Truth Points. For an accurate evaluation of the symmetric epipolar distance, we extracted matching frontier points across different views, using the given ground-truth silhouettes and the given ground-truth fundamental matrix. For each frame, points whose tangent line is within an angular deviation of 1° of the true epipolar line were extracted. A pair of points was considered frontier if their epipolar distance under the known fundamental matrix is below a small threshold. This results in a cloud of points that might be spread out unevenly. From these points we sampled ground-truth points that are at least 15 pixels apart, resulting in several dozen well-spread points per view.

7. Concluding Remarks

Motion barcodes were introduced as efficient temporal signatures for lines, signatures which are viewpoint invariant for matching epipolar lines. The effectiveness of motion barcodes was demonstrated in camera calibration using candidate epipolar lines. In this case, computing candidate fundamental matrices only from candidate lines that have matching motion barcodes reduced computational costs by about two orders of magnitude.

Acknowledgment. This research was supported by Google, by Intel ICRI-CI, by DFG, and by the Israel Science Foundation.

References

[1] Dancer dataset, INRIA, 4D repository. [2] Data set, Kung-Fu Girl, departments/irg3/kungfu/. [3] E. Aganj, J.-P. Pons, F. Ségonne, and R. Keriven.
Spatio-temporal shape from silhouette using four-dimensional Delaunay meshing. In ICCV'07, pages 1–8. [4] L. Ballan and G. M. Cortelazzo. Multimodal 3D shape recovery from texture, silhouette and shadow information. In 3D Data Processing, Visualization, and Transmission, Third International Symposium on. IEEE. [5] G. Ben-Artzi, M. Werman, and S. Peleg. Event retrieval using motion barcodes. In ICIP'15. [6] E. Boyer. On using silhouettes for camera calibration. In ACCV'06. [7] K. Cheung, S. Baker, and T. Kanade. Shape-from-silhouette of articulated objects and its use for human body kinematics estimation and motion capture. In CVPR'03, volume 1, pages I-77. [8] R. Cipolla, K. E. Astrom, and P. J. Giblin. Motion from the frontier of curved surfaces. In ICCV'95. [9] R. Cipolla and P. Giblin. Visual Motion of Curves and Surfaces. Cambridge University Press. [10] G. Csurka, C. Zeller, Z. Zhang, and O. D. Faugeras. Characterizing the uncertainty of the fundamental matrix. CVIU, 68(1):18–36. [11] M.-A. Drouin, P.-M. Jodoin, and J. Prémont. Camera-projector matching using an unstructured video stream. In CVPR'10 Workshop. IEEE. [12] E. B. Ermis, P. Clarot, P.-M. Jodoin, and V. Saligrama. Activity based matching in distributed camera networks. IEEE Trans. IP, 19(10). [13] M. A. Fischler and R. C. Bolles. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6). [14] K. Forbes, F. Nicolls, G. De Jager, and A. Voigt. Shape-from-silhouette with two mirrors and an uncalibrated camera. In ECCV'06. [15] J.-S. Franco and E. Boyer. Exact polyhedral visual hulls. In BMVC'03, volume 1. [16] Y. Furukawa, A. Sethi, J. Ponce, and D. Kriegman. Robust structure and motion from outlines of smooth curved surfaces. IEEE Trans. PAMI, 28(2). [17] R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. [18] C.
Hernández, F. Schmitt, and R. Cipolla. Silhouette coherence for camera calibration under circular motion. IEEE Trans. PAMI, 29(2): , [19] Y. Hoshen and S. Peleg. The information in temporal histograms. In IEEE WACV 15, pages , [20] A. Laurentini. The visual hull concept for silhouette-based image understanding. IEEE Trans. PAMI, 16(2): , [21] S. Lazebnik, E. Boyer, and J. Ponce. On computing exact visual hulls of solids bounded by smooth surfaces. In CVPR 01, volume 1, pages I 156, [22] P. R. S. Mendonca, K.-Y. Wong, and R. Cippolla. Epipolar geometry from profiles under circular motion. IEEE Trans. PAMI, 23(6): , [23] G. Miller and A. Hilton. Exact view-dependent visual-hulls. In ICPR 06, pages , [24] J. Porrill and S. Pollard. Curve matching and stereo calibration. Image and Vision Computing, 9(1):45 50, [25] P. Ramanathan, E. G. Steinbach, and B. Girod. Silhouettebased multiple-view camera calibration. In VMV, pages 3 10, [26] C. Schmid and A. Zisserman. The geometry and matching of curves in multiple views. In ECCV 98, pages
9 [27] S. N. Sinha and M. Pollefeys. Camera network calibration and synchronization from silhouettes in archived video. IJCV, 87(3): , , 2, 4, 5, 6, 7 [28] J. Starck and A. Hilton. Surface capture for performancebased animation. Computer Graphics and Applications, 27(3):21 31, [29] A. Stojanovic and M. Unger. A new evaluation criterion for point correspondences in stereo images. In N. Adami, A. Cavallaro, R. Leonardi, and P. Migliorati, editors, Analysis, Retrieval and Delivery of Multimedia Content, pages Springer, [30] K.-Y. Wong and R. Cipolla. Structure and motion from silhouettes. In ICCV 01, volume 2, pages , [31] G. Xu and Z. Zhang. Epipolar geometry in stereo, motion and object recognition: a unified approach, volume 6. Springer Science & Business Media, [32] H. Yamazoe, A. Utsumi, and S. Abe. Multiple camera calibration with bundled optimization using silhouette geometry constraints. In ICPR 06, volume 3, pages IEEE, [33] H. Zhang and K.-Y. Wong. Self-calibration of turntable sequences from silhouettes. IEEE Trans. PAMI, 31(1):5 14,
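As an implementation aside, the motion barcode described in the concluding remarks can be sketched in a few lines: a line's barcode is a binary sequence over frames, set to 1 whenever a foreground pixel lies on the line. This is a minimal illustration, not the authors' code; the helper names are hypothetical, and normalized cross-correlation is used here as one plausible choice of barcode similarity rather than the paper's exact distance.

```python
import numpy as np

def line_pixels(p0, p1, n_samples=200):
    # Sample integer (x, y) pixel coordinates along the segment p0 -> p1
    # (hypothetical helper; a Bresenham walk would also work).
    t = np.linspace(0.0, 1.0, n_samples)[:, None]
    pts = np.rint((1.0 - t) * np.asarray(p0, float) + t * np.asarray(p1, float))
    return np.unique(pts.astype(int), axis=0)

def motion_barcode(masks, p0, p1):
    # b[t] = 1 iff frame t has at least one foreground pixel on the line,
    # given a list of binary foreground masks indexed as mask[y, x].
    pts = line_pixels(p0, p1)
    xs, ys = pts[:, 0], pts[:, 1]
    return np.array([int(m[ys, xs].any()) for m in masks], dtype=int)

def barcode_similarity(b1, b2):
    # Normalized cross-correlation between two barcodes: near 1 for
    # corresponding epipolar lines, lower for unrelated lines.
    c1, c2 = b1 - b1.mean(), b2 - b2.mean()
    denom = np.linalg.norm(c1) * np.linalg.norm(c2)
    return float(c1 @ c2) / denom if denom > 0 else 0.0
```

With such a similarity in hand, the candidate-line search can be restricted to pairs whose barcode similarity exceeds a threshold before any fundamental-matrix hypothesis is scored, which is the source of the reported speedup.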