Feature-Based Image Mosaicing
Systems and Computers in Japan, Vol. 31, No. 7, 2000
Translated from Denshi Joho Tsushin Gakkai Ronbunshi, Vol. J82-D-II, No. 10, October 1999, pp.

Feature-Based Image Mosaicing

Naoki Chiba and Hiroshi Kano
Mechatronics Research Center, Sanyo Electric Co., Ltd., Hirakata, Japan

Michihiko Minoh
Center for Information and Multimedia Studies, Kyoto University, Kyoto, Japan

Masashi Yasuda
Mechatronics Research Center, Sanyo Electric Co., Ltd., Hirakata, Japan

SUMMARY

We propose an automatic image mosaicing method that can construct a panoramic image from a collection of digital still images. Our method is fast and robust enough to process a nonplanar scene with unrestricted camera motion. It combines two techniques. First, we use multiresolution patch-based optical flow estimation to establish feature correspondences, from which a homography is obtained automatically; this lets us transform images quickly and accurately. Second, we have developed a technique to obtain a homography from only three points, instead of the usual four, in order to divide a scene into triangles. Experiments on real images confirm the effectiveness of our method. Scripta Technica, Syst Comp Jpn, 31(7): 1-9, 2000.

1. Introduction

Image mosaicing has become an active area of research in photogrammetry, computer vision, image processing, and computer graphics. Traditional applications include the construction of aerial and satellite photographs. More recently, the creation of virtual environments has been attracting attention [2, 9]; such interactive systems can provide novel views from panoramic image mosaics. Other applications include video surveillance and image compression. Previous methods, however, impose limitations on camera motion or on the geometric properties of the scene being shot. Meanwhile, in the research area of motion analysis, optical flow estimation techniques have been investigated intensively.
Recently, a reliable optical flow technique was introduced for 3D shape recovery [3]. In the area of stereo camera calibration, a good method is available for obtaining the geometric relationship between two cameras, the epipolar constraint [12].

We propose a feature-based method that is fast and can deal with nonplanar scenes under unrestricted camera motion. The camera motion may include not only rotations such as panning or tilting, but also translations. As for the geometric properties of the scene, we consider not only planar scenes but also scenes of arbitrary depth, which are common indoors.

Our method consists of three steps. First, it establishes feature correspondences between consecutive images using optical flow estimation. Second, it computes a geometric transformation when the scene can be regarded as planar. Third, when the scene cannot be regarded as a planar surface, it divides the scene into several triangles, using the epipolar constraint between images to obtain a transformation for each triangle. The images are then warped with these transformations.

The remainder of this paper is structured as follows. After reviewing related work in Section 2, we show how to apply the Lucas-Kanade optical flow estimation to image mosaicing in Section 3. Section 4 describes a technique for image mosaicing when the scene is not planar. Section 5 presents experimental results on real scene images. We close with a discussion and future work.

2. Related Work

One of the previous methods uses a cylindrical projection for image mosaicing [2]. This method has two problems: it needs a tripod to restrict the camera motion to be horizontal, and the focal length of the camera must be known in advance. Recently, several methods have been introduced to solve these problems by using planar projective transformation. Given two images taken from the same viewpoint, or images of a planar scene taken from different viewpoints, the relationship between the images can be described by a linear projective transformation called a homography. Once we obtain the homography between two images, we can construct a panoramic image by transferring one image onto the other with the homography matrix. Szeliski introduced a method that obtains the optical flow vectors and the homography between images by a nonlinear minimization technique [8]. Zoghlami and colleagues introduced a method based on a random sampling framework that obtains feature correspondence and a homography at the same time [13]. These methods, however, share two problems. First, their computations are expensive.
Second, Szeliski's method tends to fall into a local minimum, since it is based on a nonlinear minimization framework.

Several methods have been introduced to deal with nonplanar scenes, but they suffer from similar problems: their computational cost is high, and the solutions are unstable because the methods rely on a nonlinear minimization framework with many parameters. One reason is that these methods do not use feature correspondences to obtain the transformation; instead, they try to obtain the correspondences and the transformation at the same time. We avoid this by computing feature correspondences first.

3. Feature-Based Image Registration

3.1. Optical flow estimation

The Lucas-Kanade method is one of the best methods for optical flow estimation: it is fast to compute, easy to implement, and controllable because a tracking confidence measure is available [1]. The major drawback of gradient methods, including Lucas-Kanade, is that they cannot deal with large motion. A coarse-to-fine multiresolution strategy has been applied to overcome this problem. A major problem still remains, however: low-textured regions yield unreliable results. We solved this problem in Ref. 3 with a dilation-based filling technique applied after thresholding unreliable estimates at each pyramid level.

3.2. Overlap extraction

When the difference between images is large, we need to extract the overlapping region before applying optical flow estimation, because the estimation may fail even though it incorporates a hierarchical multiresolution technique. We find a rough displacement by maximizing the score of the normalized cross-correlation over the overlapping region, defined by

    C(d) = (1 / MN) Sum_{(x,y) in w} [I_1(x, y) - Mean(I_1)][I_2(x + d_x, y + d_y) - Mean(I_2)] / sqrt(V_1 V_2)    (1)

where C is the cross-correlation coefficient of an overlapping region w (of size M x N) between images I_1 and I_2 at displacement d.
Mean(I_i) and V_i are the mean and the variance, respectively, of the overlapping part in each image. We use low-resolution images for this search to speed up the computation, because the subsequent optical flow estimation obtains a finer displacement. Figure 1 shows the original images and Fig. 2 the extracted overlapping region.

Fig. 1. Original images.
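The coarse search of Section 3.2 can be sketched as follows. This is a minimal illustration, not the authors' implementation; the search radius and the minimum-overlap guard of 8 pixels are our own assumptions.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation coefficient of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def rough_displacement(img1, img2, max_shift):
    """Exhaustive search for the integer displacement (dx, dy) that
    maximizes normalized cross-correlation over the overlap of two
    low-resolution images."""
    h, w = img1.shape
    best, best_d = -2.0, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # overlapping window between img1 and img2 shifted by (dx, dy)
            x0, x1 = max(0, dx), min(w, w + dx)
            y0, y1 = max(0, dy), min(h, h + dy)
            if x1 - x0 < 8 or y1 - y0 < 8:
                continue  # skip displacements with too little overlap
            a = img1[y0:y1, x0:x1]
            b = img2[y0 - dy:y1 - dy, x0 - dx:x1 - dx]
            c = ncc(a, b)
            if c > best:
                best, best_d = c, (dx, dy)
    return best_d
```

In practice the paper runs this on strongly reduced images (for example 40 x 30), so the quadratic search over displacements stays cheap.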
Fig. 2. Overlap extraction.

3.3. Feature correspondence

After obtaining the overlapping region of the two images, we first select prominent features in the first image from their image derivatives and Hessians [11]. This method automatically finds small rectangular regions such as corners. Next, we estimate sparse optical flow vectors for small rectangular patches, such as 13 x 13 pixels, using the improved Lucas-Kanade method described in Section 3.1. To obtain each point correspondence with subpixel accuracy, we bilinearly interpolate the results of the four nearest patches. We discard false correspondences by measuring the cross-correlation coefficients. Figure 3 shows an example of the selected and corresponding features.

3.4. Planar projective transformation

When the scene is a planar surface, or when the images are taken from the same point of view, the images are related by a linear projective transformation called a homography. Figure 4 illustrates the principle of the planar projective transformation. When we see a point M on a planar surface from two different viewpoints C_1 and C_2, we can transform the image coordinates m_1 = (x_1, y_1, 1)^T to m_2 = (x_2, y_2, 1)^T using the following 3 x 3 planar projective transformation matrix H [5]:

    km_2 = Hm_1    (2)

where k is an arbitrary scale factor and the image coordinates m_1 and m_2 are represented in homogeneous coordinates. With H = (h_ij), this relationship can be rewritten as

    x_2 = (h_11 x_1 + h_12 y_1 + h_13) / (h_31 x_1 + h_32 y_1 + h_33)
    y_2 = (h_21 x_1 + h_22 y_1 + h_23) / (h_31 x_1 + h_32 y_1 + h_33)    (3)

We solve for the homography from feature correspondences. Four pairs of corresponding features suffice, because each pair provides two equations for the eight unknown parameters. If we have more than four pairs, we can use a least-squares method to obtain the homography.

3.5. Homography-based image mosaicing

By using the planar projective transformation, we can extend the image plane virtually. The dotted line in Fig.
4 shows an extended region that is invisible from viewpoint C_1 but visible from viewpoint C_2. Pixels in this area at viewpoint C_1 can be obtained by transferring pixels from C_2 with the homography between the two images. Extending the image plane corresponds to placing a wide-angle lens with a short focal length.

Fig. 3. Feature correspondence.
Fig. 4. Planar projective transformation.

We blend pixel intensities in the overlapping region. Simply averaging the intensities there causes a visible seam due to the brightness difference between the two images, so we blend intensities with weights depending on the distance to each image.

When we have three or more images, we cascade the transformations iteratively. We select the middle image as the reference image, judging from the rough alignment obtained with the normalized correlation described in Section 3.2. After obtaining the homographies between consecutive images, we obtain the transformation from each image to the reference image. Even when an image has no overlapping portion with the reference image, its transformation to the reference can be obtained by composing the consecutive homographies:

    H_{1->r} = H_{(r-1)->r} ... H_{2->3} H_{1->2}    (4)

4. Nonplanar Scene

We propose a method that can construct a panorama from nonplanar, arbitrary-depth scenes with unrestricted camera motion. If the assumption behind a single homography is not satisfied, that is, if images of a nonplanar scene of arbitrary depth are taken from different viewpoints, misregistration caused by motion parallax appears; Fig. 9 (left) shows an example. We propose a feature-based method that triangulates the scene using corresponding feature points between images and obtains a homography for each triangle. We use the Delaunay triangulation, a well-known triangulation method based on Voronoi diagrams [5].

4.1. Piecewise homography computation

In general we need at least four points to obtain a homography, since it has eight independent parameters. (It is a 3 x 3 matrix and is invariant to scaling.) We describe a novel technique to compute a homography from three points using the epipolar constraint. The epipolar constraint between two images is described by the fundamental matrix F: for a point m on an image I and the corresponding point m' on the other image I',

    m'^T F m = 0    (5)

Recently a good method was developed to obtain the fundamental matrix between uncalibrated cameras [12]. Using it, we first obtain the fundamental matrix between the images from their corresponding points. Substituting Eq. (2) into Eq. (5) gives

    m^T (H^T F) m = 0    (6)

Since Eq. (6) holds for any point m, the matrix H^T F must be skew-symmetric:

    H^T F + (H^T F)^T = 0    (7)

Equation (7) yields 6 equations, because the three diagonal elements and the three sums of symmetric off-diagonal elements must all be 0. We already have 6 equations from three pairs of corresponding points via Eq. (2). We can therefore compute the homography by least squares from three points and the fundamental matrix, because a total of 12 equations are available for the eight unknown parameters.

4.2. Homography test

We can decide whether to use a single homography or to divide the scene into several triangles by the following test. We measure the reprojection error D of a homography as

    D = (1/N) Sum_{i=1}^{N} || p_i - p^_i ||    (8)

where p_i is the coordinate of a feature point, p^_i is the coordinate of the feature point reprojected by the homography, and N is the number of feature points. If D is larger than a predefined threshold, we divide the scene into triangles.
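As an illustration of Section 4.1, the following sketch stacks the 12 linear equations, six from Eq. (2) for three correspondences and six from the skew-symmetry condition of Eq. (7), and solves for H by least squares via SVD. This is our own reading of the method with hypothetical synthetic data, not the authors' code.

```python
import numpy as np

def skew(v):
    """3x3 skew-symmetric (cross-product) matrix of a 3-vector."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def homography_from_three_points(pts1, pts2, F):
    """Least-squares homography from 3 correspondences and the
    fundamental matrix F: two equations per point from k*m2 = H*m1
    (Eq. 2) plus six equations forcing S = H^T F to be skew-symmetric
    (Eq. 7), i.e. 12 equations for the 9 entries of H (8 dof)."""
    rows = []
    for (x1, y1), (x2, y2) in zip(pts1, pts2):
        m1 = np.array([x1, y1, 1.0])
        # m2 x (H m1) = 0 gives two independent linear equations in H
        rows.append(np.concatenate([-m1, np.zeros(3), x2 * m1]))
        rows.append(np.concatenate([np.zeros(3), -m1, y2 * m1]))
    # S_ij = sum_k H[k,i] F[k,j]; require S_ii = 0 and S_ij + S_ji = 0
    for i, j in [(0, 0), (1, 1), (2, 2), (0, 1), (0, 2), (1, 2)]:
        r = np.zeros(9)
        for k in range(3):
            r[3 * k + i] += F[k, j]          # coefficient of H[k,i]
            if i != j:
                r[3 * k + j] += F[k, i]      # coefficient of H[k,j]
        rows.append(r)
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 3)              # null vector of A, up to scale
```

With exact synthetic data (a plane-induced homography H = R + t n^T / d together with F = [t]_x R, which satisfy the skew-symmetry condition), this recovers H up to scale from just three points.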
4.3. Image mosaicing using triangular patches

Here we show how to build image mosaics from triangular patches. First, we compute an average homography H from all of the feature correspondences. Then we obtain a homography H_t for each patch after triangulation. Inside the overlapping region we use the per-triangle homographies H_t; outside the overlapping region we use the global homography H.

Fig. 5. Landscape.

5. Experiments

This section presents the results of applying our feature-based techniques to image mosaicing. The images were taken with a hand-held digital still camera, with no tripod.

5.1. Single planar scene

Figure 5 shows an example. For an image size of 640 x 480 pixels, processing took 11 seconds on a 300 MHz Pentium PC: 2 seconds for extracting the overlapping region, 4 seconds for matching features and obtaining the homography, 3 seconds for warping the images, and 2 seconds for blending intensities weighted by the distance from the boundaries. The reduced image size used for obtaining the overlapping region was 40 x 30. We used four pyramid levels for optical flow estimation.

Figure 6 shows a comparison with a cylindrical projection. Our result was better even though the camera motion contained twisting around the optical axis and the scene had perspective distortion. The result with the cylindrical projection was poor because the camera motion contained rotation beyond pure panning. We did not blend intensities, for evaluation purposes.

Figure 7 shows another example. It demonstrates that our method does not restrict the camera motion to horizontal rotation; it can build a panoramic image even when the motion includes tilting or twisting as well as panning. Although all of the images are color, we used only the G channel of RGB to compute the transformations, for fast processing.
This is because the human eye is most sensitive to green.

5.2. Multiple planar scene

Figure 9 shows the result for a multiple-plane scene. Since the scene has a certain depth range, it cannot be regarded as a single planar surface. The overlapping part of the image is divided into 40 triangles using feature points. The average absolute intensity difference in the overlapping region has been reduced to 16.0% compared with a single planar model. Some triangles do not correspond to actual planes: although the vertices lie on planes, we cannot guarantee that the interior of a triangle contains only flat objects. When the triangles lie on actual planes there is no distortion, but when they do not, there is significant misregistration. To solve this problem, we postprocess such triangles. We first determine whether a triangle contains nonplanar objects by measuring the intensity difference with normalized correlation.

Fig. 6. Clock tower (left: planar projective transform; right: cylindrical projection).

When the coefficients are
beyond the predefined threshold, we take the whole region from one of the images rather than blending. This operation reduces the geometric distortions.

Fig. 7. Canal.
Fig. 8. Triangulation.

5.3. Comparison with previous work

We compared the processing time with one of the previous methods, Szeliski's method. This method obtains the homography by minimizing the following sum of squared intensity differences E between two images I and I':

    E = Sum_i [ I'(x'_i, y'_i) - I(x_i, y_i) ]^2    (9)

It finds the optical flow vectors and the homography by minimizing the cost E with the Levenberg-Marquardt nonlinear method.

Fig. 9. Arbitrary depth scene (left: single; right: multiple).
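The cost of Eq. (9) can be evaluated directly by mapping each pixel through the homography and sampling I' with bilinear interpolation. The following is our own minimal sketch of that evaluation, not Szeliski's implementation; pixels that map outside I' are simply skipped.

```python
import numpy as np

def ssd_cost(I, Ip, H):
    """Sum of squared differences E = sum_i [I'(x'_i, y'_i) - I(x_i, y_i)]^2,
    where (x', y') is (x, y) mapped through homography H and I' is sampled
    with bilinear interpolation."""
    h, w = I.shape
    E = 0.0
    for y in range(h):
        for x in range(w):
            q = H @ np.array([x, y, 1.0])
            xp, yp = q[0] / q[2], q[1] / q[2]   # warped coordinates
            if 0 <= xp <= w - 1 and 0 <= yp <= h - 1:
                x0, y0 = int(xp), int(yp)
                x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
                fx, fy = xp - x0, yp - y0
                # bilinear sample of I' at (xp, yp)
                v = ((1 - fx) * (1 - fy) * Ip[y0, x0] + fx * (1 - fy) * Ip[y0, x1]
                     + (1 - fx) * fy * Ip[y1, x0] + fx * fy * Ip[y1, x1])
                E += (v - I[y, x]) ** 2
    return E
```

Minimizing E over the eight homography parameters (as Levenberg-Marquardt does) requires re-warping I' at every iteration, which is why the nonlinear approach is expensive.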
However, Szeliski's nonlinear minimization method has two problems, although it has been widely used. First, it is computationally expensive, requiring iterative warpings. Second, it easily falls into a local minimum, a problem common to nonlinear minimization frameworks; a good initial estimate is therefore very important.

Table 1 shows a comparison with Szeliski's nonlinear minimization method: our method is 4 to 7 times faster. We measured the processing time of computing homographies on an SGI Indigo 2 (195 MHz). We had to prepare four levels of pyramid to avoid local minima, and we fixed the maximum number of iterations.

5.4. Accuracy of overlap extraction

We measured the accuracy of the overlap extraction described in Section 3.2, using 200 pairs of images taken with a hand-held camera. The image sizes range from 240 x 320 to 1024 x 768; a size of 40 x 30 was used for overlap extraction. Table 2 shows the result: we counted the number of image pairs for which the extracted overlap matched the correct answer determined by hand. We ran the experiment not only with normalized correlation but also with the sum of squared differences (SSD). The image pairs on which SSD failed but correlation succeeded had large brightness differences. Normalized correlation took 2 seconds to process, 0.5 second longer than SSD, on a 300 MHz Pentium II with an image size of 640 x 480.

Table 2. Accuracy of overlap extraction

6. Discussion

The chief advantage of our method is that it computes homographies faster and more accurately than previous methods. Mosaicing techniques that use planar projective transformation can be divided into two types: those based on a nonlinear minimization framework and those based on features. Our method is 4 to 7 times faster than Szeliski's method, which is based on nonlinear minimization. Feature-based techniques can in turn be divided into two types.
One uses manual feature correspondence and the other uses automatic feature correspondence. There is an automatic method that establishes feature correspondence by random sampling [13]; it was reported to take up to 15 minutes depending on the images. Our method is much faster. Traditional feature matching techniques such as area-based matching could also be used, but they take longer to process and tend to produce false matches when the scene contains repetitive patterns. Our method is faster and more accurate than these, because it is based on a gradient-descent method.

All image-based techniques have difficulty with low-textured images, and ours is no exception, although it does not depend on the geometric properties of the scene. With low-textured images, missing correspondences or false matches cannot be avoided. Moving objects such as people or automobiles can also be a problem, since our method assumes static scenes. We can avoid this problem with robust estimation when the scene can be regarded as a single plane or is taken from the same viewpoint; when the scene has arbitrary depth and is taken from different viewpoints, false matches must be corrected by hand.

Although our overlap extraction is based on translation, our method tolerates some rotation around the optical center. We have confirmed that it tolerates around 10 degrees of rotation, although it is not as robust as Zoghlami's method [13].

Table 1. Processing time comparison (seconds)

7. Conclusions

In this paper, we have presented a feature-based image mosaicing method. Its greatest advantage is that it can build a panoramic image of a nonplanar scene under unrestricted camera motion. First, we showed that our feature-based method is faster and more robust than previous featureless methods, because it is based on linear techniques.
Second, for a scene that cannot be assumed to be a single plane and is taken from different viewpoints, we described a method of dividing the images into triangles using corresponding features. Our method provides the homography for each triangle from three points using the epipolar constraint, whereas existing methods require four points. The technique is fast and robust because it is linear.

In future work, we plan to develop a technique using line features, from which we expect higher accuracy. We are also interested in creating a virtual environment that can provide novel views from any viewpoint in 3D space.

REFERENCES

1. Barron J, Fleet D, Beauchemin S. Performance of optical flow techniques. Int J Comput Vision 1994;12.
2. Chen SE. QuickTime VR: An image-based approach to virtual environment navigation. SIGGRAPH 95.
3. Chiba N, Kanade T. A tracker for broken and closely spaced lines. Syst Comput Japan 1999;30.
4. Kumar R, Anandan P, Hanna K. Direct recovery of shape from multiple views: A parallax based approach. Proc ICPR.
5. Faugeras O. Three-dimensional computer vision: A geometric viewpoint. MIT Press.
6. Lucas B, Kanade T. An iterative image registration technique with an application to stereo vision. 7th Int Joint Conference on Artificial Intelligence (IJCAI-81).
7. Szeliski R. Video mosaics for virtual environments. IEEE Computer Graphics and Applications.
8. Szeliski R. Image mosaicing for tele-reality applications. CRL Technical Report 94/2, Digital Equipment Corporation.
9. Szeliski R, Shum HY. Creating full view panoramic image mosaics and environment maps. SIGGRAPH 97.
10. Sawhney HS, Ayer S. Compact representations of videos through dominant and multiple motion estimation. IEEE PAMI.
11. Tomasi C, Kanade T. Shape and motion from image streams: A factorization method, Part 3: Detection and tracking of point features. CMU-CS report, Carnegie Mellon University.
12. Zhang Z. Determining the epipolar geometry and its uncertainty: A review.
Int J Comput Vision 1998;27.
13. Zoghlami I, Faugeras O, Deriche R. Using geometric corners to build a 2D mosaic from a set of images. CVPR 97.

AUTHORS

Naoki Chiba (member) received his B.S. degree in mechanical engineering from Kobe University. He then joined the Research and Development Headquarters of Sanyo Electric Company, where he is currently a chief researcher. He was a visiting scientist at the Robotics Institute of Carnegie Mellon University beginning in 1995. His current research interests include feature tracking, image mosaicing, and 3D reconstruction.

Hiroshi Kano (member) received his M.S. degree from Kyoto University and joined the Research and Development Headquarters of Sanyo Electric Company in 1984, where he is currently a lab leader. He was a visiting scientist at the Robotics Institute of Carnegie Mellon University beginning in 1993. He received his Ph.D. degree from Kyoto University.
Michihiko Minoh (member) received his Ph.D. degree from Kyoto University. He was a visiting research scientist at the University of Massachusetts beginning in 1987. He became an associate professor at Kyoto University in 1989 and later a professor.

Masashi Yasuda (non-member) received his M.S. degree from Kyoto University and joined the Research and Development Headquarters of Sanyo Electric Company in 1980, where he is currently a section leader. He received his Ph.D. degree from Kyoto University.
More informationObject Recognition with Invariant Features
Object Recognition with Invariant Features Definition: Identify objects or scenes and determine their pose and model parameters Applications Industrial automation and inspection Mobile robots, toys, user
More informationComputer Vision Lecture 17
Computer Vision Lecture 17 Epipolar Geometry & Stereo Basics 13.01.2015 Bastian Leibe RWTH Aachen http://www.vision.rwth-aachen.de leibe@vision.rwth-aachen.de Announcements Seminar in the summer semester
More informationComputer Vision Lecture 17
Announcements Computer Vision Lecture 17 Epipolar Geometry & Stereo Basics Seminar in the summer semester Current Topics in Computer Vision and Machine Learning Block seminar, presentations in 1 st week
More informationAccurate 3D Face and Body Modeling from a Single Fixed Kinect
Accurate 3D Face and Body Modeling from a Single Fixed Kinect Ruizhe Wang*, Matthias Hernandez*, Jongmoo Choi, Gérard Medioni Computer Vision Lab, IRIS University of Southern California Abstract In this
More informationDense 3D Reconstruction. Christiano Gava
Dense 3D Reconstruction Christiano Gava christiano.gava@dfki.de Outline Previous lecture: structure and motion II Structure and motion loop Triangulation Today: dense 3D reconstruction The matching problem
More informationComputer Vision Lecture 20
Computer Perceptual Vision and Sensory WS 16/17 Augmented Computing Computer Perceptual Vision and Sensory WS 16/17 Augmented Computing Computer Perceptual Vision and Sensory WS 16/17 Augmented Computing
More informationAn Algorithm for Seamless Image Stitching and Its Application
An Algorithm for Seamless Image Stitching and Its Application Jing Xing, Zhenjiang Miao, and Jing Chen Institute of Information Science, Beijing JiaoTong University, Beijing 100044, P.R. China Abstract.
More informationMultiple View Geometry
Multiple View Geometry Martin Quinn with a lot of slides stolen from Steve Seitz and Jianbo Shi 15-463: Computational Photography Alexei Efros, CMU, Fall 2007 Our Goal The Plenoptic Function P(θ,φ,λ,t,V
More informationImage stitching. Digital Visual Effects Yung-Yu Chuang. with slides by Richard Szeliski, Steve Seitz, Matthew Brown and Vaclav Hlavac
Image stitching Digital Visual Effects Yung-Yu Chuang with slides by Richard Szeliski, Steve Seitz, Matthew Brown and Vaclav Hlavac Image stitching Stitching = alignment + blending geometrical registration
More informationFast Optical Flow Using Cross Correlation and Shortest-Path Techniques
Digital Image Computing: Techniques and Applications. Perth, Australia, December 7-8, 1999, pp.143-148. Fast Optical Flow Using Cross Correlation and Shortest-Path Techniques Changming Sun CSIRO Mathematical
More informationOutdoor Scene Reconstruction from Multiple Image Sequences Captured by a Hand-held Video Camera
Outdoor Scene Reconstruction from Multiple Image Sequences Captured by a Hand-held Video Camera Tomokazu Sato, Masayuki Kanbara and Naokazu Yokoya Graduate School of Information Science, Nara Institute
More informationParticle Tracking. For Bulk Material Handling Systems Using DEM Models. By: Jordan Pease
Particle Tracking For Bulk Material Handling Systems Using DEM Models By: Jordan Pease Introduction Motivation for project Particle Tracking Application to DEM models Experimental Results Future Work References
More informationApplication questions. Theoretical questions
The oral exam will last 30 minutes and will consist of one application question followed by two theoretical questions. Please find below a non exhaustive list of possible application questions. The list
More informationDense 3D Reconstruction. Christiano Gava
Dense 3D Reconstruction Christiano Gava christiano.gava@dfki.de Outline Previous lecture: structure and motion II Structure and motion loop Triangulation Wide baseline matching (SIFT) Today: dense 3D reconstruction
More informationA COMPREHENSIVE TOOL FOR RECOVERING 3D MODELS FROM 2D PHOTOS WITH WIDE BASELINES
A COMPREHENSIVE TOOL FOR RECOVERING 3D MODELS FROM 2D PHOTOS WITH WIDE BASELINES Yuzhu Lu Shana Smith Virtual Reality Applications Center, Human Computer Interaction Program, Iowa State University, Ames,
More informationLeow Wee Kheng CS4243 Computer Vision and Pattern Recognition. Motion Tracking. CS4243 Motion Tracking 1
Leow Wee Kheng CS4243 Computer Vision and Pattern Recognition Motion Tracking CS4243 Motion Tracking 1 Changes are everywhere! CS4243 Motion Tracking 2 Illumination change CS4243 Motion Tracking 3 Shape
More informationReal-time Generation and Presentation of View-dependent Binocular Stereo Images Using a Sequence of Omnidirectional Images
Real-time Generation and Presentation of View-dependent Binocular Stereo Images Using a Sequence of Omnidirectional Images Abstract This paper presents a new method to generate and present arbitrarily
More informationMiniature faking. In close-up photo, the depth of field is limited.
Miniature faking In close-up photo, the depth of field is limited. http://en.wikipedia.org/wiki/file:jodhpur_tilt_shift.jpg Miniature faking Miniature faking http://en.wikipedia.org/wiki/file:oregon_state_beavers_tilt-shift_miniature_greg_keene.jpg
More informationLucas-Kanade Without Iterative Warping
3 LucasKanade Without Iterative Warping Alex RavAcha School of Computer Science and Engineering The Hebrew University of Jerusalem 91904 Jerusalem, Israel EMail: alexis@cs.huji.ac.il Abstract A significant
More informationLocal Image Registration: An Adaptive Filtering Framework
Local Image Registration: An Adaptive Filtering Framework Gulcin Caner a,a.murattekalp a,b, Gaurav Sharma a and Wendi Heinzelman a a Electrical and Computer Engineering Dept.,University of Rochester, Rochester,
More informationComputer Vision Lecture 20
Computer Perceptual Vision and Sensory WS 16/76 Augmented Computing Many slides adapted from K. Grauman, S. Seitz, R. Szeliski, M. Pollefeys, S. Lazebnik Computer Vision Lecture 20 Motion and Optical Flow
More informationHomographies and RANSAC
Homographies and RANSAC Computer vision 6.869 Bill Freeman and Antonio Torralba March 30, 2011 Homographies and RANSAC Homographies RANSAC Building panoramas Phototourism 2 Depth-based ambiguity of position
More informationReal-Time Stereo Mosaicing using Feature Tracking
2011 IEEE International Symposium on Multimedia Real- Stereo Mosaicing using Feature Tracking Marc Vivet Computer Science Department Universitat Autònoma de Barcelona Barcelona, Spain Shmuel Peleg School
More informationVisual Tracking of Planes with an Uncalibrated Central Catadioptric Camera
The 29 IEEE/RSJ International Conference on Intelligent Robots and Systems October 11-15, 29 St. Louis, USA Visual Tracking of Planes with an Uncalibrated Central Catadioptric Camera A. Salazar-Garibay,
More informationStereo II CSE 576. Ali Farhadi. Several slides from Larry Zitnick and Steve Seitz
Stereo II CSE 576 Ali Farhadi Several slides from Larry Zitnick and Steve Seitz Camera parameters A camera is described by several parameters Translation T of the optical center from the origin of world
More informationFLY THROUGH VIEW VIDEO GENERATION OF SOCCER SCENE
FLY THROUGH VIEW VIDEO GENERATION OF SOCCER SCENE Naho INAMOTO and Hideo SAITO Keio University, Yokohama, Japan {nahotty,saito}@ozawa.ics.keio.ac.jp Abstract Recently there has been great deal of interest
More informationFinal Exam Study Guide CSE/EE 486 Fall 2007
Final Exam Study Guide CSE/EE 486 Fall 2007 Lecture 2 Intensity Sufaces and Gradients Image visualized as surface. Terrain concepts. Gradient of functions in 1D and 2D Numerical derivatives. Taylor series.
More informationStereo. 11/02/2012 CS129, Brown James Hays. Slides by Kristen Grauman
Stereo 11/02/2012 CS129, Brown James Hays Slides by Kristen Grauman Multiple views Multi-view geometry, matching, invariant features, stereo vision Lowe Hartley and Zisserman Why multiple views? Structure
More informationBSB663 Image Processing Pinar Duygulu. Slides are adapted from Selim Aksoy
BSB663 Image Processing Pinar Duygulu Slides are adapted from Selim Aksoy Image matching Image matching is a fundamental aspect of many problems in computer vision. Object or scene recognition Solving
More informationToday. Stereo (two view) reconstruction. Multiview geometry. Today. Multiview geometry. Computational Photography
Computational Photography Matthias Zwicker University of Bern Fall 2009 Today From 2D to 3D using multiple views Introduction Geometry of two views Stereo matching Other applications Multiview geometry
More informationFace Tracking : An implementation of the Kanade-Lucas-Tomasi Tracking algorithm
Face Tracking : An implementation of the Kanade-Lucas-Tomasi Tracking algorithm Dirk W. Wagener, Ben Herbst Department of Applied Mathematics, University of Stellenbosch, Private Bag X1, Matieland 762,
More informationDominant plane detection using optical flow and Independent Component Analysis
Dominant plane detection using optical flow and Independent Component Analysis Naoya OHNISHI 1 and Atsushi IMIYA 2 1 School of Science and Technology, Chiba University, Japan Yayoicho 1-33, Inage-ku, 263-8522,
More informationObject and Motion Recognition using Plane Plus Parallax Displacement of Conics
Object and Motion Recognition using Plane Plus Parallax Displacement of Conics Douglas R. Heisterkamp University of South Alabama Mobile, AL 6688-0002, USA dheister@jaguar1.usouthal.edu Prabir Bhattacharya
More informationCSE 252B: Computer Vision II
CSE 252B: Computer Vision II Lecturer: Serge Belongie Scribes: Jeremy Pollock and Neil Alldrin LECTURE 14 Robust Feature Matching 14.1. Introduction Last lecture we learned how to find interest points
More informationCSE 527: Introduction to Computer Vision
CSE 527: Introduction to Computer Vision Week 5 - Class 1: Matching, Stitching, Registration September 26th, 2017 ??? Recap Today Feature Matching Image Alignment Panoramas HW2! Feature Matches Feature
More information3D FACE RECONSTRUCTION BASED ON EPIPOLAR GEOMETRY
IJDW Volume 4 Number January-June 202 pp. 45-50 3D FACE RECONSRUCION BASED ON EPIPOLAR GEOMERY aher Khadhraoui, Faouzi Benzarti 2 and Hamid Amiri 3,2,3 Signal, Image Processing and Patterns Recognition
More informationAccurate and Dense Wide-Baseline Stereo Matching Using SW-POC
Accurate and Dense Wide-Baseline Stereo Matching Using SW-POC Shuji Sakai, Koichi Ito, Takafumi Aoki Graduate School of Information Sciences, Tohoku University, Sendai, 980 8579, Japan Email: sakai@aoki.ecei.tohoku.ac.jp
More informationImage Transfer Methods. Satya Prakash Mallick Jan 28 th, 2003
Image Transfer Methods Satya Prakash Mallick Jan 28 th, 2003 Objective Given two or more images of the same scene, the objective is to synthesize a novel view of the scene from a view point where there
More informationOn Plane-Based Camera Calibration: A General Algorithm, Singularities, Applications
ACCEPTED FOR CVPR 99. VERSION OF NOVEMBER 18, 2015. On Plane-Based Camera Calibration: A General Algorithm, Singularities, Applications Peter F. Sturm and Stephen J. Maybank Computational Vision Group,
More informationA Robust Two Feature Points Based Depth Estimation Method 1)
Vol.31, No.5 ACTA AUTOMATICA SINICA September, 2005 A Robust Two Feature Points Based Depth Estimation Method 1) ZHONG Zhi-Guang YI Jian-Qiang ZHAO Dong-Bin (Laboratory of Complex Systems and Intelligence
More informationAdaptive Multi-Stage 2D Image Motion Field Estimation
Adaptive Multi-Stage 2D Image Motion Field Estimation Ulrich Neumann and Suya You Computer Science Department Integrated Media Systems Center University of Southern California, CA 90089-0781 ABSRAC his
More informationLocal Feature Detectors
Local Feature Detectors Selim Aksoy Department of Computer Engineering Bilkent University saksoy@cs.bilkent.edu.tr Slides adapted from Cordelia Schmid and David Lowe, CVPR 2003 Tutorial, Matthew Brown,
More informationCS5670: Computer Vision
CS5670: Computer Vision Noah Snavely, Zhengqi Li Stereo Single image stereogram, by Niklas Een Mark Twain at Pool Table", no date, UCR Museum of Photography Stereo Given two images from different viewpoints
More informationStereo Image Rectification for Simple Panoramic Image Generation
Stereo Image Rectification for Simple Panoramic Image Generation Yun-Suk Kang and Yo-Sung Ho Gwangju Institute of Science and Technology (GIST) 261 Cheomdan-gwagiro, Buk-gu, Gwangju 500-712 Korea Email:{yunsuk,
More informationPlayer Viewpoint Video Synthesis Using Multiple Cameras
Player Viewpoint Video Synthesis Using Multiple Cameras Kenji Kimura *, Hideo Saito Department of Information and Computer Science Keio University, Yokohama, Japan * k-kimura@ozawa.ics.keio.ac.jp, saito@ozawa.ics.keio.ac.jp
More informationMatching. Compare region of image to region of image. Today, simplest kind of matching. Intensities similar.
Matching Compare region of image to region of image. We talked about this for stereo. Important for motion. Epipolar constraint unknown. But motion small. Recognition Find object in image. Recognize object.
More informationFeature Tracking and Optical Flow
Feature Tracking and Optical Flow Prof. D. Stricker Doz. G. Bleser Many slides adapted from James Hays, Derek Hoeim, Lana Lazebnik, Silvio Saverse, who 1 in turn adapted slides from Steve Seitz, Rick Szeliski,
More informationMultiview Stereo COSC450. Lecture 8
Multiview Stereo COSC450 Lecture 8 Stereo Vision So Far Stereo and epipolar geometry Fundamental matrix captures geometry 8-point algorithm Essential matrix with calibrated cameras 5-point algorithm Intersect
More informationOmni-directional Multi-baseline Stereo without Similarity Measures
Omni-directional Multi-baseline Stereo without Similarity Measures Tomokazu Sato and Naokazu Yokoya Graduate School of Information Science, Nara Institute of Science and Technology 8916-5 Takayama, Ikoma,
More informationUnit 3 Multiple View Geometry
Unit 3 Multiple View Geometry Relations between images of a scene Recovering the cameras Recovering the scene structure http://www.robots.ox.ac.uk/~vgg/hzbook/hzbook1.html 3D structure from images Recover
More information3D Environment Measurement Using Binocular Stereo and Motion Stereo by Mobile Robot with Omnidirectional Stereo Camera
3D Environment Measurement Using Binocular Stereo and Motion Stereo by Mobile Robot with Omnidirectional Stereo Camera Shinichi GOTO Department of Mechanical Engineering Shizuoka University 3-5-1 Johoku,
More informationRecap: Features and filters. Recap: Grouping & fitting. Now: Multiple views 10/29/2008. Epipolar geometry & stereo vision. Why multiple views?
Recap: Features and filters Epipolar geometry & stereo vision Tuesday, Oct 21 Kristen Grauman UT-Austin Transforming and describing images; textures, colors, edges Recap: Grouping & fitting Now: Multiple
More informationDense 3-D Reconstruction of an Outdoor Scene by Hundreds-baseline Stereo Using a Hand-held Video Camera
Dense 3-D Reconstruction of an Outdoor Scene by Hundreds-baseline Stereo Using a Hand-held Video Camera Tomokazu Satoy, Masayuki Kanbaray, Naokazu Yokoyay and Haruo Takemuraz ygraduate School of Information
More informationImage warping and stitching
Image warping and stitching Thurs Oct 15 Last time Feature-based alignment 2D transformations Affine fit RANSAC 1 Robust feature-based alignment Extract features Compute putative matches Loop: Hypothesize
More informationLecture 16: Computer Vision
CS4442/9542b: Artificial Intelligence II Prof. Olga Veksler Lecture 16: Computer Vision Motion Slides are from Steve Seitz (UW), David Jacobs (UMD) Outline Motion Estimation Motion Field Optical Flow Field
More informationLecture 16: Computer Vision
CS442/542b: Artificial ntelligence Prof. Olga Veksler Lecture 16: Computer Vision Motion Slides are from Steve Seitz (UW), David Jacobs (UMD) Outline Motion Estimation Motion Field Optical Flow Field Methods
More informationCOMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION
COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION Mr.V.SRINIVASA RAO 1 Prof.A.SATYA KALYAN 2 DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING PRASAD V POTLURI SIDDHARTHA
More informationLong-term motion estimation from images
Long-term motion estimation from images Dennis Strelow 1 and Sanjiv Singh 2 1 Google, Mountain View, CA, strelow@google.com 2 Carnegie Mellon University, Pittsburgh, PA, ssingh@cmu.edu Summary. Cameras
More informationMosaicing with Parallax using Time Warping Λ
Mosaicing with Parallax using Time Warping Λ Alex Rav-Acha Yael Shor y Shmuel Peleg School of Computer Science and Engineering The Hebrew University of Jerusalem 994 Jerusalem, Israel E-Mail: falexis,yaelshor,pelegg@cs.huji.ac.il
More informationStereo vision. Many slides adapted from Steve Seitz
Stereo vision Many slides adapted from Steve Seitz What is stereo vision? Generic problem formulation: given several images of the same object or scene, compute a representation of its 3D shape What is
More informationInvariant Features from Interest Point Groups
Invariant Features from Interest Point Groups Matthew Brown and David Lowe {mbrown lowe}@cs.ubc.ca Department of Computer Science, University of British Columbia, Vancouver, Canada. Abstract This paper
More informationDynamic Pushbroom Stereo Mosaics for Moving Target Extraction
Dynamic Pushbroom Stereo Mosaics for Moving Target Extraction Paper ID #413 Abstract Our goal is to rapidly acquire panoramic mosaicing maps with information of all 3D (moving) targets as a light aerial
More information