Sashi Kumar Penta, COMP Final Project Report, Department of Computer Science, UNC at Chapel Hill, 13 Dec 2006
Computer vision framework for adding CG Simulations

Figure 1: (i) Top row, images from left to right: left and right views of the stereo pair, the disparity map using only the data cost, and the disparity map using the graph-cut algorithm. (ii) Bottom row, images from left to right: the original image with foreground, the foreground separated from the background, a snapshot of the 3D model constructed from the disparity map, and a snapshot of a CG model in the MSR dance sequence.

Abstract

Rendering synthetic simulations into real world scenes is an important application of both Computer Graphics (CG) and Computer Vision (CV). CV techniques have been used to reconstruct 3D models of real world scenes. As part of my course project, I've implemented computer vision modules for calibrating the camera, computing stereo correspondences, computing depth-maps, and computing 3D models, together with an interactive graphics tool for adding simulations into the 3D model obtained from the stereo video.

CR Categories: I.3.3 [Computer Graphics]: Picture/Image Generation - Display Algorithms; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism - Animation; I.4.8 [Image Processing and Computer Vision]: Scene Analysis - Stereo and Time-varying Imagery

Keywords: Computer vision, Dynamic Scenes, Computer Animation, Image-based rendering

1 Introduction

Augmentation of real world scenes with synthetic objects is a problem of great significance for Computer Graphics (CG) as well as other research disciplines. Many special effects involve combining CG elements with real footage. Synthetic objects rendered onto real video have also been used to enhance visualization for engineering as well as medical purposes. Most research in this area has focused on using either static or rigid moving objects as CG elements to be inserted into the scene.
Relevance to the Robotics course: Robots need to know where they are located in the world, and they need to understand the 3D structure of the world they move in. Camera calibration techniques are required to find the position and orientation of the camera. It is quite common in robotics to place known patterns in the environment so that robots can find where they are located; this technique has direct application in Simultaneous Localization and Mapping (SLAM). Infrared sensors can be used to find how far a robot is from its surroundings, which amounts to understanding the 3D structure of the world around it, but infrared sensors are noisy and only give the distance in the frontal direction. It is quite easy instead to mount two cameras on the robot and use stereo to understand the 3D world around it. We used Zhang's technique [Zhang 2000a] for calibrating the camera, and a graph-cut based method [Boykov et al. 2001] for computing the stereo correspondences. Finally, we applied triangulation to the stereo correspondences to recover the 3D structure of the world, and we used this reconstructed 3D world as a height field for simple CG simulations.
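The stereo-range idea above can be sketched with the standard rectified-stereo relation Z = f·B/d. This is a toy illustration, not code from the project; the focal length, baseline, and disparities below are made-up numbers.

```python
import numpy as np

# Depth from disparity on a rectified stereo rig: Z = f * B / d.
# f, B and the disparities are illustrative values only.
f = 500.0   # focal length in pixels (assumed)
B = 0.12    # baseline between the two cameras in meters (assumed)

d = np.array([10.0, 20.0, 40.0])  # disparities of three matched pixels
Z = f * B / d                     # depths in meters

print(Z)  # [6.  3.  1.5]
```

Larger disparity means a closer point, which is why a stereo pair yields a dense range map where a single infrared sensor gives only one frontal distance.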
2 Background

Computer vision techniques can be used to reconstruct 3D models of real world scenes, whereas computer graphics can be used to render these reconstructed 3D models as well as synthetic models. Constructing 3D models has been the primary focus of computer vision techniques. Camera calibration [Tsai 1987a; Zhang 2000a] is the first step in constructing 3D. Calibration involves computing the camera matrix, which tells where a 3D point projects in the image. Stereo correspondence [Scharstein et al. 2002; Boykov et al. 2001] can be used to recover 3D models from 2D stereo images. Full geometry recovered through stereo techniques has been used to produce high quality novel views [Zitnick et al. 2004; Buehler et al. 2001].

State of the art: These techniques mainly focus on modeling and rendering real scenes; adding CG simulations into such high quality novel views is relatively unexplored and would be very interesting. In this project, I built a framework towards achieving this goal.

Figure 2: Stereo rig used for capturing stereo images.

3 Modules

This project has been divided into the following modules.

Capturing: explains the imaging process involved in obtaining stereo videos.
Camera calibration: obtains the camera parameters (both interior and exterior) through calibration.
Stereo correspondence: computes correspondences from the stereo images/videos.
Depth-map: obtains depth-maps from the correspondences and the calibration matrices obtained in the previous modules.
Foreground: explains the process of separating the foreground from the background.
Tool for interaction: allows the user to do all kinds of interactions: selecting regions of interest, changing viewpoints, and selecting positions at which to place the CG into the video.

Contributions and collaborations: This project is part of my research project, Fluids in Video, in collaboration with Vivek Kwatra and Philippos Mordohai. We worked closely to get all the modules working, although I spent most of my time on the Stereo correspondence, Foreground, and Tool for interaction modules.

4 Capturing

We used a standard stereo rig such as the one shown in figure 2 for capturing our stereo videos; one such frame is shown in figure 3. In this section we first explain the notation used in the rest of the report, and then the mathematics behind the imaging process.

Figure 3: Stereo pair.

Notation: A point in 2D space is represented by a pair of coordinates (x, y) in R^2, and in homogeneous coordinates as a 3-vector. An arbitrary homogeneous vector representing a point has the form x = (x1, x2, x3)^T and represents the point (x1/x3, x2/x3) in R^2. Similarly, a point in 3D space is represented by a triplet (X, Y, Z) in R^3, and in homogeneous coordinates as a 4-vector. Points in the 2D image plane are written using boldface lower case letters x, y, z, etc., and in the Cartesian coordinate system as x̃, ỹ, z̃, etc. 3D points are written using boldface capital letters X, Y, Z, etc., and in the Cartesian coordinate system as X̃, Ỹ, Z̃, etc. Matrices are represented by boldface capital letters M, P, V, K, R, etc.

Imaging: A camera maps a 3D world point to a 2D point in the image. If the world and image points are represented by homogeneous vectors, the mapping between their homogeneous coordinates can be expressed as

    x = M X    (1)

where X represents the world point by the homogeneous vector (X, Y, Z, 1)^T, x represents the image point as a homogeneous 3-vector, and M is a 3 x 4 homogeneous camera projection matrix.
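The notation above can be exercised with a minimal numeric sketch. The camera and the point are invented for illustration: a homogeneous image point x = (x1, x2, x3)^T stands for (x1/x3, x2/x3), and Equation 1 maps a homogeneous world point through a 3x4 matrix M.

```python
import numpy as np

# Equation 1, x = M X, with the simplest possible camera [I | 0]
# (an assumed toy camera, not the calibrated rig from the report).
M = np.hstack([np.eye(3), np.zeros((3, 1))])  # 3x4 projection matrix
X = np.array([2.0, 4.0, 2.0, 1.0])            # world point (2, 4, 2), homogeneous

x = M @ X                        # homogeneous image point (x1, x2, x3)
u, v = x[0] / x[2], x[1] / x[2]  # dehomogenize to (x1/x3, x2/x3)

print(u, v)  # 1.0 2.0
```

Note that scaling x by any nonzero factor leaves (u, v) unchanged, which is exactly the homogeneous-coordinate equivalence described in the notation paragraph.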
The camera matrix M can be written as M = K [R t], where

    K = [ α  s  p_x ]
        [ 0  β  p_y ]
        [ 0  0   1  ]

is the 3 x 3 intrinsic matrix consisting of the internal parameters α, β, s, p_x and p_y, R is a 3 x 3 rotation matrix, and t is a 3 x 1 translation vector. The augmented matrix [R t] is the extrinsic matrix. M has 11 degrees of freedom, of which 5 come from K, 3 from R (rotations about the three axes θ_x, θ_y and θ_z respectively) and 3 from t (t_x, t_y and t_z). R and t represent the rotation and translation of the camera with respect to the world coordinate system, as shown in figure 4.
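A sketch of this factorization with invented parameters (the intrinsics, rotation angle, and translation below are illustrative, not the calibrated values from the report):

```python
import numpy as np

# Intrinsics K: 5 parameters (alpha, beta, skew s, principal point px, py).
alpha, beta, s, px, py = 800.0, 800.0, 0.0, 320.0, 240.0
K = np.array([[alpha, s,    px],
              [0.0,   beta, py],
              [0.0,   0.0,  1.0]])

# Extrinsics: a rotation about the Z axis by theta_z, and a translation t.
theta_z = np.pi / 6
R = np.array([[np.cos(theta_z), -np.sin(theta_z), 0.0],
              [np.sin(theta_z),  np.cos(theta_z), 0.0],
              [0.0,              0.0,             1.0]])
t = np.array([[0.1], [0.0], [2.0]])

# Camera matrix M = K [R | t], a 3x4 homogeneous projection matrix.
M = K @ np.hstack([R, t])
print(M.shape)  # (3, 4)
```

Counting parameters, 5 from K plus 3 rotation angles plus 3 translation components gives the 11 degrees of freedom stated above.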
Figure 4: The rotation and translation between the world and camera coordinate frames (world origin O, camera center C, camera axes X_cam, Y_cam, Z_cam, related by R, t).

5 Camera calibration

Camera calibration is the process of estimating the matrix M of Equation 1. Many methods [Tsai 1987b; Zhang 2000b; Wang and Tsai 1990; Strunz 1992; Maas 1999] have been proposed in the literature to calibrate cameras. Some methods [Mikhail and Mulawa 1985; Kruck 1984; Forket 1996; Heikkila 1990] use known geometric shapes in the scene, such as straight lines, circles, and known angles and lengths, to calibrate cameras. Most methods [Tsai 1987b; Holt and Netravali 1991; Sutherland 1974] estimate the components of the M matrix using a linear or non-linear optimization technique, given sufficient matching points x in the image and X in the 3D world; the M matrix is then factored into K, R and t. We used the popular method proposed by Zhang [Zhang 2000b], a flexible camera calibration technique that only requires the camera to observe a planar pattern shown at a few (at least two) different orientations. A stereo pair containing the pattern is shown in figure 5.

Figure 5: Stereo pair with the calibration pattern, a checkered board.

6 Stereo correspondence

Stereo correspondence is the problem of finding corresponding points in the left and right views. A particular point in the left image can match any point in the right image; matching criteria could be closeness in average brightness or special features of that pixel, such as an edge or corner. Given that the matching point can be anywhere in the image, the search space is huge, which makes the problem very difficult. Epipolar geometry tells us that the match for a point in the left view lies on a line (called the epipolar line) in the right image. In an arbitrary configuration this line can be at any angle, so we rectified the images such that the epipolar lines are horizontal. One such example is shown in figure 6.

Figure 6: Rectified images of the stereo pair. Every point on the white line of the left image has its corresponding point on the same line in the right image. Observe the matched blue points (corresponding points) on the nose and on the telephone wire, marked in the left and right images respectively.

We used the graph-cut based method [Boykov et al. 2001] to compute the stereo correspondences. Graph-cut methods combine two costs, a data cost and a smoothness cost:

    E(f) = E_smooth(f) + E_data(f)

where the smoothness cost measures the extent to which the labeling f is not piecewise smooth, and the data cost measures the disagreement between f and the observed data. The left-most image in figure 7 is based solely on the data cost. When we applied the graph-cut algorithm using the above formulation we got staircasing, as shown in the middle image of figure 7. We then applied Gaussian filtering and got the final good result shown in the last image of figure 7.

Figure 7: Images from left to right: disparity map using only the data cost, disparity map using the graph-cut algorithm, and disparity map after applying the Gaussian filter.

7 Depth map and 3D model from depth-map

Several methods exist for depth recovery from images, including depth from stereo, shape from focus, shape from defocus, structure from motion, shape from silhouettes, and shape from shading. Triangulation is a standard method to find a 3D point from corresponding 2D points in two images. A point a in one image corresponds to a ray in space passing through the camera center. The camera center C1 is a point on this ray, and M1^+ a (where M1^+ denotes the pseudo-inverse of M1) is the point at infinity in that direction. The 3D points on this ray can be written as C1 + z1 M1^+ a, where z1 is the depth of the point. The ambiguity in determining the 3D point on the ray can be resolved using another camera.
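The ray parametrization above can be checked numerically. In this sketch M is an invented toy camera (not one of the calibrated cameras from the report), M^+ is its pseudo-inverse, and C is the homogeneous camera center satisfying M C = 0; every point C + z M^+ a along the ray re-projects to the same image point a.

```python
import numpy as np

# An assumed toy camera whose center is at (0, 0, -1):
M = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])
C = np.array([0.0, 0.0, -1.0, 1.0])   # homogeneous camera center, M @ C = 0

a = np.array([2.0, 3.0, 1.0])         # a homogeneous image point
M_pinv = np.linalg.pinv(M)            # 4x3 pseudo-inverse, so M @ M_pinv = I

for z in (0.5, 1.0, 4.0):             # sample several depths along the ray
    X = C + z * (M_pinv @ a)          # a 3D point on the back-projected ray
    x = M @ X                         # re-project it
    assert np.allclose(x / x[2], a)   # every depth maps to the same pixel
```

Because M(C + z M^+ a) = z a for any depth z, one camera alone cannot pin down z; this is exactly the ambiguity the second camera resolves.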
If the point b in the second image corresponds to the same 3D point, another 3D ray, with points given by C2 + z2 M2^+ b, contains that point. These rays intersect at the point that projects to a and b, as shown in figure 8. If z1 or z2 is known, one can compute the 3D point from C1 + z1 M1^+ a or C2 + z2 M2^+ b. Otherwise, the z_i can be computed from the following equation:

    C1 + z1 M1^+ a = C2 + z2 M2^+ b    (2)

Equation 2 gives 3 equations in the 2 unknowns z1 and z2 when the calibration parameters C1, C2, M1 and M2 are known. One 3D model obtained this way is shown in figure 9 at an arbitrary angle. As you can see, the reconstruction shown in figure 9 is not very accurate; we are still investigating how to fix the problems with the reconstruction.

Figure 8: Triangulation to find the 3D point X (3D world point) from image points a and b and camera centers C1 and C2.

Figure 9: Textured 3D model shown at an arbitrary angle.

8 Foreground

When we proceeded to use the graph-cut method, we found that it is a very expensive operation and that performing it for every frame would not make our system interactive. So we decided to apply the graph-cut algorithm only to the foreground, i.e. the regions that differ from the background image. We used a simple HSV color based segmentation of the difference image; the results are shown in figure 10.

Figure 10: From left to right: the background image, the composite (both background and foreground), and the extracted foreground image.

Noise in the foreground is due to varying lighting conditions in the input images.

9 Interactivity and CG in videos

I've used my interactive tool for adding simple CG models into the model obtained using the depth-map module described in section 7. Using this tool, one can change the viewpoint (using the mouse and keyboard) and select a position on the screen (using the mouse) at which to place a 3D CG model in the scene. I made use of OpenGL for this: I read the depth buffer at the point where the user wants to place the CG model and thereby obtain the 3D location in the scene. Three such frames are shown in figure 11.

Figure 11: Three frames of the animation.

10 Conclusions and future work

We have developed a computer vision framework that, together with the interactive tool, lets us add CG simulations into dynamic real scenes. In the future, we plan to implement crowd simulations in real videos and to change the viewpoint of the augmented scene using some kind of image-based rendering technique.

Crowd simulations in real videos: We plan to implement crowd simulations in real videos in the following fashion: render the top view of the reconstructed world, facing the ground in the scene, using OpenGL; read the Z-buffer, which acts as the height field for the crowd simulation; define goal positions for the crowd in the field; and implement some kind of Continuum Crowds [Treuille et al. 2006] on top of these fields. The challenges involved in this kind of simulation are (i) the dynamic height fields obtained by the above process and (ii) the inaccuracy of the height maps. We are planning to develop simulation and rendering algorithms that work in this kind of noisy, dynamic environment.

References

BOYKOV, Y., VEKSLER, O., AND ZABIH, R. 2001. Fast approximate energy minimization via graph cuts. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) 23, 11.

BUEHLER, C., BOSSE, M., MCMILLAN, L., GORTLER, S. J., AND COHEN, M. F. 2001. Unstructured lumigraph rendering. In Proc. ACM SIGGRAPH.

FORKET, G. 1996. Image orientation exclusively based on free-form tie curves. International Archives of Photogrammetry and Remote Sensing 31, B3.

HEIKKILA, J. 1990. Update calibration of a photogrammetric station. International Archives of Photogrammetry and Remote Sensing 28, 5/2.

HOLT, R. J., AND NETRAVALI, A. N. 1991. Camera calibration problem: some new results. Computer Vision, Graphics, and Image Processing 54, 3.

KRUCK, E. 1984. A program for bundle adjustment for engineering applications - possibilities, facilities and practical results. International Archives of Photogrammetry and Remote Sensing 25, A5.
MAAS, H. G. 1999. Image sequence based automatic multi-camera system calibration techniques. ISPRS Journal of Photogrammetry and Remote Sensing 54, 6.

MIKHAIL, E. M., AND MULAWA, D. C. 1985. Geometric form fitting in industrial metrology using computer-assisted theodolites. ASP/ACSM Fall Meeting.

SCHARSTEIN, D., SZELISKI, R., AND ZABIH, R. 2002. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. International Journal of Computer Vision 47, 1.

STRUNZ, G. 1992. Image orientation and quality assessment in feature based photogrammetry. Robust Computer Vision.

SUTHERLAND, I. E. 1974. Three-dimensional data input by tablet. Proceedings of the IEEE 62, 4.

TREUILLE, A., COOPER, S., AND POPOVIĆ, Z. 2006. Continuum crowds. ACM Trans. Graph. 25, 3.

TSAI, R. 1987. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE Journal of Robotics and Automation 3, 4.

TSAI, R. Y. 1987. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE Journal of Robotics and Automation 4.

WANG, L., AND TSAI. 1990. Computing camera parameters using vanishing-line information from a rectangular parallelepiped. Machine Vision and Applications 3.

ZHANG, Z. 2000. A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence 22, 11.

ZHANG, Z. 2000. A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence 22, 11.

ZITNICK, C. L., KANG, S. B., UYTTENDAELE, M., WINDER, S., AND SZELISKI, R. 2004. High-quality video view interpolation using a layered representation. In Proc. ACM SIGGRAPH.
More informationLecture 14: Basic Multi-View Geometry
Lecture 14: Basic Multi-View Geometry Stereo If I needed to find out how far point is away from me, I could use triangulation and two views scene point image plane optical center (Graphic from Khurram
More informationComputer Vision cmput 428/615
Computer Vision cmput 428/615 Basic 2D and 3D geometry and Camera models Martin Jagersand The equation of projection Intuitively: How do we develop a consistent mathematical framework for projection calculations?
More informationAssignment 2: Stereo and 3D Reconstruction from Disparity
CS 6320, 3D Computer Vision Spring 2013, Prof. Guido Gerig Assignment 2: Stereo and 3D Reconstruction from Disparity Out: Mon Feb-11-2013 Due: Mon Feb-25-2013, midnight (theoretical and practical parts,
More informationUsing Shape Priors to Regularize Intermediate Views in Wide-Baseline Image-Based Rendering
Using Shape Priors to Regularize Intermediate Views in Wide-Baseline Image-Based Rendering Cédric Verleysen¹, T. Maugey², P. Frossard², C. De Vleeschouwer¹ ¹ ICTEAM institute, UCL (Belgium) ; ² LTS4 lab,
More informationComments on Consistent Depth Maps Recovery from a Video Sequence
Comments on Consistent Depth Maps Recovery from a Video Sequence N.P. van der Aa D.S. Grootendorst B.F. Böggemann R.T. Tan Technical Report UU-CS-2011-014 May 2011 Department of Information and Computing
More informationComputer Vision I - Algorithms and Applications: Multi-View 3D reconstruction
Computer Vision I - Algorithms and Applications: Multi-View 3D reconstruction Carsten Rother 09/12/2013 Computer Vision I: Multi-View 3D reconstruction Roadmap this lecture Computer Vision I: Multi-View
More informationMulti-View Stereo for Static and Dynamic Scenes
Multi-View Stereo for Static and Dynamic Scenes Wolfgang Burgard Jan 6, 2010 Main references Yasutaka Furukawa and Jean Ponce, Accurate, Dense and Robust Multi-View Stereopsis, 2007 C.L. Zitnick, S.B.
More informationHow to Compute the Pose of an Object without a Direct View?
How to Compute the Pose of an Object without a Direct View? Peter Sturm and Thomas Bonfort INRIA Rhône-Alpes, 38330 Montbonnot St Martin, France {Peter.Sturm, Thomas.Bonfort}@inrialpes.fr Abstract. We
More informationEE795: Computer Vision and Intelligent Systems
EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 FDH 204 Lecture 12 130228 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Review Panoramas, Mosaics, Stitching Two View Geometry
More information5LSH0 Advanced Topics Video & Analysis
1 Multiview 3D video / Outline 2 Advanced Topics Multimedia Video (5LSH0), Module 02 3D Geometry, 3D Multiview Video Coding & Rendering Peter H.N. de With, Sveta Zinger & Y. Morvan ( p.h.n.de.with@tue.nl
More informationVOLUMETRIC MODEL REFINEMENT BY SHELL CARVING
VOLUMETRIC MODEL REFINEMENT BY SHELL CARVING Y. Kuzu a, O. Sinram b a Yıldız Technical University, Department of Geodesy and Photogrammetry Engineering 34349 Beşiktaş Istanbul, Turkey - kuzu@yildiz.edu.tr
More informationEpipolar Geometry Prof. D. Stricker. With slides from A. Zisserman, S. Lazebnik, Seitz
Epipolar Geometry Prof. D. Stricker With slides from A. Zisserman, S. Lazebnik, Seitz 1 Outline 1. Short introduction: points and lines 2. Two views geometry: Epipolar geometry Relation point/line in two
More informationMultiple View Geometry
Multiple View Geometry Martin Quinn with a lot of slides stolen from Steve Seitz and Jianbo Shi 15-463: Computational Photography Alexei Efros, CMU, Fall 2007 Our Goal The Plenoptic Function P(θ,φ,λ,t,V
More informationPattern Feature Detection for Camera Calibration Using Circular Sample
Pattern Feature Detection for Camera Calibration Using Circular Sample Dong-Won Shin and Yo-Sung Ho (&) Gwangju Institute of Science and Technology (GIST), 13 Cheomdan-gwagiro, Buk-gu, Gwangju 500-71,
More informationStructure from motion
Structure from motion Structure from motion Given a set of corresponding points in two or more images, compute the camera parameters and the 3D point coordinates?? R 1,t 1 R 2,t 2 R 3,t 3 Camera 1 Camera
More informationChaplin, Modern Times, 1936
Chaplin, Modern Times, 1936 [A Bucket of Water and a Glass Matte: Special Effects in Modern Times; bonus feature on The Criterion Collection set] Multi-view geometry problems Structure: Given projections
More informationC / 35. C18 Computer Vision. David Murray. dwm/courses/4cv.
C18 2015 1 / 35 C18 Computer Vision David Murray david.murray@eng.ox.ac.uk www.robots.ox.ac.uk/ dwm/courses/4cv Michaelmas 2015 C18 2015 2 / 35 Computer Vision: This time... 1. Introduction; imaging geometry;
More informationStereo CSE 576. Ali Farhadi. Several slides from Larry Zitnick and Steve Seitz
Stereo CSE 576 Ali Farhadi Several slides from Larry Zitnick and Steve Seitz Why do we perceive depth? What do humans use as depth cues? Motion Convergence When watching an object close to us, our eyes
More informationCV: 3D to 2D mathematics. Perspective transformation; camera calibration; stereo computation; and more
CV: 3D to 2D mathematics Perspective transformation; camera calibration; stereo computation; and more Roadmap of topics n Review perspective transformation n Camera calibration n Stereo methods n Structured
More informationCALIBRATION BETWEEN DEPTH AND COLOR SENSORS FOR COMMODITY DEPTH CAMERAS. Cha Zhang and Zhengyou Zhang
CALIBRATION BETWEEN DEPTH AND COLOR SENSORS FOR COMMODITY DEPTH CAMERAS Cha Zhang and Zhengyou Zhang Communication and Collaboration Systems Group, Microsoft Research {chazhang, zhang}@microsoft.com ABSTRACT
More informationBinocular stereo. Given a calibrated binocular stereo pair, fuse it to produce a depth image. Where does the depth information come from?
Binocular Stereo Binocular stereo Given a calibrated binocular stereo pair, fuse it to produce a depth image Where does the depth information come from? Binocular stereo Given a calibrated binocular stereo
More informationImage Based Reconstruction II
Image Based Reconstruction II Qixing Huang Feb. 2 th 2017 Slide Credit: Yasutaka Furukawa Image-Based Geometry Reconstruction Pipeline Last Lecture: Multi-View SFM Multi-View SFM This Lecture: Multi-View
More informationAn Overview of Matchmoving using Structure from Motion Methods
An Overview of Matchmoving using Structure from Motion Methods Kamyar Haji Allahverdi Pour Department of Computer Engineering Sharif University of Technology Tehran, Iran Email: allahverdi@ce.sharif.edu
More informationAn idea which can be used once is a trick. If it can be used more than once it becomes a method
An idea which can be used once is a trick. If it can be used more than once it becomes a method - George Polya and Gabor Szego University of Texas at Arlington Rigid Body Transformations & Generalized
More informationLecture 9: Epipolar Geometry
Lecture 9: Epipolar Geometry Professor Fei Fei Li Stanford Vision Lab 1 What we will learn today? Why is stereo useful? Epipolar constraints Essential and fundamental matrix Estimating F (Problem Set 2
More informationOverview. Related Work Tensor Voting in 2-D Tensor Voting in 3-D Tensor Voting in N-D Application to Vision Problems Stereo Visual Motion
Overview Related Work Tensor Voting in 2-D Tensor Voting in 3-D Tensor Voting in N-D Application to Vision Problems Stereo Visual Motion Binary-Space-Partitioned Images 3-D Surface Extraction from Medical
More informationStructure from motion
Structure from motion Structure from motion Given a set of corresponding points in two or more images, compute the camera parameters and the 3D point coordinates?? R 1,t 1 R 2,t R 2 3,t 3 Camera 1 Camera
More informationEECS 442 Computer vision. Stereo systems. Stereo vision Rectification Correspondence problem Active stereo vision systems
EECS 442 Computer vision Stereo systems Stereo vision Rectification Correspondence problem Active stereo vision systems Reading: [HZ] Chapter: 11 [FP] Chapter: 11 Stereo vision P p p O 1 O 2 Goal: estimate
More informationLecture 9 & 10: Stereo Vision
Lecture 9 & 10: Stereo Vision Professor Fei- Fei Li Stanford Vision Lab 1 What we will learn today? IntroducEon to stereo vision Epipolar geometry: a gentle intro Parallel images Image receficaeon Solving
More informationCamera Calibration with a Simulated Three Dimensional Calibration Object
Czech Pattern Recognition Workshop, Tomáš Svoboda (Ed.) Peršlák, Czech Republic, February 4, Czech Pattern Recognition Society Camera Calibration with a Simulated Three Dimensional Calibration Object Hynek
More informationHand-Eye Calibration from Image Derivatives
Hand-Eye Calibration from Image Derivatives Abstract In this paper it is shown how to perform hand-eye calibration using only the normal flow field and knowledge about the motion of the hand. The proposed
More information3D Geometry and Camera Calibration
3D Geometry and Camera Calibration 3D Coordinate Systems Right-handed vs. left-handed x x y z z y 2D Coordinate Systems 3D Geometry Basics y axis up vs. y axis down Origin at center vs. corner Will often
More informationCamera Model and Calibration. Lecture-12
Camera Model and Calibration Lecture-12 Camera Calibration Determine extrinsic and intrinsic parameters of camera Extrinsic 3D location and orientation of camera Intrinsic Focal length The size of the
More informationMeasurement and Precision Analysis of Exterior Orientation Element Based on Landmark Point Auxiliary Orientation
2016 rd International Conference on Engineering Technology and Application (ICETA 2016) ISBN: 978-1-60595-8-0 Measurement and Precision Analysis of Exterior Orientation Element Based on Landmark Point
More informationDD2423 Image Analysis and Computer Vision IMAGE FORMATION. Computational Vision and Active Perception School of Computer Science and Communication
DD2423 Image Analysis and Computer Vision IMAGE FORMATION Mårten Björkman Computational Vision and Active Perception School of Computer Science and Communication November 8, 2013 1 Image formation Goal:
More informationCorrespondence and Stereopsis. Original notes by W. Correa. Figures from [Forsyth & Ponce] and [Trucco & Verri]
Correspondence and Stereopsis Original notes by W. Correa. Figures from [Forsyth & Ponce] and [Trucco & Verri] Introduction Disparity: Informally: difference between two pictures Allows us to gain a strong
More informationModel-Based Stereo. Chapter Motivation. The modeling system described in Chapter 5 allows the user to create a basic model of a
96 Chapter 7 Model-Based Stereo 7.1 Motivation The modeling system described in Chapter 5 allows the user to create a basic model of a scene, but in general the scene will have additional geometric detail
More informationTopics and things to know about them:
Practice Final CMSC 427 Distributed Tuesday, December 11, 2007 Review Session, Monday, December 17, 5:00pm, 4424 AV Williams Final: 10:30 AM Wednesday, December 19, 2007 General Guidelines: The final will
More informationComputer Vision Projective Geometry and Calibration. Pinhole cameras
Computer Vision Projective Geometry and Calibration Professor Hager http://www.cs.jhu.edu/~hager Jason Corso http://www.cs.jhu.edu/~jcorso. Pinhole cameras Abstract camera model - box with a small hole
More informationMassachusetts Institute of Technology Department of Computer Science and Electrical Engineering 6.801/6.866 Machine Vision QUIZ II
Massachusetts Institute of Technology Department of Computer Science and Electrical Engineering 6.801/6.866 Machine Vision QUIZ II Handed out: 001 Nov. 30th Due on: 001 Dec. 10th Problem 1: (a (b Interior
More information