CAMERAS AND GRAVITY: ESTIMATING PLANAR OBJECT ORIENTATION
Zhaoyin Jia, Andrew Gallagher, Tsuhan Chen
School of Electrical and Computer Engineering, Cornell University

ABSTRACT

Photography on a mobile camera provides access to additional sensors. In this paper, we estimate the absolute orientation of a planar object with respect to the ground, which can be a valuable prior for many vision tasks. To find the planar object orientation, our novel algorithm combines information from a gravity sensor with a planar homography that matches a region of an image to a training image (e.g., of a company logo). We demonstrate our approach with an iPhone application that records the gravity direction for each captured image. We find a homography that maps the training image to the test image, and propose a novel homography decomposition to extract the rotation matrix. We believe this is the first paper to estimate absolute planar object orientation by combining inertial sensor information with vision algorithms. Experiments show that our proposed algorithm performs reliably.

Index Terms: Image processing, Image motion analysis, Image detection

1. INTRODUCTION

Many mobile phones are equipped with inertial sensors, such as accelerometers and gyroscopes, that provide relatively accurate measurements of the phone's pose, such as the orientation of the phone with respect to the ground. This information is usually used to improve user interaction, for example, showing an application window in landscape when a user rotates the phone horizontally. In this paper, we combine the mobile device's camera with its gravity sensor and show that the gravity sensor data can estimate not only the pose of the phone itself, but also the orientation of a planar object captured by the camera. One example is shown in Fig. 1: given a planar target object to detect (Fig. 1(a), the training view) and the gravity direction with respect to the camera (Fig. 1(b)), we match the object in the test image (Fig. 1(c), the test view), and estimate the orientation of the object with respect to the ground through a special homography decomposition, shown in Fig. 1(d). To our knowledge, we are the first to combine a vision algorithm (planar matching) with an inertial sensor to determine the orientation of an object.

Fig. 1. (a) The training view (frontal, no distortion) of the target. (b) The testing setup. (c) The test image containing the target. (d) The homography from the training image to the testing image. We compute the absolute orientation of the planar object, e.g., θ = 15°, indicating the angle between the target artwork and the ground plane.

The orientation of an object is a useful prior for many vision tasks. For example, even with the same image shown in Fig. 1(a), if the image is displayed vertically (θ = 90°), it is more likely to be a painting hanging on a wall; if it is horizontal (θ = 0°), it may be a picture in a book lying on a table.

Related work: Hoiem et al. [1] show that estimating surface orientation helps occlusion boundary detection, depth estimation and object recognition. Further applications of estimating the camera position, vanishing lines and the horizon are presented in [2]. This prior work proposes a learning-based approach to find the surface orientation, and roughly classifies surfaces as vertical or horizontal. Our algorithm predicts the object orientation more precisely, in degrees, and incorporates a gravity sensor with pixel data. In addition, other applications with different goals combine inertial sensors with images for matching, photo enhancement and robotics, such as [3], [4] and [5], but none combine homography matching with gravity sensors to accurately measure the absolute orientation of a planar surface.

Our work is closely related to the homography decompositions presented in [6], [7], [8] and [9]. However, our setting and task are different. These algorithms use the camera intrinsic matrices for decomposition. In our case, although the mobile phone used for testing can be calibrated, the training images containing the targets to match carry no camera information: they can be images downloaded from the Internet, as shown in Fig. 1(a). Our derivation shows that we can synthesize the camera matrix for the training view and obtain the target orientation, as long as the training image is frontal with no distortion, a usually valid assumption. We believe we are the first to combine this decomposition with a gravity sensor to estimate planar object orientation.

2. ALGORITHM

We introduce our definition of the variables in Fig. 2: we have two cameras (camera 0, which captures the training target image, and camera 1, which captures the test image) related by a planar homography. The intrinsic matrices are K_0 for the training view and K_1 for the test image; the world coordinate frame is defined by camera 0's axes, i.e., camera 0 has identity rotation and zero translation. The planar object is placed at distance d with normal direction n. The extrinsic matrix for camera 1 is [R | t]. The gravity vector g is provided in camera 1's coordinates during testing. The goal is to find the projection of the surface normal n in camera 1's coordinates, and then compute its angle with the gravity vector g.

Fig. 2. Two views related by a homography, with the gravity vector.

2.1. Two-view geometry with planar homography

For a homography H that maps points x_0 in the image of camera 0 to points x_1 in the image of camera 1, i.e., x_1 = H x_0, [6] shows that the induced planar homography H for the plane [n^T, d]^T between the two cameras is:

    H = K_1 (R - t n^T / d) K_0^{-1}.    (1)
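As a numeric sanity check of Eq. (1), the induced homography can be built directly from the two-view geometry; the sketch below uses hypothetical camera values (not the paper's data) and follows the convention that points X on the plane satisfy n^T X + d = 0:

```python
import numpy as np

def planar_homography(K0, K1, R, t, n, d):
    # Eq. (1): H = K1 (R - t n^T / d) K0^{-1}, for the plane n^T X + d = 0
    return K1 @ (R - np.outer(t, n) / d) @ np.linalg.inv(K0)

# Hypothetical intrinsics for the training camera (0) and test camera (1)
K0 = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
K1 = np.array([[700., 0., 300.], [0., 700., 250.], [0., 0., 1.]])
a = np.deg2rad(10.0)                       # small rotation about the x-axis
R = np.array([[1., 0., 0.],
              [0., np.cos(a), -np.sin(a)],
              [0., np.sin(a),  np.cos(a)]])
t = np.array([0.1, -0.2, 0.3])
n, d = np.array([0., 0., -1.]), 5.0        # plane z = 5 in camera 0's frame

H = planar_homography(K0, K1, R, t, n, d)

# A 3D point on the plane projects consistently through both cameras:
X = np.array([0.4, -0.3, 5.0])
x0 = K0 @ X               # projection in camera 0 (homogeneous)
x1 = K1 @ (R @ X + t)     # projection in camera 1 (homogeneous)
x1_from_H = H @ x0        # Eq. (1) maps x0 to x1, up to scale
```

Dehomogenizing `x1` and `x1_from_H` gives the same pixel coordinates, confirming that the plane-induced homography reproduces the two projections.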
To do this, we must find the rotation matrix R from camera 0 to camera 1, despite the challenge that camera 0 has unknown internal parameters. We define \bar{H} as:

    \bar{H} = K_1^{-1} H K_0 = R - t n^T / d.    (2)

Unfortunately, we cannot directly decompose \bar{H} into R and t to get the position of the target ([8] and [10]), because although we have the camera parameters K_1 for the testing view (camera 1), the training view's intrinsics K_0 are unknown.

2.2. Depth d and intrinsic matrix K_0

With the intrinsic matrix K_0 unknown, we make use of our assumptions about the training images: 1) they are frontal views of a planar object; 2) the camera has zero skew; 3) it has square pixels. These assumptions hold for most planar targets, such as paintings, logos, and book or CD covers found online. Then K_0 becomes:

    K_0 = [f 0 c_x; 0 f c_y; 0 0 1].    (3)

For any 3D point X that lies on the plane [n^T, d]^T, since camera 0 takes a frontal view of the planar object, X = [x, y, d, 1]^T. By definition, camera 0 has identity rotation and zero translation, so its extrinsic matrix is [I_{3x3} | 0]. In homogeneous coordinates, the 2D projection x_0 of the 3D point X onto camera 0 is:

    x_0 = K_0 [I_{3x3} | 0] [x, y, d, 1]^T = f [1 0 c_x; 0 1 c_y; 0 0 1] [x, y, d/f]^T.    (4)

We can rewrite K_0 as \bar{K}_0:

    \bar{K}_0 = [1 0 c_x; 0 1 c_y; 0 0 1],    (5)

whose third column is c = [c_x, c_y, 1]^T, and represent the depth as \bar{d}:

    \bar{d} = d / f.    (6)

For a frontal view of a planar object, the depth and the focal length are related: we get the exact same image results
if we increase the focal length of the camera and move the object farther away. With this derivation, the equation for the homography \bar{H} in (2) becomes:

    \bar{H} = K_1^{-1} H \bar{K}_0 = R - (f/d) t n^T.    (7)

2.3. Decomposing \bar{H} into R

To decompose \bar{H}, we define the vectors u and v as:

    u = [1, 0, 0]^T,  v = [0, 1, 0]^T,    (8)

and multiply them by \bar{H} in (7). For u, on the left side of (7) we have:

    \bar{H} u = K_1^{-1} H \bar{K}_0 u = K_1^{-1} H u.    (9)

On the right side, since n^T u = 0 (remember, in camera 0 the frontal view of the planar object has n = [0, 0, 1]^T), we have:

    (R - (f/d) t n^T) u = R u.    (10)

Therefore, combining (9) and (10), we have:

    K_1^{-1} H u = R u.    (11)

Following the same derivation for the vector v, we also have:

    K_1^{-1} H v = R v.    (12)

Since R is a rotation matrix and [u, v, u × v] = I_{3x3},

    R = [R u, R v, (R u) × (R v)] = [K_1^{-1} H u, K_1^{-1} H v, (K_1^{-1} H u) × (K_1^{-1} H v)].    (13)

In practice, due to noise in computing H and K_1, the resulting matrix R may not be orthogonal. We approximate R by the nearest orthogonal matrix R* in the Frobenius norm, i.e., we take the SVD of the matrix in Eq. 13, R = U Σ V^T, and set R* = U V^T. Eq. 13 shows that, under our assumptions on the training images (frontal view, no distortion), the rotation matrix R is independent of the camera center c and the focal length f of camera 0, the depth d, and the translation t. Once we retrieve the homography H between the two views and know the intrinsic parameters K_1 of the testing camera 1, we can determine the rotation matrix R from camera 0 to camera 1.

Fig. 3. Sample frontal views of the planar object images tested.

2.4. Planar object orientation θ

During testing, the gravity direction g is given in camera 1's coordinates. To compute the planar object orientation θ with respect to the ground, we compute the projection of the plane's normal vector n into camera 1's coordinates, i.e., R n, and calculate its angle to the gravity vector g. This gives the orientation θ. More formally, with R from Eq. 13, define the angle α as:

    α = arccos(R n · g).    (14)

    θ = α if 0 ≤ α ≤ π/2;  θ = π - α if π/2 < α < π.    (15)

3. EXPERIMENTS

An iPhone 4 is used for the experiments. The camera is calibrated to find K_1, and the inertial sensor data is recorded while capturing images. We assume the coordinates of the gravity vector are aligned with the camera axes, and directly use the gravity vector from the inertial sensor as g. We use several different types of planar images, all downloaded, for the experiments. We choose five famous paintings (Mona Lisa, The Girl with a Pearl Earring, Marriage of Arnolfini, The Music Lesson and Birth of Venus), a logo (Starbucks), and a CD cover (Beatles). We render each image on an iPad as the testing target, and capture the scene with the calibrated iPhone 4 camera to estimate the absolute orientation of the iPad. The ground-truth angle of the planar object is manually measured with a protractor. The homography is computed through SIFT point matching [11] and a RANSAC algorithm with the DLT [6]. Three types of experiments are performed (see Fig. 4 for the first two): 1) we keep the planar object at a fixed angle with respect to the ground, and take images from different camera positions; 2) we keep the camera at a fixed angle, and gradually rotate the planar object from 0° to 75° with respect to the ground; 3) we capture images in the real world for qualitative results.

Fig. 4. Experiment settings with (a) a fixed object orientation (θ_gt = 72.5°) and different camera positions, and (b) gradually increasing object orientation from 0° to 75° (θ_gt = 0°, 30° shown, respectively).

Fixed object, different camera positions: In this setting we keep the object at a fixed angle, and take images from different positions. One example is shown in Fig. 4(a). We take 10 images of each planar object at a given angle. For each image, we compute the homography 30 times using RANSAC, and compute the angle θ from each homography. The results are shown in Fig. 5. The dashed lines indicate the ground-truth angles, and different colors indicate different images at a specific angle. Because RANSAC produces a different homography each time, the error bar shows the standard deviation of θ. Our proposed algorithm predicts the angle of the planar object with small errors, which may be introduced by misalignment of the gravity vector axes, the calibration of K_1, and especially the quality of the homography. One example is shown in Fig. 6: a good homography gives a much smaller error in estimating the planar object's orientation.

Fig. 5. Experimental results for different object orientations.

Fixed camera, different object orientations: In another test we keep the camera roughly fixed, but gradually increase the orientation angle θ of the planar object from 0° to 75°. One example is shown in Fig. 4(b). The results are shown in Table 1. Our proposed algorithm still robustly estimates the object orientation in this case. Across these two scenarios, overall 88% of the test cases are within 5° of the ground truth, and 98% are within 10°.

Table 1. Results with one object rotating from 0° to 75°. gt is the ground-truth angle; obj1 is the Starbucks logo and obj2 is the Beatles CD cover. The table reports the mean and variance (var) of θ for each image, over homographies sampled with RANSAC. (Columns: gt, obj1-mean, obj1-var, obj2-mean, obj2-var.)

Real-world examples: We also test on real-world images of a Starbucks logo for qualitative results, shown in Fig. 7.
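The decomposition and angle computation evaluated above (Eqs. (8)-(15)) can be sketched in numpy as follows; the synthetic camera values are hypothetical and the function names are ours, not the authors' code:

```python
import numpy as np

def rotation_from_homography(H, K1):
    # Eq. (13): R ~ [K1^-1 H u, K1^-1 H v, (K1^-1 H u) x (K1^-1 H v)]
    A = np.linalg.inv(K1) @ H
    r1 = A[:, 0] / np.linalg.norm(A[:, 0])   # A u, normalized
    r2 = A[:, 1] / np.linalg.norm(A[:, 1])   # A v, normalized
    R_approx = np.column_stack([r1, r2, np.cross(r1, r2)])
    # Project to the nearest rotation matrix in the Frobenius norm via SVD
    U, _, Vt = np.linalg.svd(R_approx)
    return U @ Vt

def object_angle(R, g):
    # Eqs. (14)-(15): angle between the projected normal R n and gravity g
    n_test = R @ np.array([0., 0., 1.])      # n = [0, 0, 1] in the training view
    g = g / np.linalg.norm(g)
    alpha = np.arccos(np.clip(n_test @ g, -1.0, 1.0))
    return alpha if alpha <= np.pi / 2 else np.pi - alpha

# Synthetic check (hypothetical values): build H per Eq. (7), then recover theta.
K1 = np.array([[700., 0., 300.], [0., 700., 250.], [0., 0., 1.]])
K0bar = np.array([[1., 0., 320.], [0., 1., 240.], [0., 0., 1.]])   # Eq. (5)
b = np.deg2rad(30.0)
R_true = np.array([[1., 0., 0.],
                   [0., np.cos(b), -np.sin(b)],
                   [0., np.sin(b),  np.cos(b)]])
t, n, f_over_d = np.array([0.1, -0.2, 0.3]), np.array([0., 0., 1.]), 0.2
H = K1 @ (R_true - f_over_d * np.outer(t, n)) @ np.linalg.inv(K0bar)

R = rotation_from_homography(H, K1)
g = np.array([0., -1., 0.])                  # gravity in camera 1's frame
theta = np.degrees(object_angle(R, g))       # ~60 degrees in this setup
```

Note that, as Eq. 13 predicts, the recovered R does not depend on the values chosen for c, f/d, or t.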
Our proposed algorithm predicts the logo orientation θ accurately in different situations.

Fig. 6. Homography quality affects the orientation θ. The ground-truth orientation in (a) is θ_gt = 42.5°. A good homography, (b) θ = 43.0°, gives a better prediction than a worse one, (c) θ = 55.1°.

Fig. 7. Predicting the angle of the Starbucks logo in real-world images; θ under each figure is the prediction of our algorithm. (a) A Starbucks logo outside the cafe, θ = 87.5°; the ground truth is θ_gt = 90°. (b) A bag of coffee in hand, θ = 3.7°; θ_gt should be between 0° and 90°. (c) A menu on a table, θ = 6.6°; θ_gt = 0°.

4. CONCLUSION

This paper estimates the absolute orientation of a planar object using the gravity sensor and camera of a mobile device. We made weak assumptions about the training image, e.g., frontal view, zero skew and no distortion. During testing we estimate the rotation through a special homography decomposition, compute the projection of the planar object's normal vector, and take its angle to the gravity vector as the object orientation. Experiments showed that our proposed algorithm robustly predicts the object orientation in different scenarios. Future applications can be built on this algorithm, e.g., correcting the homography or improving object detection. Since we find the rotation matrix of the testing camera, our algorithm can also estimate the in-plane rotation of the planar object, which can be used for other applications, e.g., to predict whether the object is upside down.
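In Section 3 the homography is estimated from SIFT matches with a RANSAC loop around the DLT [6]. The core DLT step, fitting H to point correspondences, can be sketched in pure numpy (our own minimal version, without the RANSAC loop):

```python
import numpy as np

def dlt_homography(x0, x1):
    """Direct Linear Transform: fit H such that x1 ~ H x0 (up to scale)
    from N >= 4 correspondences. x0, x1: (N, 2) arrays of matched points."""
    rows = []
    for (x, y), (u, v) in zip(x0, x1):
        # Two equations per correspondence, linear in the 9 entries of H
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.array(rows, dtype=float)
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)      # null vector = smallest right singular vector
    return H / H[2, 2]            # fix the arbitrary scale

# Check on points mapped by a known homography (hypothetical values):
H_true = np.array([[1.1, 0.02, 5.0], [-0.01, 0.95, -3.0], [1e-4, 2e-4, 1.0]])
x0 = np.array([[10., 20.], [200., 40.], [50., 180.], [220., 210.], [120., 90.]])
x0_h = np.hstack([x0, np.ones((len(x0), 1))])
x1_h = (H_true @ x0_h.T).T
x1 = x1_h[:, :2] / x1_h[:, 2:3]

H_est = dlt_homography(x0, x1)
```

In practice the point coordinates are first normalized for conditioning, and RANSAC repeatedly fits H to random 4-point subsets of the SIFT matches, keeping the hypothesis with the most inliers.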
5. REFERENCES

[1] D. Hoiem, A. A. Efros, and M. Hebert, "Closing the loop in scene interpretation," in CVPR, 2008.
[2] D. Hoiem, A. A. Efros, and M. Hebert, "Putting objects in perspective," IJCV, vol. 80, 2008.
[3] D. Kurz and S. Benhimane, "Inertial sensor-aligned visual feature descriptors," in CVPR, 2011.
[4] H. Lee, E. Shechtman, J. Wang, and S. Lee, "Automatic upright adjustment of photographs," in CVPR, 2012.
[5] J. Lobo and J. Dias, "Vision and inertial sensor cooperation using gravity as a vertical reference," PAMI, vol. 25, no. 12, 2003.
[6] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision (2nd ed.), Cambridge University Press, 2006.
[7] Y. Ma, S. Soatto, J. Kosecka, and S. Sastry, An Invitation to 3-D Vision, Springer, 2003.
[8] O. D. Faugeras and F. Lustman, "Motion and structure from motion in a piecewise planar environment," International Journal of Pattern Recognition and Artificial Intelligence, 1988.
[9] Z. Zhang and A. R. Hanson, "3D reconstruction based on homography mapping," in Proc. ARPA Image Understanding Workshop, 1996.
[10] E. Malis and M. Vargas, "Deeper understanding of the homography decomposition for vision-based control," 2008.
[11] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," IJCV, 2004.
More informationHand-Eye Calibration from Image Derivatives
Hand-Eye Calibration from Image Derivatives Abstract In this paper it is shown how to perform hand-eye calibration using only the normal flow field and knowledge about the motion of the hand. The proposed
More informationBIL Computer Vision Apr 16, 2014
BIL 719 - Computer Vision Apr 16, 2014 Binocular Stereo (cont d.), Structure from Motion Aykut Erdem Dept. of Computer Engineering Hacettepe University Slide credit: S. Lazebnik Basic stereo matching algorithm
More informationPerspective Projection [2 pts]
Instructions: CSE252a Computer Vision Assignment 1 Instructor: Ben Ochoa Due: Thursday, October 23, 11:59 PM Submit your assignment electronically by email to iskwak+252a@cs.ucsd.edu with the subject line
More informationScene Modeling for a Single View
Scene Modeling for a Single View René MAGRITTE Portrait d'edward James with a lot of slides stolen from Steve Seitz and David Brogan, Breaking out of 2D now we are ready to break out of 2D And enter the
More informationShift-map Image Registration
Shift-map Image Registration Linus Svärm Petter Stranmark Centre for Mathematical Sciences, Lun University {linus,petter}@maths.lth.se Abstract Shift-map image processing is a new framework base on energy
More informationStereo and Epipolar geometry
Previously Image Primitives (feature points, lines, contours) Today: Stereo and Epipolar geometry How to match primitives between two (multiple) views) Goals: 3D reconstruction, recognition Jana Kosecka
More informationCOS429: COMPUTER VISON CAMERAS AND PROJECTIONS (2 lectures)
COS429: COMPUTER VISON CMERS ND PROJECTIONS (2 lectures) Pinhole cameras Camera with lenses Sensing nalytical Euclidean geometry The intrinsic parameters of a camera The extrinsic parameters of a camera
More informationCS201 Computer Vision Camera Geometry
CS201 Computer Vision Camera Geometry John Magee 25 November, 2014 Slides Courtesy of: Diane H. Theriault (deht@bu.edu) Question of the Day: How can we represent the relationships between cameras and the
More informationCamera Registration in a 3D City Model. Min Ding CS294-6 Final Presentation Dec 13, 2006
Camera Registration in a 3D City Model Min Ding CS294-6 Final Presentation Dec 13, 2006 Goal: Reconstruct 3D city model usable for virtual walk- and fly-throughs Virtual reality Urban planning Simulation
More informationRobust Camera Calibration from Images and Rotation Data
Robust Camera Calibration from Images and Rotation Data Jan-Michael Frahm and Reinhard Koch Institute of Computer Science and Applied Mathematics Christian Albrechts University Kiel Herman-Rodewald-Str.
More informationCamera Models and Image Formation. Srikumar Ramalingam School of Computing University of Utah
Camera Models and Image Formation Srikumar Ramalingam School of Computing University of Utah srikumar@cs.utah.edu Reference Most slides are adapted from the following notes: Some lecture notes on geometric
More informationPractical Camera Auto-Calibration Based on Object Appearance and Motion for Traffic Scene Visual Surveillance
Practical Camera Auto-Calibration Based on Object Appearance and Motion for Traffic Scene Visual Surveillance Zhaoxiang Zhang, Min Li, Kaiqi Huang and Tieniu Tan National Laboratory of Pattern Recognition,
More informationCS4670: Computer Vision
CS467: Computer Vision Noah Snavely Lecture 13: Projection, Part 2 Perspective study of a vase by Paolo Uccello Szeliski 2.1.3-2.1.6 Reading Announcements Project 2a due Friday, 8:59pm Project 2b out Friday
More informationPin Hole Cameras & Warp Functions
Pin Hole Cameras & Warp Functions Instructor - Simon Lucey 16-423 - Designing Computer Vision Apps Today Pinhole Camera. Homogenous Coordinates. Planar Warp Functions. Example of SLAM for AR Taken from:
More informationA linear algorithm for Camera Self-Calibration, Motion and Structure Recovery for Multi-Planar Scenes from Two Perspective Images
A linear algorithm for Camera Self-Calibration, Motion and Structure Recovery for Multi-Planar Scenes from Two Perspective Images Gang Xu, Jun-ichi Terai and Heung-Yeung Shum Microsoft Research China 49
More informationEuclidean Reconstruction Independent on Camera Intrinsic Parameters
Euclidean Reconstruction Independent on Camera Intrinsic Parameters Ezio MALIS I.N.R.I.A. Sophia-Antipolis, FRANCE Adrien BARTOLI INRIA Rhone-Alpes, FRANCE Abstract bundle adjustment techniques for Euclidean
More informationVisual Recognition: Image Formation
Visual Recognition: Image Formation Raquel Urtasun TTI Chicago Jan 5, 2012 Raquel Urtasun (TTI-C) Visual Recognition Jan 5, 2012 1 / 61 Today s lecture... Fundamentals of image formation You should know
More informationHW 1: Project Report (Camera Calibration)
HW 1: Project Report (Camera Calibration) ABHISHEK KUMAR (abhik@sci.utah.edu) 1 Problem The problem is to calibrate a camera for a fixed focal length using two orthogonal checkerboard planes, and to find
More informationCompositing a bird's eye view mosaic
Compositing a bird's eye view mosaic Robert Laganiere School of Information Technology and Engineering University of Ottawa Ottawa, Ont KN 6N Abstract This paper describes a method that allows the composition
More informationEfficient Stereo Image Rectification Method Using Horizontal Baseline
Efficient Stereo Image Rectification Method Using Horizontal Baseline Yun-Suk Kang and Yo-Sung Ho School of Information and Communicatitions Gwangju Institute of Science and Technology (GIST) 261 Cheomdan-gwagiro,
More informationCS6670: Computer Vision
CS6670: Computer Vision Noah Snavely Lecture 5: Projection Reading: Szeliski 2.1 Projection Reading: Szeliski 2.1 Projection Müller Lyer Illusion http://www.michaelbach.de/ot/sze_muelue/index.html Modeling
More informationIntroduction to Computer Vision
Introduction to Computer Vision Michael J. Black Nov 2009 Perspective projection and affine motion Goals Today Perspective projection 3D motion Wed Projects Friday Regularization and robust statistics
More informationCamera Calibration and 3D Reconstruction from Single Images Using Parallelepipeds
Camera Calibration and 3D Reconstruction from Single Images Using Parallelepipeds Marta Wilczkowiak Edmond Boyer Peter Sturm Movi Gravir Inria Rhône-Alpes, 655 Avenue de l Europe, 3833 Montbonnot, France
More informationFINDING OPTICAL DISPERSION OF A PRISM WITH APPLICATION OF MINIMUM DEVIATION ANGLE MEASUREMENT METHOD
Warsaw University of Technology Faculty of Physics Physics Laboratory I P Joanna Konwerska-Hrabowska 6 FINDING OPTICAL DISPERSION OF A PRISM WITH APPLICATION OF MINIMUM DEVIATION ANGLE MEASUREMENT METHOD.
More information3-D D Euclidean Space - Vectors
3-D D Euclidean Space - Vectors Rigid Body Motion and Image Formation A free vector is defined by a pair of points : Jana Kosecka http://cs.gmu.edu/~kosecka/cs682.html Coordinates of the vector : 3D Rotation
More informationA COMPREHENSIVE TOOL FOR RECOVERING 3D MODELS FROM 2D PHOTOS WITH WIDE BASELINES
A COMPREHENSIVE TOOL FOR RECOVERING 3D MODELS FROM 2D PHOTOS WITH WIDE BASELINES Yuzhu Lu Shana Smith Virtual Reality Applications Center, Human Computer Interaction Program, Iowa State University, Ames,
More informationScene Modeling for a Single View
Scene Modeling for a Single View René MAGRITTE Portrait d'edward James CS194: Image Manipulation & Computational Photography with a lot of slides stolen from Alexei Efros, UC Berkeley, Fall 2014 Steve
More informationLecture 3: Camera Calibration, DLT, SVD
Computer Vision Lecture 3 23--28 Lecture 3: Camera Calibration, DL, SVD he Inner Parameters In this section we will introduce the inner parameters of the cameras Recall from the camera equations λx = P
More informationCOMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION
COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION Mr.V.SRINIVASA RAO 1 Prof.A.SATYA KALYAN 2 DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING PRASAD V POTLURI SIDDHARTHA
More informationPlanar pattern for automatic camera calibration
Planar pattern for automatic camera calibration Beiwei Zhang Y. F. Li City University of Hong Kong Department of Manufacturing Engineering and Engineering Management Kowloon, Hong Kong Fu-Chao Wu Institute
More informationScene Modeling for a Single View
Scene Modeling for a Single View René MAGRITTE Portrait d'edward James with a lot of slides stolen from Steve Seitz and David Brogan, 15-463: Computational Photography Alexei Efros, CMU, Fall 2005 Classes
More information1/5/2014. Bedrich Benes Purdue University Dec 6 th 2013 Prague. Modeling is an open problem in CG
Berich Benes Purue University Dec 6 th 213 Prague Inverse Proceural Moeling (IPM) Motivation IPM Classification Case stuies IPM of volumetric builings IPM of stochastic trees Urban reparameterization IPM
More information