CIS 580, Machine Perception, Spring 2016 Homework 2 Due: 11:59AM

Instructions. Submit your answers in PDF form to Canvas. This is an individual assignment.

1 Recover camera orientation

By observing the vanishing points of lines or the vanishing line of a plane, we can estimate the camera's orientation. In this problem, we have two pictures of the Levine building, shown in Figure 1 and Figure 2. The world coordinate system is right-handed, with its origin at the door of the building; the vector from the door to S 34th St defines the z-axis, and the vector from the door to the sky defines the y-axis. The camera intrinsic parameter K is given by

K = ,

where all numbers are in pixels.

1.1 Single vanishing point

1. Compute the z-vanishing point using the points given in Figure 1.
2. Compute the rotation angles, Pan α and Tilt β (defined in the lecture notes), from the z-vanishing point.

1.2 Two vanishing points

1. Compute the x- and y-vanishing points in Figure 2.
2. Compute the rotation angles, Pan α, Tilt β and Yaw γ (defined in the lecture notes), from the x- and y-vanishing points.

(A reference sketch of the vanishing-point computation appears after the figure captions below.)

Figure 1: Camera orientation from a single vanishing point. We measured four points in the image belonging to two lines parallel to the z-axis. The locations of the points are specified in the figure (best viewed in color).

Figure 2: Camera orientation from the x- and y-vanishing points. We measured four points in the image belonging to two lines parallel to the y-axis, marked in red, and four points belonging to two lines parallel to the x-axis, marked in green. The locations of the points are specified in the figure (best viewed in color).
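As a reference for Problem 1, the following is a minimal MATLAB sketch of the cross-product construction of a vanishing point and of recovering pan and tilt from it. The point coordinates and the intrinsic matrix are placeholders (the real values come from K above and from Figure 1), and the pan/tilt parameterization is an assumed convention; match it to the one defined in the lecture notes.

% Minimal sketch for Problem 1.1 (placeholder values, not the assignment's data).
K  = [800 0 320; 0 800 240; 0 0 1];        % placeholder intrinsic matrix, in pixels
p1 = [120; 420; 1];  p2 = [160; 100; 1];   % two points on the first line parallel to the z-axis
p3 = [520; 430; 1];  p4 = [470;  90; 1];   % two points on the second line parallel to the z-axis

l1 = cross(p1, p2);                        % homogeneous line through p1 and p2
l2 = cross(p3, p4);                        % homogeneous line through p3 and p4
vz = cross(l1, l2);                        % z-vanishing point = intersection of the two lines
vz = vz / vz(3);                           % normalize to pixel coordinates

% The z-vanishing point is the image of the world z-direction, vz ~ K * R * [0;0;1],
% so the third column of the extrinsic rotation is r3 = K \ vz (up to scale and sign).
r3 = K \ vz;
r3 = r3 / norm(r3);

% Assumed pan/tilt convention R = Ry(alpha) * Rx(beta), for which
% r3 = [cos(beta)*sin(alpha); -sin(beta); cos(beta)*cos(alpha)]:
beta  = asin(-r3(2));                      % tilt
alpha = atan2(r3(1), r3(3));               % pan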

2 Homography transformation

A homography describes the geometric transformation between two planes. In this question, we will verify this transformation using a cell phone camera. Define the world coordinate system as in Figure 3. Recall that in HW1 we learned how to locate the camera optical center using converging lines, as shown in Figure 3 and Figure 4. We first place our cell phone (camera) vertically on the paper and adjust its position so that the radiating lines on the paper appear parallel to each other in the image. We call this image plane A. We then tilt the phone (camera) forward about the x-axis by 45°, creating image plane B. We will calculate the equations of the radiating lines on image plane B. The calibration matrix for our camera is given by

K = ,

where all numbers are in pixels.

Figure 3: Homography transformation.

Figure 4: We marked four points on two radiating lines on the paper and measured their positions in the world coordinate system (left). We measured the coordinates of their correspondences in image A (right).

1. Let x_A = K λ H_1 X be the homography projection of points from the paper plane onto image plane A, where x_A and X are the homogeneous coordinates of points on the image plane and on the paper plane, respectively. We marked four points on two radiating lines on the paper plane and measured their coordinates X. On image plane A, we measured the coordinates x_A of these points, as shown in Figure 4. From these four point correspondences, we computed the homography matrix H_1 to be

H_1 = .

The two lines in image plane A are parallel and intersect at a point at infinity. Write down this point at infinity in homogeneous coordinates. Use the homography H_1 and K to project the point back onto the paper plane, and obtain its exact 3D coordinates.
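For reference, the following is a minimal MATLAB sketch of how a homography such as H_1 can be estimated from four point correspondences with a direct linear transform (DLT). The coordinates below are placeholders, not the measurements from Figure 4.

% DLT sketch: estimate a 3x3 homography from four point correspondences.
% X holds points on the paper plane, x holds the corresponding image points.
% The numbers are placeholders standing in for the measurements in Figure 4.
X = [  0   0; 100   0; 100 200;   0 200];   % paper-plane coordinates
x = [320 400; 420 400; 420 180; 320 180];   % image-plane coordinates, in pixels

A = zeros(8, 9);
for i = 1:4
    Xh = [X(i, :) 1];                       % homogeneous paper-plane point
    u  = x(i, 1);
    v  = x(i, 2);
    A(2*i-1, :) = [Xh, zeros(1, 3), -u * Xh];
    A(2*i,   :) = [zeros(1, 3), Xh, -v * Xh];
end
[~, ~, V] = svd(A);                         % the null vector of A stacks the entries of H
H1 = reshape(V(:, end), 3, 3)';             % 3x3 homography, defined up to scale
H1 = H1 / H1(3, 3);                         % fix the scale (assumes H1(3,3) is nonzero)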

2. Tilt the camera forward about the x-axis by 45°. Write down the camera rotation matrix R associated with this transformation. Assume the camera rotates about its optical center. Note that R maps the previous camera coordinates to the current camera coordinates, P_cam_after = R P_cam_before, where P_cam_before and P_cam_after are the 3D point coordinates in the camera coordinate system before and after the tilt, respectively.

3. Compute the homography transformation H that maps points from image plane A to image plane B. Hint: the projected 2D point on image plane B is x_B = K R λ H_1 X.

4. For the two radiating lines measured in Figure 4, compute their homogeneous line representations on image plane B. Hint: lines map by the inverse transpose of the point homography, l' = H^(-T) l.

5. Compute the intersection of these two radiating lines on image plane B. Are they converging or diverging?
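As a reference for parts 2-5, here is a minimal MATLAB sketch of the chain x_B = K R inv(K) x_A and of the line and intersection computations. The intrinsic matrix and the two line vectors are placeholders (the real ones follow from K and H_1 above), and the sign chosen for the forward tilt is an assumed convention to be checked against the lecture notes.

% Minimal sketch for Problem 2, parts 2-5 (placeholder values).
K = [1500 0 960; 0 1500 540; 0 0 1];        % placeholder intrinsic matrix, in pixels

% Forward tilt of 45 degrees about the camera x-axis. R maps camera-A point
% coordinates to camera-B point coordinates; the sign of theta is an assumed
% convention for "forward" and should be checked against the lecture notes.
theta = -45 * pi / 180;
R = [1 0 0; 0 cos(theta) -sin(theta); 0 sin(theta) cos(theta)];

% Since x_A ~ K * (lambda*H1*X) and x_B ~ K * R * (lambda*H1*X), the homography
% from image plane A to image plane B is H = K * R * inv(K).
H = K * R / K;

% Two radiating lines in image A in homogeneous form l = [a; b; c], meaning
% a*u + b*v + c = 0 (placeholders; in the assignment they follow from H1).
lA1 = [ 0.02; -1; 150];
lA2 = [-0.02; -1; 250];

% Lines map by the inverse transpose of the point homography.
lB1 = inv(H)' * lA1;
lB2 = inv(H)' * lA2;

% Intersection of the two lines in image B, as a homogeneous cross product.
xB = cross(lB1, lB2);
if abs(xB(3)) > eps
    xB = xB / xB(3);    % finite intersection: the lines converge in image B
end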

3 Estimate the height of objects

We can estimate the height of any object on the ground from three measurements in the image: 1) the horizon line, 2) the vanishing point in Z (perpendicular to the ground plane), and 3) a known object height on the ground. In this question, we will revisit the problem of estimating object heights, and solve it using the cross ratio when the image plane is not perpendicular to the ground plane.

Figure 5: Single-view metrology: estimating the height of objects using a known reference object.

1. Take a picture of the Levine building. Include an object, or a friend, with a known height in the picture. Make sure the bottom and top of the object (or your friend) are in the field of view, and that the image plane is NOT perpendicular to the ground plane. In other words, the vertical vanishing point should NOT be at infinity.
2. Compute two vanishing points by intersecting parallel lines on the ground plane or on the building facade.
3. Compute and draw the horizon (a vanishing line) in the image.
4. Compute the vanishing point along the Z-axis using vertical lines on the facade.
5. Compute the height of the front door of the Levine building, in mm, using the cross ratio.

4 Camera Rotation

Figure 6: Camera rotation. (a) Two meanings of a rotation matrix. (b) Rotation combination.

Recall that the camera projection equation is defined as

x = K [R t] X, (1)

where R and t are the camera extrinsic parameters, and K is the camera intrinsic parameter. In this problem, we will familiarize ourselves with the concept of the rotation matrix R. Mathematically, the rotation matrix R is used as follows (3D example):

[x_b, y_b, z_b]^T = R [x_a, y_a, z_a]^T.

This equation has two geometric meanings for the same rotation action, as illustrated in Figure 6(a):

- Rotation of point a, (x_a, y_a, z_a), to point b, (x_b, y_b, z_b), in the same coordinate system, shown in Figure 6(a), left;
- Rotation of coordinate system b to coordinate system a, transferring the coordinates of a point p from system a, (x_a, y_a, z_a), to its coordinates in system b, (x_b, y_b, z_b), as shown in Figure 6(a), right.

The camera extrinsic parameter R represents the coordinate transformation from the world coordinate system to the camera coordinate system. Geometrically, R corresponds to the rotation action that moves the world coordinate system onto the camera coordinate system. This is the inverse of the rotation action experienced by the camera: if we rotate the camera by R_C, the camera extrinsic rotation is R = inv(R_C) = R_C^T.

Another property of rotations is that rotation actions can be composed in sequence (shown in Figure 6(b)): if a rotation a can be separated into two rotations, first rotation b and then rotation c, we have

R_a = R_c R_b.

Through HW1 problem 4 on the Dolly Zoom, we learned how the projected point position changes with the camera focal length and camera position. We will extend this example to a more general one that includes camera rotation. We will use the same synthetic scene as in HW1. Figure 7(a) illustrates the top view of the simple synthetic scene and the camera placement. There are three objects in the scene, denoted A (green cube), B (triangular pyramid) and C (blue cube).
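Before the programming problems, here is a small numerical MATLAB illustration of the two facts above (composition of rotations, and the extrinsic rotation being the inverse of the camera rotation); the angles are arbitrary.

% Rotation about the y-axis by an angle a (in radians).
Ry = @(a) [cos(a) 0 sin(a); 0 1 0; -sin(a) 0 cos(a)];

% Composition: rotating first by b and then by c is the single rotation Rc*Rb.
Rb = Ry(10 * pi / 180);
Rc = Ry(25 * pi / 180);
Ra = Rc * Rb;                 % equals Ry(35 * pi / 180)

% Extrinsic rotation from a camera rotation: if the camera itself is rotated
% by R_C, the extrinsic rotation mapping world to camera coordinates is the
% inverse of R_C, which for a rotation matrix is its transpose.
R_C = Ry(30 * pi / 180);
R_extrinsic = R_C';           % same as inv(R_C)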

Figure 7: Top view of the synthetic scene. (a) Initial configuration, (b) rotating the world, (c) rotating the camera, (d) rotating the object.

We will use the following settings:

- The image size is , in square pixels, and the image center is aligned with the optical center ray.
- The image plane is perpendicular to the optical center ray.
- For the first frame, the image plane is parallel to the xy plane. The horizontal direction is the x-axis and the vertical direction is the y-axis.
- For the first frame, the camera center, denoted O_c, is located at the origin.
- For the first frame, the camera focal length is f_0 = 400 (in pixel units).

1. Constructing the intrinsic K. Given the 3D positions of all the visible vertices, re-render the Dolly Zoom video (similar to HW1 4.4, but shorter). There are two functions that need to be completed:

[ K ] = intrinsic_para( f, alpha, principal_point, s ): constructs the camera intrinsic matrix.
  f: double, focal length
  alpha: 1x2 vector, pixel scale
  principal_point: 1x2 vector, principal point position
  s: double, slant (skew) factor
  K: 3x3 matrix, the intrinsic matrix

[ p2d ] = project( K, p3d ): computes the image position of each vertex given the camera intrinsic matrix.
  Input:
  K: 3x3 matrix, the intrinsic matrix
  p3d: n-by-3, 3D vertex positions in the world coordinate system
  Output:
  p2d: n-by-2, each row is a vertex image position, in pixel units

Complete the functions and use generate_video_1.m to render the video. In this question, you should re-use your compute_f.m function from HW1. The 3D position (X, Y, Z) of each visible vertex in Figure 8 is given in the file data.mat, which contains the matrices points_A (n-by-3), points_B and points_C.

Figure 8: Camera image rendered for the synthetic scene: the first frame, with pos = 0 and f = 400.
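A possible way to fill in these two stubs is sketched below (each function goes in its own .m file). It assumes that alpha holds per-axis pixel scales multiplying the focal length, that s is the skew entry of K, and that for the first frame the world and camera coordinate systems coincide; check these assumptions against the K defined in the lecture notes.

function [ K ] = intrinsic_para( f, alpha, principal_point, s )
% Sketch of the intrinsic matrix (assumed layout: per-axis scales and skew s).
K = [ f * alpha(1),  s,             principal_point(1);
      0,             f * alpha(2),  principal_point(2);
      0,             0,             1 ];
end

function [ p2d ] = project( K, p3d )
% Sketch of the projection: p3d is n-by-3 and, for the first frame, is assumed
% to already be expressed in the camera coordinate system (camera at the origin).
ph  = K * p3d';                                        % 3-by-n homogeneous image points
p2d = [ph(1, :) ./ ph(3, :); ph(2, :) ./ ph(3, :)]';   % n-by-2 pixel positions
end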

2. Rotating objects about the world origin. Reset the camera to its initial position (keep the focal length at 400 pixels) and render a video of the 3D world (all objects) rotating about the y-axis, as shown in Figure 7(b). We first rotate the objects to the left by N degrees about the y-axis, and then apply a sequence of incremental rotations to the right about the y-axis by M degrees. The initial rotation of N degrees to the left is given as R_start, and the incremental rotation of M degrees is given as R_delta; both are stored in input_R.mat. The total number of frames is 31. There is one function that needs to be completed:

[ p3d_new ] = rotate_world( frame, p3d ): rotates the 3D points about the y-axis.
  frame: frame number
  p3d: n-by-3, 3D vertex positions in the world coordinate system
  p3d_new: n-by-3, 3D vertex positions after rotation

Complete the function and use generate_video_2.m to render the video.

3. Rotating the camera. Reset the camera to its initial position (keep the focal length at 400 pixels) and render a video of the camera rotating about the y-axis, as shown in Figure 7(c). We use the same rotation action: first rotate the camera to the left by N degrees about the y-axis, and then apply a sequence of incremental rotations to the right about the y-axis by M degrees. Hint: you will need to convert the camera rotation into the extrinsic parameters R and t first. There are two functions that need to be completed:

[ p3d_c ] = world2camera( R, t, p3d ): transforms points from the 3D world into the camera coordinate system.
  R: 3x3, camera extrinsic rotation
  t: 1x3, camera extrinsic translation
  p3d: n-by-3, 3D vertex positions in the world coordinate system
  p3d_c: n-by-3, 3D vertex positions in the camera coordinate system

[ R, t ] = extrinsic_para( frame ): computes the camera extrinsic parameters for a specific frame.
  frame: frame number
  R: 3x3, camera extrinsic rotation
  t: 1x3, camera extrinsic translation

Complete the functions and run generate_video_3.m to render the video.
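Possible sketches for the three stubs above are given below (one per .m file). They assume that input_R.mat contains variables named R_start and R_delta, that the cumulative rotation for a frame is R_delta^(frame-1) * R_start, and that the camera center stays at the world origin while the camera rotates; adjust these assumptions to match the provided data and the lecture-note conventions.

function [ p3d_new ] = rotate_world( frame, p3d )
% Sketch: cumulative rotation for this frame applied to every vertex about the
% world origin (assumes input_R.mat holds R_start and R_delta).
data = load('input_R.mat');
R = data.R_delta^(frame - 1) * data.R_start;   % frame 1 uses R_start only
p3d_new = (R * p3d')';
end

function [ p3d_c ] = world2camera( R, t, p3d )
% Sketch: world-to-camera transform P_cam = R * P_world + t (t is 1-by-3 here).
n = size(p3d, 1);
p3d_c = (R * p3d' + repmat(t', 1, n))';
end

function [ R, t ] = extrinsic_para( frame )
% Sketch: the camera follows the same R_start / R_delta schedule, so the
% extrinsic rotation is the inverse (transpose) of the camera rotation; the
% camera center stays at the world origin, so t is zero.
data  = load('input_R.mat');
R_cam = data.R_delta^(frame - 1) * data.R_start;
R = R_cam';
t = [0 0 0];
end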

4. Spinning object A about itself. Reset the camera to its initial position (keep the focal length at 400 pixels) and render a video of object A rotating about the axis formed by A_3 and A_4, as shown in Figure 7(d). We use the same rotation action: first rotate A to the left by N degrees about A_3A_4, and then apply a sequence of incremental rotations to the right about A_3A_4 by M degrees. There is one function that needs to be completed:

[ p3d_new ] = rotate_object( frame, p3d ): rotates the 3D points about a specific line.
  frame: frame number
  p3d: n-by-3, 3D vertex positions in the world coordinate system
  p3d_new: n-by-3, 3D vertex positions after rotation

Complete the function and use generate_video_4.m to render the video. Hint: first separate the rotation about A_3A_4 into a translation and a rotation in the world coordinate system.
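A possible sketch of rotate_object following the hint is shown below. It assumes that the A_3A_4 axis is parallel to the world y-axis (so the same R_start / R_delta matrices apply once the axis is translated to the origin) and uses a placeholder point on the axis; in the assignment, the axis endpoints come from points_A in data.mat.

function [ p3d_new ] = rotate_object( frame, p3d )
% Sketch: translate so the A3-A4 axis passes through the origin, rotate, and
% translate back. A3 is a placeholder point on the axis; the axis is assumed
% to be parallel to the world y-axis so the y-axis rotations apply directly.
A3 = [2 0 12];                                   % placeholder point on the rotation axis
data = load('input_R.mat');                      % assumed to contain R_start and R_delta
R = data.R_delta^(frame - 1) * data.R_start;     % cumulative rotation for this frame

n = size(p3d, 1);
p_centered = p3d - repmat(A3, n, 1);             % move the axis to the origin
p3d_new = (R * p_centered')' + repmat(A3, n, 1); % rotate, then move back
end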
