Midterm Examination CS 534: Computational Photography

November 3, 2016

NAME:

Problem    Score    Max Score
1                   6
2                   8
3                   9
4                   12
5                   4
6                   13
7                   7
8                   6
9                   9
10                  6
11                  14
12                  6
Total               100

1. [6]
   (a) [3] What camera setting(s) was (were) changed to obtain the following two photographs, with identical fields of view but different depths of field? Include in your answer possible values for the setting(s) used for each photo.
   (b) [3] If the field of view can change, what is another way to increase the depth of field?

2. [8]
   (a) [3] What happens to the projected size of an object if the lens focal length is doubled?
   (b) [3] What happens to the field of view if the focal length is doubled from 50 mm to 100 mm? Give your answer both qualitatively and quantitatively, though giving an equation is enough for your quantitative answer.
   (c) [2] True or False: If you take a photo of Abe Lincoln with Bascom Hall in the background, and then take a second photo after doubling the focal length of the camera lens and moving twice as far away from Abe, the two images will be identical.
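As a worked reference for question 2(b), here is a minimal sketch of the rectilinear field-of-view formula FOV = 2 * atan(w / (2f)); the 36 mm sensor width and the function name are assumptions for illustration, since the exam does not specify a sensor size:

```python
import math

def horizontal_fov_deg(focal_length_mm, sensor_width_mm=36.0):
    """Horizontal field of view of a rectilinear lens: 2 * atan(w / (2 * f))."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

print(horizontal_fov_deg(50))    # about 39.6 degrees
print(horizontal_fov_deg(100))   # about 20.4 degrees: roughly, but not exactly, halved
```

Doubling the focal length roughly halves the field of view, with the exact value given by the arctangent relation.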

3. [9] Given a camera with aperture f-number f/4, lens focal length 50 mm, shutter speed 1/125 sec, ISO 400, and focused on an object 500 mm in front of the lens, what is
   (a) [3] the diameter of the lens?
   (b) [3] the distance of the image sensor from the lens?
   (c) [3] the height of the real object if its height in the image is 5 mm?
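A small sketch of the thin-lens bookkeeping behind question 3, using 1/f = 1/d_o + 1/d_i, aperture diameter = f / N, and magnification = d_i / d_o; the printed values are illustrative arithmetic, not the official answer key:

```python
# Thin-lens arithmetic for the setup in question 3 (illustrative only).
f_mm = 50.0     # focal length
N = 4.0         # f-number
d_o = 500.0     # object distance in front of the lens (mm)

aperture_diameter = f_mm / N                  # 12.5 mm
d_i = 1.0 / (1.0 / f_mm - 1.0 / d_o)          # image (sensor) distance: ~55.6 mm
magnification = d_i / d_o                     # ~0.111
object_height = 5.0 / magnification           # a 5 mm image height -> ~45 mm object
print(aperture_diameter, d_i, magnification, object_height)
```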

4. [12] Gaussian Filters
   (a) [3] Give a 3 x 3 filter that approximates a Gaussian function.
   (b) [3] What property of the coefficients of a discrete approximation of a Gaussian filter ensures that regions of uniform intensity are unchanged by smoothing with this filter?
   (c) [6] Given the 1 x 2 filter [0.5 0.5]:
      (i) [3] What is the result of convolving this filter with itself? Give all values that are non-zero, assuming input values of 0 outside of the two given values. Hint: the result will be a 1 x 3 array.
      (ii) [3] What is the result of convolving your result from (i) with this same 1 x 2 filter again? Give all non-zero values in the result.

5. [4] Define a 3 x 3 filter that, when convolved with a grayscale image, returns a positive value if the average of the 4 nearest neighbors' intensity values is less than the center pixel's intensity value, and returns a negative or zero value otherwise.
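The convolutions in question 4(c) can be checked numerically with numpy.convolve; this is only a verification aid, not part of the exam:

```python
import numpy as np

box = np.array([0.5, 0.5])
step1 = np.convolve(box, box)     # [0.25, 0.5, 0.25]
step2 = np.convolve(step1, box)   # [0.125, 0.375, 0.375, 0.125]
print(step1, step2)
```

Repeated convolution of the box filter with itself yields binomial coefficients, which is why small Gaussian approximations built from [1 2 1] weights are a common answer to question 4(a).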

6. [13] The seam carving algorithm for image resizing iteratively removes the seam with the least energy.
   (a) [3] How is the energy of a seam defined?
   (b) [4] Let E(I_t) be the average energy of all pixels computed from the reduced-size image I_t after t seams have been removed (not computed from the original image's energy minus all the seam pixels). Does the seam carving algorithm guarantee that E(I_{t+1}) ≥ E(I_t) for all iterations of seam carving? Briefly explain why or why not.
   (c) [6] Which of the following are possible legal vertical seams found when using dynamic programming starting from the top row to the bottom row? Note: (iv) has two seams merging, (v) has two seams splitting, and (vi) has two seams crossing.
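For reference on question 6, a minimal sketch of the dynamic-programming pass that finds one minimum-energy vertical seam; the gradient-magnitude energy used here is an assumption, since the exam leaves the definition of energy to part (a):

```python
import numpy as np

def find_vertical_seam(gray):
    """Return one column index per row forming a minimum-energy vertical seam."""
    # A simple per-pixel energy: sum of absolute horizontal and vertical gradients.
    gy, gx = np.gradient(gray.astype(float))
    energy = np.abs(gx) + np.abs(gy)

    h, w = energy.shape
    cost = energy.copy()
    for r in range(1, h):
        for c in range(w):
            lo, hi = max(c - 1, 0), min(c + 2, w)
            cost[r, c] += cost[r - 1, lo:hi].min()   # cheapest of the 3 parents above

    # Backtrack from the cheapest pixel in the bottom row.
    seam = [int(np.argmin(cost[-1]))]
    for r in range(h - 2, -1, -1):
        c = seam[-1]
        lo, hi = max(c - 1, 0), min(c + 2, w)
        seam.append(lo + int(np.argmin(cost[r, lo:hi])))
    return seam[::-1]
```

A legal vertical seam selects exactly one pixel per row, and the column index can change by at most one between adjacent rows, which is what the figures in part (c) are testing.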

7. [7]
   (a) [3] Which one of the following assumptions underlies creating an optically correct panorama of a general 3D static scene from a set of input images, using homographies (also known as planar projective transformations) to align them?
      (i) Rotation about the center of the camera's image sensor
      (ii) Rotation about the center of the camera lens
      (iii) Translation in the XY plane, assuming the Z axis points in the direction of the lens
      (iv) Translation in the XZ plane, assuming the Z axis points in the direction of the lens
   (b) [4] Instead of using blending to combine images into a panorama, an alternative approach is to use optimal seam finding (as done in the overlapping region in texture synthesis) so that only a single source image determines each output pixel's value. What is a possible noticeable artifact that might be visible in the panorama with this approach, and what characteristic of the images makes this artifact more likely to occur?

8. [6]
   (a) [3] A bear image was pasted into a swimming pool image in the left image below. A blending algorithm was applied to produce the result image on the right. Which blending algorithm was likely used: Laplacian pyramid blending or Poisson blending? Briefly explain why.
   (b) [3] What kind of visible artifacts would occur if the blending method used was feathering to combine the bear image and the swimming pool image above?
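As background for question 8(b), a minimal sketch of feathering across a horizontal overlap: output pixels in the overlap are a weighted average of the two sources, with weights ramping linearly; equally sized grayscale inputs and a single-axis overlap are simplifying assumptions:

```python
import numpy as np

def feather_blend(left, right, overlap):
    """Blend two equally sized grayscale images whose last/first `overlap`
    columns cover the same scene content, using a linear alpha ramp."""
    h, w = left.shape
    out = np.zeros((h, 2 * w - overlap))
    out[:, :w - overlap] = left[:, :w - overlap]     # left-only region
    out[:, w:] = right[:, overlap:]                  # right-only region
    alpha = np.linspace(1.0, 0.0, overlap)           # 1 = all left, 0 = all right
    out[:, w - overlap:w] = alpha * left[:, w - overlap:] + (1 - alpha) * right[:, :overlap]
    return out
```

The ramp width controls how gradual the transition between the two source images is.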

9. [9] Say we want to warp an image, I, into a new one, J, using the following homography (i.e., planar projective transformation):

      [s*x]   [ 2  1   8 ] [u]
      [s*y] = [ 1  2  10 ] [v]
      [ s ]   [ 1  0   4 ] [1]

   (a) [3] If a point in one image is given by (u, v) = (2, 2), what are the homogeneous coordinates of the corresponding point in the other image?
   (b) [3] What are the (real-valued) Cartesian coordinates of the corresponding point in the other image?
   (c) [3] In order to prevent mapping multiple pixels in image I into a single pixel in image J, should (u, v) in the above equation be a pixel in image I or in image J?

10. [6] When creating panoramas of scenes that are far away from the camera's position, a simplification is to assume the images can be aligned by a 2D affine transformation, i.e., a linear transformation from the coordinates of points in one image to the coordinates of points in a second image.
   (a) [3] What is the form of the 3 x 3 matrix, H, in this case of a 2D affine transformation?
   (b) [3] How many corresponding points between the two images are needed to estimate H?
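A minimal sketch of the homogeneous-coordinate arithmetic in question 9: multiply the 3 x 3 homography (the matrix shown above) by (u, v, 1), then divide by the third coordinate to obtain Cartesian coordinates:

```python
import numpy as np

H = np.array([[2.0, 1.0,  8.0],
              [1.0, 2.0, 10.0],
              [1.0, 0.0,  4.0]])

def apply_homography(H, u, v):
    """Map (u, v) through H; return homogeneous and Cartesian coordinates."""
    homog = H @ np.array([u, v, 1.0])       # [s*x, s*y, s]
    return homog, homog[:2] / homog[2]      # divide out the scale s

homog, cart = apply_homography(H, 2, 2)
print(homog, cart)
```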

11. [14] Feature Point Detection and Description
   (a) [8]
      (i) [4] How does the SIFT feature point detector compute a feature value at each position and scale? That is, describe what operation is used to compute each point's value.
      (ii) [4] How does the SIFT feature point detector produce a locally sparse set of feature points (i.e., multiple points that are close together in the image are not all detected as feature points) from the values computed by the operator in (i)? Include in your answer which points are compared to a given feature point.
   (b) [6] Yes or No: Is the SIFT feature point descriptor invariant to:
      (i) [2] Arbitrary 2D image rotation?
      (ii) [2] Arbitrary viewpoint change?
      (iii) [2] Arbitrary camera focal length change?

12. [6] Given a set of n points specified by their 2D coordinates in an image, we'd like to automatically determine which subset of these points is approximately collinear, i.e., can be approximated by a straight line. Recall that a line can be parameterized by two parameters, a and b, such that y = ax + b. Describe the main steps of how the RANSAC algorithm could be used to find the line best fitting the inlier data points when it is known that there are some outlier noise points (i.e., points that are not part of the line because they are farther than distance d from it), but it is not known how many or which ones they are.
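For question 12, a minimal RANSAC sketch for the y = ax + b line model; the iteration count and the two-point minimal sample are standard choices rather than something the exam dictates:

```python
import math
import random

def ransac_line(points, d, iters=1000, seed=0):
    """RANSAC fit of y = a*x + b to 2D points; returns (a, b, inlier list)."""
    rng = random.Random(seed)
    best = (None, None, [])
    for _ in range(iters):
        # 1. Sample a minimal set: two distinct points define a candidate line.
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue    # a vertical line cannot be written as y = a*x + b
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        # 2. Count inliers: points within perpendicular distance d of the line.
        inliers = [(x, y) for (x, y) in points
                   if abs(a * x - y + b) / math.sqrt(a * a + 1) <= d]
        # 3. Keep the hypothesis with the most inliers.
        if len(inliers) > len(best[2]):
            best = (a, b, inliers)
    return best
```

The number of iterations is usually chosen so that, under an assumed outlier fraction, at least one all-inlier sample is drawn with high probability, and a final least-squares refit to the winning inlier set is common.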