CSE152 Introduction to Computer Vision Assignment 3 (SP15) Instructor: Ben Ochoa Maximum Points: 85 Deadline: 11:59 p.m., Friday, 29-May-2015


Instructions:

This assignment should be solved, and written up, in groups of 3. Individual work is not allowed.

There is no physical hand-in for this assignment. Coding for this assignment should be done in MATLAB. All code developed for this assignment should be included in the appendix of the report. You may do problems on pen and paper; just scan them and include them in the report. In general, MATLAB code does not have to be efficient. Focus on clarity, correctness, and function here; we can worry about speed in another course.

Submit your assignment electronically by email to Akshat Dave [akdave@ucsd.edu] with the subject line CSE152A-Assignment-3. The email should contain one attached file named [CSE 152A HW3 <student1-id> <student2-id> <student3-id>.zip]. This zip file must contain the following two artifacts:

1. A pdf file named [CSE 152A HW3 <student1-id> <student2-id> <student3-id>.pdf] containing your writeup. Please mention all the authors' full names and student IDs in the report.
2. A folder named [CSE 152A HW3 <student1-id> <student2-id> <student3-id> code] containing all your MATLAB code files.

1 Corner Detection [20 points]

In this section, we will implement a detector that can find corner features in our images. To detect corners, we rely on the matrix C defined as:

    C = [ Σ I_x²      Σ I_x I_y ]
        [ Σ I_x I_y   Σ I_y²    ]

where each sum is taken over a w × w patch of neighboring pixels around a point p in the image. The point p is a corner if both eigenvalues of C are large.

(a) (10 points) Implement a procedure that filters an image using a 2D Gaussian kernel and computes the horizontal and vertical gradients I_x and I_y. You cannot use built-in routines for smoothing. Your code must create its own Gaussian kernel by sampling a Gaussian function. You can use built-in functions to perform convolution. The width of the kernel should be ±3σ.
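As a concrete illustration of sampling a Gaussian kernel out to ±3σ and normalizing it, here is a minimal sketch (in Python, for illustration only; the assignment itself must be done in MATLAB, and the separable outer-product construction is just one reasonable choice):

```python
import math

def gaussian_kernel_2d(sigma):
    """Sample a 2D Gaussian on an integer grid spanning +/- 3*sigma,
    then normalize so the kernel sums to 1."""
    r = int(math.ceil(3 * sigma))                 # kernel radius in pixels
    xs = range(-r, r + 1)
    g1d = [math.exp(-x * x / (2.0 * sigma * sigma)) for x in xs]
    # The 2D Gaussian is separable: outer product of the 1D samples.
    k = [[gx * gy for gx in g1d] for gy in g1d]
    s = sum(sum(row) for row in k)
    return [[v / s for v in row] for row in k]

k = gaussian_kernel_2d(1.0)
print(len(k))                                     # 7 (width 2*ceil(3*sigma) + 1)
print(abs(sum(sum(row) for row in k) - 1.0) < 1e-9)  # True
```

Normalizing the kernel to sum to 1 keeps the smoothed image at the same overall brightness as the input.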
Include images of the two components of your gradient, I_x and I_y, on the image dino01 (found in dino.mat) for σ = 1, σ = 2, and σ = 4.

(b) (10 points) Implement a procedure to detect corner points in an image. The corner detection algorithm should compute the smallest eigenvalue λ of the matrix C at each pixel location in the image and use that as its cornerness score. Run non-maximal suppression, such that a pixel location is only selected as a corner if its eigenvalue λ is greater than that of its 8 neighboring pixels. Have your corner detection procedure return the top n non-maximally

suppressed corners. Test your algorithm on dino01 with n = 50 corners, and plot the resulting detected corners using drawpoint.m. Show corner detection results and report the parameters used. An example of what your plots should look like on a different image pair is shown in Figure 1.

Figure 1: x-component of the gradient I_x (left) and the top 50 detected corner points (right) on the image house2.pgm, both with σ = 1. Your report should also include the y-component of the gradient I_y and results for different values of σ on the dinosaur images.

You can use the MATLAB function conv2.m to perform 2D convolution for computing gradient images, with the 'same' option. The patch width w for detecting corners should be the same as the width of your Gaussian kernel. You can also use conv2.m to help compute I_x², I_x I_y, and I_y² efficiently for all pixel locations in the image.

2 Epipolar Geometry Theory [15 pts]

Suppose a camera calibration gives a transformation (R, t) such that a point p in the world is mapped to the camera by p_c = Rp + t.

1. (5 points) Given calibrations of two cameras (a stereo pair) to a common external coordinate system, represented by R_1, t_1, R_2, t_2, find an expression that will map points expressed in the coordinate system of the right camera to that of the left.
2. (5 points) What is the length of the baseline of the stereo pair?
3. (5 points) Give an expression for the Essential Matrix in terms of R_1, t_1, R_2, t_2.
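Since R is a rotation matrix (so Rᵀ = R⁻¹), the camera mapping p_c = Rp + t is inverted by p = Rᵀ(p_c - t). A quick numerical sanity check of this identity (Python, for illustration only; the rotation angle, translation, and point below are made-up example values, not part of the assignment data):

```python
import math

def matvec(M, v):
    # 3x3 matrix times 3-vector
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def transpose(M):
    return [[M[j][i] for j in range(3)] for i in range(3)]

# Made-up example: rotation by 30 degrees about the z-axis, arbitrary translation.
ang = math.radians(30)
R = [[math.cos(ang), -math.sin(ang), 0.0],
     [math.sin(ang),  math.cos(ang), 0.0],
     [0.0,            0.0,           1.0]]
t = [0.5, -1.0, 2.0]
p = [1.0, 2.0, 3.0]

p_c = [u + v for u, v in zip(matvec(R, p), t)]                  # p_c = R p + t
p_back = matvec(transpose(R), [u - v for u, v in zip(p_c, t)])  # p = R^T (p_c - t)
print(all(abs(u - v) < 1e-9 for u, v in zip(p, p_back)))        # True
```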

Figure 2: reference figure for the epipolar geometry theory problem.

3 Epipolar lines [20 pts]

The geometry of two camera views is related by a transform called the fundamental matrix, F. We can solve for this matrix using the eight-point algorithm, given at least eight correspondences between the two views. In this assignment, the eight-point algorithm has already been implemented for you in fund.m. The goal in this section is to gain some familiarity with how the fundamental matrix relates the two views.

Figure 3: 15 hand-clicked points on the stereo pair house1.gif and house2.gif were used to compute the fundamental matrix F. The plot shows the locations of these 15 points in both images and their corresponding epipolar lines.

(a) (10 points) Find the fundamental matrix relating the stereo pair dino01 and dino02 with n = 15 hand-clicked correspondences. Plot the epipolar lines l_1 and l_2 in both images for at least three points in the first view, and verify that they pass approximately through the corresponding points in the second view. Include plots of the hand-clicked points and epipolar lines, and the values of the estimated fundamental matrix.

(b) (10 points) The eight-point algorithm requires only 8 points to solve for the fundamental matrix

F, whereas in part (a) you selected 15 points. Using more than 8 points makes the estimated F more stable to errors in hand-clicked locations. A consequence, though, is that clicked points won't in general lie exactly on their epipolar lines. As an estimate of the error in hand-clicked correspondences, compute the average distance between each clicked point in dino02 and its corresponding epipolar line, and include its value in your report.

You can use the built-in MATLAB function cpselect.m to select and save hand-clicked correspondences. Use the provided functions drawline.m and drawpoint.m to plot points and epipolar lines. An example of what your plots should look like on a different image pair is shown in Figure 3.

4 Sparse Stereo Correspondences [30 points]

Triangulation can be used to estimate the 3D depth of stereo image pairs if (i) we have computed the epipolar geometry, and (ii) we can establish point correspondences between the images. For this part of the assignment, we will try to find correspondences between detected corners on stereo image pairs.

(a) (10 points) We will test our correspondence algorithm on the dino stereo pair. For each image, detect n = 20 corners. For each detected corner location p in dino01, use your fundamental matrix F as computed in Problem 3 to compute its epipolar line in dino02. Consider matching p to a corner point in dino02 if the distance to the epipolar line is less than some threshold τ_e. If multiple corner points are within this threshold, choose the one with the smallest distance to the epipolar line. Display images of detected matches using drawpoint.m, and count the number of correct matches and the number of incorrect matches. Specify which value of τ_e you used.

(b) (10 points) In part (a), we chose matches based on the epipolar geometry relating the two images. In this part, we consider matching corner points based on their appearance instead.
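For the distance computations in 3(b) and 4(a): an epipolar line can be written l = F p = (a, b, c), where points (u, v) on the line satisfy au + bv + c = 0, and the perpendicular distance from a point to the line is |au + bv + c| / sqrt(a² + b²). A minimal sketch (Python, for illustration only; the example lines below are made-up, not estimated from an actual F):

```python
import math

def epipolar_line(F, p):
    """l = F p for a homogeneous point p = (u, v, 1)."""
    return [sum(F[i][j] * p[j] for j in range(3)) for i in range(3)]

def point_line_distance(l, q):
    """Perpendicular distance from image point q = (u, v) to line l = (a, b, c)."""
    a, b, c = l
    return abs(a * q[0] + b * q[1] + c) / math.hypot(a, b)

# Made-up example line u + v - 2 = 0: it passes through (1, 1).
print(point_line_distance([1.0, 1.0, -2.0], (1.0, 1.0)))  # 0.0
# Distance from (3, 5) to the line u = 0:
print(point_line_distance([1.0, 0.0, 0.0], (3.0, 5.0)))   # 3.0
```

Averaging this distance over all clicked points in the second image gives the error estimate requested in part 3(b).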
Extract a 9 × 9 patch of neighboring pixels around each corner point p in dino01. Consider matching p to a corner point in dino02 if the Normalized Sum of Squared Differences (NSSD) distance between the two patches is less than some threshold τ_a. If multiple corner points are within this threshold, choose the one with the smallest NSSD distance. As before, display images of detected matches, and report the number of correct matches and the number of incorrect matches. Specify which value of τ_a you used.

(c) (10 points) In this part, we consider matching corner points based on both their epipolar geometry and their appearance. Consider matching p to a corner point in dino02 if the distance to its epipolar line is less than τ_e and the NSSD distance between the two patches is less than τ_a. If multiple corner points are within these thresholds, choose the one with the smallest NSSD distance. As before, display images of detected matches, and report the number of correct matches and the number of incorrect matches. An example of what your plots should look like on a different image pair is shown in Figure 4.

You can discard corner points that are close to the boundary of the image, such that extracting a patch around that corner point would go outside the boundary of the image.
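The selection rule shared by parts (a)-(c) (accept the candidate with the smallest distance, but only if it falls below the threshold) can be sketched as follows (Python, for illustration only; the assignment's code must be in MATLAB):

```python
def best_match(distances, tau):
    """Return the index of the smallest distance if it is below tau, else None.
    `distances` holds one distance (epipolar or NSSD) per candidate corner
    in the second image."""
    if not distances:
        return None
    i = min(range(len(distances)), key=lambda k: distances[k])
    return i if distances[i] < tau else None

print(best_match([0.9, 0.2, 0.5], 0.4))  # 1 (candidate 1 is closest and below tau)
print(best_match([0.9, 0.8], 0.4))       # None (no candidate below the threshold)
```

For part (c), you would first drop candidates failing either threshold, then pick the survivor with the smallest NSSD distance.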

Figure 4: Predicted corner point matches on house1.pgm and house2.pgm, using just epipolar geometry. In this case, the following matches are correct: [14, 19, 5, 6, 3], and the following are incorrect: [7, 1, 2, 18, 16, 11, 3, 8, 10, 1]. The remaining 5 points were not matched.

Your report should include matching results on the dino images when using just epipolar geometry, just NSSD, and when combining epipolar geometry and NSSD.

In order to create a figure like Figure 4, you can do imshow([img1 img2]). This makes the plot treat them as one image, and then you can add the dots using drawpoint.m and line. Do not use drawline.m; that is only for drawing epipolar lines. Remember to add an offset (= the number of columns in the first image) to the points you want to plot in the second image.

The NSSD distance between two image patches P_1 and P_2 is the sum of squared differences between each pixel in the patches after normalizing each patch by its mean and variance:

    NSSD(P_1, P_2) = Σ_{i,j} (P̂_1(i,j) - P̂_2(i,j))²

where

    P̂_k(i,j) = (P_k(i,j) - µ_k) / σ_k,
    µ_k = (1/n) Σ_{i,j} P_k(i,j),
    σ_k² = (1/n) Σ_{i,j} (P_k(i,j) - µ_k)²

and n is the number of pixels in a patch.

Note that you will have to inspect the matches manually to know if they are correct. Since the corner points are extracted automatically, you have no guarantee that the lists of corners will have the same order (e.g., that corner 1 in the first image should match corner 1 in the second image). Instead, just plot them as in Figure 4, and check manually how good the matches look.
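The NSSD definition above can be implemented directly. A minimal sketch in Python (illustration only; patches are given as lists of rows, and constant patches with σ = 0 are not handled):

```python
import math

def nssd(P1, P2):
    """Normalized SSD between two equal-size patches (lists of rows).
    Each patch is normalized by its own mean and standard deviation;
    assumes non-constant patches (sigma > 0)."""
    def normalize(P):
        vals = [v for row in P for v in row]
        n = len(vals)
        mu = sum(vals) / n
        sigma = math.sqrt(sum((v - mu) ** 2 for v in vals) / n)
        return [(v - mu) / sigma for v in vals]
    a, b = normalize(P1), normalize(P2)
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Identical patterns (even under an affine brightness change) give NSSD ~ 0.
P = [[1.0, 2.0], [3.0, 4.0]]
Q = [[3.0, 5.0], [7.0, 9.0]]    # Q = 2*P + 1: same normalized patch
print(abs(nssd(P, Q)) < 1e-9)   # True
```

The normalization is what makes NSSD insensitive to overall brightness and contrast differences between the two views, which raw SSD is not.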