EE795: Computer Vision and Intelligent Systems


EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 FDH 204 Lecture 10 130221 http://www.ee.unlv.edu/~b1morris/ecg795/

2 Outline: Review; Canny Edge Detector; Hough Transform; Feature-Based Alignment; Image Warping; 2D Alignment Using Least Squares

3 Quantifying Performance
Confusion matrix-based metrics for binary {1,0} classification tasks; a wide range of metrics can be defined.

Confusion matrix (predicted outcome vs. actual value):
                 actual p   actual n
  predicted p      TP          FP
  predicted n      FN          TN
  total            P           N

True positives (TP): # of correct matches
False negatives (FN): # of missed matches
False positives (FP): # of incorrect matches
True negatives (TN): # of non-matches that are correctly rejected

True positive rate (sensitivity): TPR = TP/(TP+FN) = TP/P (document retrieval: recall, the fraction of relevant documents found)
False positive rate: FPR = FP/(FP+TN) = FP/N
Positive predictive value: PPV = TP/(TP+FP) (document retrieval: precision, the fraction of returned documents that are relevant)
Accuracy: ACC = (TP+TN)/(P+N)
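As a concrete illustration (my own sketch, not from the slides), these metrics can be computed directly from the raw counts; the function name and the example counts below are hypothetical.

```python
def confusion_metrics(tp, fp, fn, tn):
    """Compute common confusion-matrix metrics from raw counts."""
    p = tp + fn                # actual positives
    n = fp + tn                # actual negatives
    tpr = tp / p               # true positive rate (sensitivity / recall)
    fpr = fp / n               # false positive rate
    ppv = tp / (tp + fp)       # positive predictive value (precision)
    acc = (tp + tn) / (p + n)  # accuracy
    return {"TPR": tpr, "FPR": fpr, "PPV": ppv, "ACC": acc}

# Example with made-up counts
print(confusion_metrics(tp=85, fp=10, fn=15, tn=90))
```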

4 Receiver Operating Characteristic (ROC)
Evaluate matching performance as a function of the decision threshold: sweep all thresholds θ to map out the performance curve (TPR vs. FPR). Best performance is in the upper-left corner. The area under the curve (AUC) is a scalar ROC performance metric.
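A minimal sketch (not from the slides) of how the ROC curve and AUC can be traced by sweeping thresholds over match scores; the toy scores and labels are hypothetical.

```python
import numpy as np

def roc_curve(scores, labels):
    """Sweep all thresholds over match scores (labels: 1 = true match, 0 = non-match)
    and return arrays of (FPR, TPR) points plus the AUC (trapezoidal rule)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    p = labels.sum()                 # number of actual positives
    n = len(labels) - p              # number of actual negatives
    thresholds = np.sort(np.unique(scores))[::-1]
    tpr, fpr = [0.0], [0.0]
    for t in thresholds:
        pred = scores >= t
        tpr.append((pred & (labels == 1)).sum() / p)
        fpr.append((pred & (labels == 0)).sum() / n)
    auc = np.trapz(tpr, fpr)         # area under the ROC curve
    return np.array(fpr), np.array(tpr), auc

# Toy example with hypothetical scores
fpr, tpr, auc = roc_curve([0.9, 0.8, 0.7, 0.4, 0.3], [1, 1, 0, 1, 0])
print(auc)
```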

5 Edges
2D point features only provide a limited number of good locations for matching; edges are plentiful and carry semantic significance. Edges are detected from the gradient slope and direction:
  J(x) = ∇I(x) = (∂I/∂x, ∂I/∂y)(x)
Smooth with a Gaussian kernel before computing the gradient:
  J_σ(x) = ∇[G_σ(x) * I(x)] = [∇G_σ](x) * I(x)
  ∇G_σ(x) = (∂G_σ/∂x, ∂G_σ/∂y)(x) = (−x, −y) (1/σ³) exp(−(x² + y²)/(2σ²))
Sharper edges are obtained from the Laplacian (2nd derivative):
  S_σ(x) = ∇·J_σ(x) = [∇²G_σ](x) * I(x)
Laplacian of Gaussian (LoG) kernel:
  ∇²G_σ(x) = (1/σ³) (2 − (x² + y²)/(2σ²)) exp(−(x² + y²)/(2σ²))
which can be approximated with a difference of Gaussian (DoG) kernel.
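A small sketch (my own, not from the slides) of the Gaussian-smoothed gradient and a DoG approximation to the LoG using SciPy; the σ values and the k = 1.6 ratio are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def smoothed_gradient(image, sigma=2.0):
    """Gradient of a Gaussian-smoothed image: J_sigma = grad(G_sigma * I)."""
    jx = ndimage.gaussian_filter(image, sigma, order=(0, 1))  # derivative along x (columns)
    jy = ndimage.gaussian_filter(image, sigma, order=(1, 0))  # derivative along y (rows)
    magnitude = np.hypot(jx, jy)
    orientation = np.arctan2(jy, jx)
    return magnitude, orientation

def dog(image, sigma=2.0, k=1.6):
    """Difference-of-Gaussian approximation to the Laplacian of Gaussian."""
    return ndimage.gaussian_filter(image, k * sigma) - ndimage.gaussian_filter(image, sigma)
```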

6 Canny Edge Detection
Popular edge detection algorithm that produces thin edge lines:
1) Smooth with a Gaussian kernel.
2) Compute the gradient; determine magnitude and orientation (quantized to 45 degrees for an 8-connected neighborhood).
3) Use non-maximal suppression to get thin edges: compare each edge value to the neighboring edgels along the gradient direction.
4) Use hysteresis thresholding to prevent streaking: a high threshold t_h to detect edge pixels and a low threshold t_l to trace the edge.
(Figure: Sobel vs. Canny output on an object; see http://homepages.inf.ed.ac.uk/rbf/hipr2/canny.htm)
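For reference (my own sketch, not part of the lecture), OpenCV's Canny function follows the same steps; the file name and thresholds below are illustrative assumptions.

```python
import cv2

# Hypothetical file name; Canny expects a grayscale image.
gray = cv2.imread("building.png", cv2.IMREAD_GRAYSCALE)

# Step 1: smooth with a Gaussian kernel.
blurred = cv2.GaussianBlur(gray, (5, 5), sigmaX=1.4)

# Steps 2-4: gradient, non-maximal suppression, and hysteresis thresholding.
# threshold2 (high) detects edge pixels, threshold1 (low) traces the edge.
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)
cv2.imwrite("edges.png", edges)
```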

7 Canny Edge Detection Results (figure panels: original image; thresholded gradient of smoothed image, which gives thick lines; Marr-Hildreth algorithm; Canny algorithm, with low noise and thin lines)

8 Lines
Edges and curves make up the contours of natural objects, while the man-made world is full of straight lines. 3D lines can be used to determine vanishing points, perform camera calibration, and estimate the pose of a 3D scene.

9 Hough Transform
Lines in the real world can be broken, collinear, or occluded; we want to combine collinear line segments into a larger extended line. The Hough transform creates a parameter space for lines: every edge pixel votes for the family of lines passing through it, and potential lines are the bins (accumulator cells) with a high count. It uses global rather than local information. See hough.m and radon.m in Matlab; a library-based example follows below.
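An alternative to the Matlab functions (my own sketch, not from the slides) is OpenCV's Hough line detector; the input file, accumulator resolution, and vote threshold are illustrative assumptions.

```python
import cv2
import numpy as np

# Hypothetical input; the Hough transform operates on a binary edge image.
gray = cv2.imread("building.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(gray, 50, 150)

# Accumulator resolution: 1 pixel in rho, 1 degree in theta; keep bins with >= 120 votes.
lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=120)

# Each detected line is returned as (rho, theta) in the polar parameterization.
if lines is not None:
    for rho, theta in lines[:, 0]:
        print(f"rho = {rho:.1f}, theta = {np.degrees(theta):.1f} deg")
```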

10 Hough Transform Insight
We want to find all points that lie on a line, but direct search is expensive (e.g. take every pair of points and count the supporting edgels). Infinitely many lines y_i = a x_i + b pass through a single point (x_i, y_i): selecting any a fixes b. Reparameterizing as b = −x_i a + y_i, each image point (x_i, y_i) maps to a single line in (a, b) parameter space, and all image points on a common line intersect at one point in parameter space. Divide the parameter space into cells/bins and accumulate votes over the (a, b) values consistent with each point; cells with a high count indicate many points voting for the same line parameters (a, b).

11 Hough Transform in Practice
In practice, use a polar parameterization of the line, x cos θ + y sin θ = ρ. Why? The slope-intercept parameters (a, b) are unbounded (vertical lines have infinite slope), whereas (ρ, θ) is bounded and can be uniformly discretized. After finding bins of high count, the edge must be verified and its extent found (edges do not span the whole image). The technique can be extended to other shapes, such as circles. A from-scratch voting sketch follows below.
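A minimal from-scratch sketch of the polar-parameterized voting procedure (my own code, not the hough.m implementation); the bin counts are illustrative defaults.

```python
import numpy as np

def hough_lines(edge_image, n_theta=180, n_rho=400):
    """Minimal polar-parameterized Hough accumulator: every edge pixel votes for
    all (rho, theta) bins of lines through it (rho = x*cos(theta) + y*sin(theta))."""
    h, w = edge_image.shape
    rho_max = np.hypot(h, w)
    thetas = np.linspace(-np.pi / 2, np.pi / 2, n_theta, endpoint=False)
    rhos = np.linspace(-rho_max, rho_max, n_rho)
    accumulator = np.zeros((n_rho, n_theta), dtype=np.int64)

    ys, xs = np.nonzero(edge_image)            # edgel coordinates
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in zip(xs, ys):
        rho_vals = x * cos_t + y * sin_t       # one rho per theta bin
        rho_idx = np.searchsorted(rhos, rho_vals)
        accumulator[np.clip(rho_idx, 0, n_rho - 1), np.arange(n_theta)] += 1
    return accumulator, rhos, thetas
```

Bins of the accumulator with high counts correspond to candidate lines, which then need to be verified and trimmed to their actual extent as described above.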

12 Hough Transform Example I (figure panels: input image; grayscale image; Canny edge image; Hough space accumulator over ρ and θ in degrees; top edges)

13 Hough Transform Example II http://www.mathworks.com/help/images/analyzing-images.html

14 Feature-Based Alignment
After detecting and matching features, we may want to verify whether the matches are geometrically consistent: can the feature displacements be described by a 2D or 3D geometric transformation? This provides:
  Geometric registration: 2D/3D mapping between images
  Pose estimation: camera position with respect to a known 3D scene/object
  Intrinsic camera calibration: finding the internal parameters of the camera (e.g. focal length, radial distortion)

15 Motion Models (slides from Richard Szeliski, Image Stitching)
What happens when we take two images with a camera and try to align them? Translation? Rotation? Scale? Affine? Perspective?

16 Image Warping (Richard Szeliski, Image Stitching)
Image filtering: change the range of the image, g(x) = h(f(x)).
Image warping: change the domain of the image, g(x) = f(h(x)).
(Figure: illustrated with 1D plots of f(x) and g(x).)

17 Image Warping (Richard Szeliski, Image Stitching)
Image filtering: change the range of the image, g(x) = h(f(x)).
Image warping: change the domain of the image, g(x) = f(h(x)).
(Figure: the same operations illustrated on example images f and g.)

18 Parametric (Global) Warping (Richard Szeliski, Image Stitching)
Examples of parametric warps: translation, rotation, aspect, affine, perspective, cylindrical.

19 2D Coordinate Transformations (Richard Szeliski, Image Stitching)
  translation:  x' = x + t,  where x = (x, y)
  rotation:     x' = R x + t
  similarity:   x' = s R x + t
  affine:       x' = A x + t
  perspective:  x' ≅ H x,  where x = (x, y, 1) is a homogeneous coordinate
These all form a nested group (closed under composition and inverse).
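As a small illustration (not from the slides), these transformations can all be applied through homogeneous coordinates; the particular scale, angle, and translation below are made up.

```python
import numpy as np

def transform_points(points, matrix):
    """Apply a 3x3 transform (affine or perspective/homography) to Nx2 points
    in homogeneous coordinates, then divide out the scale."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])   # (x, y, 1)
    out = pts_h @ matrix.T
    return out[:, :2] / out[:, 2:3]                          # perspective divide

# Illustrative similarity transform: scale s, rotation theta, translation t.
s, theta, t = 0.5, np.deg2rad(30), np.array([10.0, 20.0])
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
M = np.eye(3)
M[:2, :2] = s * R
M[:2, 2] = t
print(transform_points(np.array([[0.0, 0.0], [100.0, 50.0]]), M))
```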

20 Image Warping (Richard Szeliski, Image Stitching)
Given a coordinate transform x' = h(x) and a source image f(x), how do we compute a transformed image g(x') = f(h(x))?

21 Forward Warping (Richard Szeliski, Image Stitching)
Send each pixel f(x) to its corresponding location x' = h(x) in g(x'). What if a pixel lands between two pixels?

22 Forward Warping (Richard Szeliski, Image Stitching)
Send each pixel f(x) to its corresponding location x' = h(x) in g(x'). What if a pixel lands between two pixels? Answer: add its contribution to several pixels and normalize later (splatting). See griddata.m.
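A from-scratch sketch of forward warping with bilinear splatting (my own code, not the griddata.m implementation); the transform and input image are placeholders.

```python
import numpy as np

def forward_warp(src, transform, out_shape):
    """Naive forward warp with bilinear splatting: each source pixel is sent to
    x' = h(x) and its value is spread over the four surrounding output pixels;
    a weight image is used to normalize afterwards."""
    h_out, w_out = out_shape
    accum = np.zeros(out_shape, dtype=np.float64)
    weight = np.zeros(out_shape, dtype=np.float64)

    ys, xs = np.mgrid[0:src.shape[0], 0:src.shape[1]]
    xp, yp = transform(xs.ravel(), ys.ravel())        # destination coordinates h(x)

    # Loop over pixels is slow; shown for clarity only.
    for x, y, v in zip(xp, yp, src.ravel().astype(np.float64)):
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        fx, fy = x - x0, y - y0
        for dx, dy, w in [(0, 0, (1 - fx) * (1 - fy)), (1, 0, fx * (1 - fy)),
                          (0, 1, (1 - fx) * fy),       (1, 1, fx * fy)]:
            xi, yi = x0 + dx, y0 + dy
            if 0 <= xi < w_out and 0 <= yi < h_out:
                accum[yi, xi] += w * v
                weight[yi, xi] += w
    return accum / np.maximum(weight, 1e-8)           # normalize; holes stay ~0

# Example: translate by (tx, ty) = (10.5, 3.2)
warped = forward_warp(np.random.rand(50, 60), lambda x, y: (x + 10.5, y + 3.2), (50, 60))
```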

23 Inverse Warping (Richard Szeliski, Image Stitching)
Get each pixel g(x') from its corresponding location x = h⁻¹(x') in f(x). What if the pixel comes from between two pixels?

24 Inverse Warping (Richard Szeliski, Image Stitching)
Get each pixel g(x') from its corresponding location x = h⁻¹(x') in f(x). What if the pixel comes from between two pixels? Answer: resample the color value from the interpolated (prefiltered) source image. See interp2.m.
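A minimal inverse-warping sketch (my own, analogous to interp2.m) that resamples the source with bilinear interpolation via SciPy; the shift amounts are illustrative.

```python
import numpy as np
from scipy import ndimage

def inverse_warp(src, inv_transform, out_shape, order=1):
    """Inverse warp: for every output pixel x', look up x = h^{-1}(x') in the source
    and resample with interpolation (order=1 is bilinear, order=3 is cubic)."""
    ys, xs = np.mgrid[0:out_shape[0], 0:out_shape[1]]
    src_x, src_y = inv_transform(xs.astype(np.float64), ys.astype(np.float64))
    # map_coordinates expects coordinates as (row, col) = (y, x)
    return ndimage.map_coordinates(src, [src_y, src_x], order=order, mode="constant")

# Example: output shifted by (10.5, 3.2), so the inverse mapping subtracts the shift.
out = inverse_warp(np.random.rand(50, 60), lambda x, y: (x - 10.5, y - 3.2), (50, 60))
```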

25 Interpolation (Richard Szeliski, Image Stitching)
Possible interpolation filters: nearest neighbor, bilinear, bicubic (interpolating), sinc / FIR. Good interpolation is needed to prevent jaggies and texture crawl (see demo).

26 Forward vs. Inverse Warping (Alyosha Efros)
Which type of warping is better? Usually inverse warping is preferred because it eliminates holes; however, it requires an invertible warp function, which is not always available.

27 Least Squares Alignment
Given a set of matched features {x_i, x'_i}, minimize the sum of squared residual errors
  E_LS = Σ_i ||r_i||² = Σ_i ||f(x_i; p) − x'_i||²
where f(x_i; p) is the predicted location of x_i under the transformation with parameters p. The unknowns are the parameters p: we need a model for the transformation and then estimate its parameters from the matched features.

28 Linear Least Squares Alignment
Many useful motion models have a linear relationship between the motion and the parameters p:
  Δx_i = x'_i − x_i = J(x_i) p,  where J = ∂f/∂p is the Jacobian of the transform f with respect to the motion parameters p.
The linear least squares objective
  E_LLS = Σ_i ||J(x_i) p − Δx_i||² = p^T A p − 2 p^T b + c
is a quadratic form. The minimum is found by solving the normal equations A p = b, with
  A = Σ_i J^T(x_i) J(x_i)   (the Hessian matrix)
  b = Σ_i J^T(x_i) Δx_i
which gives the LLS estimate of the motion parameters.
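A small sketch (my own) of solving the normal equations for a 2D affine motion model; the Jacobian parameterization below is an assumption for illustration and is not copied from the slide's table of Jacobians.

```python
import numpy as np

def estimate_affine_lls(x, x_prime):
    """Linear least squares estimate of an affine motion model.
    Model: delta_x = x' - x = J(x) p, with p = (t_x, t_y, a00, a01, a10, a11)
    and J(x) = [[1, 0, x, y, 0, 0],
                [0, 1, 0, 0, x, y]]  (assumed parameterization)."""
    A = np.zeros((6, 6))
    b = np.zeros(6)
    for (px, py), dx in zip(x, x_prime - x):
        J = np.array([[1.0, 0.0, px, py, 0.0, 0.0],
                      [0.0, 1.0, 0.0, 0.0, px, py]])
        A += J.T @ J          # Hessian: sum of J^T J
        b += J.T @ dx         # right-hand side: sum of J^T delta_x
    p = np.linalg.solve(A, b) # normal equations A p = b
    return p

# Toy example: points displaced by a pure translation (3, -2)
x = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
x_prime = x + np.array([3.0, -2.0])
print(estimate_affine_lls(x, x_prime))   # ~ [3, -2, 0, 0, 0, 0]
```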

29 Jacobians of 2D Transformations

30 Improving Motion Estimates
A number of techniques can improve upon linear least squares:
  Uncertainty weighting: weight the matches based on the certainty of the match (e.g. texture in the match region).
  Non-linear least squares: start from a parameter guess and iteratively improve it.
  Robust least squares: explicitly handle outliers (bad matches) by not using the L2 norm.
  RANSAC: randomly select a subset of corresponding points, compute an initial estimate of p, and count the inliers among all other correspondences; a good model has many inliers. (A sketch follows below.)
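A compact RANSAC sketch (my own; it fits an affine model directly with least squares rather than the Jacobian parameterization above). The iteration count and inlier threshold are illustrative assumptions.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2x3 affine M mapping src -> dst (x' = M [x, y, 1]^T)."""
    A = np.hstack([src, np.ones((len(src), 1))])
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M.T                                   # shape (2, 3)

def ransac_affine(src, dst, n_iters=500, inlier_thresh=3.0, rng=None):
    """Bare-bones RANSAC: repeatedly fit an affine model to a random minimal
    subset (3 correspondences), count inliers over all matches, keep the best
    model, and refit it on its inliers."""
    rng = rng or np.random.default_rng(0)
    src_h = np.hstack([src, np.ones((len(src), 1))])
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iters):
        idx = rng.choice(len(src), size=3, replace=False)
        M = fit_affine(src[idx], dst[idx])                    # minimal-sample fit
        residuals = np.linalg.norm(src_h @ M.T - dst, axis=1) # reprojection error
        inliers = residuals < inlier_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_affine(src[best_inliers], dst[best_inliers]), best_inliers
```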