Image Filtering, Warping and Sampling


Image Filtering, Warping and Sampling Connelly Barnes CS 4810 University of Virginia Acknowledgement: slides by Jason Lawrence, Misha Kazhdan, Allison Klein, Tom Funkhouser, Adam Finkelstein and David Dobkin

Outline Image Processing Image Warping Image Sampling 2

Image Processing What about the case when the modification that we would like to make to a pixel depends on the pixels around it? Blurring Edge Detection Etc. 3

Multi-Pixel Operations In the simplest case, we define a mask of weights which tells us how the values at adjacent pixels should be combined to generate the new value. 4

Blurring To blur across pixels, define a mask: Whose value is largest at the center pixel Whose entries sum to one Filter = Original Blur 5

Blurring Pixel(x,y): red = 36 green = 36 blue = 0 Filter = [1 2 1; 2 4 2; 1 2 1] / 16 6

Blurring Pixel(x,y): red = 36 green = 36 blue = 0 Filter = [1 2 1; 2 4 2; 1 2 1] / 16 Pixel(x,y).red and its red neighbors (rows y-1..y+1, columns x-1..x+1): [36 109 146; 32 36 109; 32 36 73] 7

Blurring New value for Pixel(x,y).red = (36 * 1/16) + (109 * 2/16) + (146 * 1/16) + (32 * 2/16) + (36 * 4/16) + (109 * 2/16) + (32 * 1/16) + (36 * 2/16) + (73 * 1/16) 8

Blurring New value for Pixel(x,y).red = 62.69 9

Blurring New value for Pixel(x,y).red = 63 Original Blur 10

Blurring Repeat for each pixel and each color channel Note 1: Keep source and destination separate to avoid drift Note 2: For boundary pixels, not all neighbors are used, and you need to normalize the mask so that the sum of the values is correct 11
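The procedure described on these slides can be sketched in C++ as below. This is a minimal illustration, not code from the lecture: the Image alias (a grayscale channel stored as a vector of rows), the function name applyMask, and the renormalize flag are all assumptions made for the example.

    #include <vector>

    using Image = std::vector<std::vector<float>>;  // assumed grayscale channel, indexed [y][x]

    // Apply a square (2r+1) x (2r+1) mask. Source and destination are kept separate (Note 1).
    // When renormalize is true (for blur masks whose entries sum to one), boundary pixels are
    // divided by the sum of the weights that were actually used (Note 2).
    Image applyMask(const Image& src, const std::vector<std::vector<float>>& mask,
                    bool renormalize = true) {
        int h = int(src.size()), w = int(src[0].size());
        int r = int(mask.size()) / 2;                  // mask is assumed square with odd size
        Image dst(h, std::vector<float>(w, 0.0f));
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                float sum = 0.0f, used = 0.0f;
                for (int dy = -r; dy <= r; dy++) {
                    for (int dx = -r; dx <= r; dx++) {
                        int yy = y + dy, xx = x + dx;
                        if (yy < 0 || yy >= h || xx < 0 || xx >= w) continue;  // neighbor off-image
                        sum  += src[yy][xx] * mask[dy + r][dx + r];
                        used += mask[dy + r][dx + r];
                    }
                }
                dst[y][x] = (renormalize && used != 0.0f) ? sum / used : sum;
            }
        }
        return dst;
    }

For a color image the same loop would be repeated for each channel, as the slide says.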

Blurring In general, the mask can have arbitrary size: We can express a smaller mask as a bigger one by padding with zeros. Original Blur 12

Blurring More non-zero entries give rise to a wider blur Original Narrow Blur Wide Blur 13

Blurring A general way of defining the entries of an n x n blurring mask is to use the values of a Gaussian: Gaussian[i, j] = e^(-(x² + y²) / (2σ²)) where σ equals the mask radius (n/2 for an n x n mask), x is i's horizontal distance from the center pixel, and y is j's vertical distance from the center pixel. Don't forget to normalize! 14

Bivariate Gaussian Function Gaussian[i, j] = e^(-(x² + y²) / (2σ²)) aka Normal Distribution 15
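As a concrete sketch of the formula above, the following builds and normalizes an n x n Gaussian mask; the function name and the vector-of-vectors layout are illustrative choices, matching the applyMask sketch earlier.

    #include <cmath>
    #include <vector>

    // Build an n x n blurring mask from the Gaussian above, with sigma = n/2 (the mask radius)
    // and the entries normalized so that they sum to one. n is assumed to be odd.
    std::vector<std::vector<float>> makeGaussianMask(int n) {
        float sigma = n / 2.0f;
        int r = n / 2;
        std::vector<std::vector<float>> mask(n, std::vector<float>(n));
        float total = 0.0f;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                float x = float(j - r);   // horizontal distance from the center pixel
                float y = float(i - r);   // vertical distance from the center pixel
                mask[i][j] = std::exp(-(x * x + y * y) / (2.0f * sigma * sigma));
                total += mask[i][j];
            }
        }
        for (auto& row : mask)            // don't forget to normalize!
            for (float& m : row) m /= total;
        return mask;
    }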

Edge Detection To find the edges in an image, define a mask: Whose value is largest at the center pixel Whose entries sum to zero. Edge pixels are those whose values are larger (or smaller) than those of their neighbors. Filter = [-1 -1 -1; -1 8 -1; -1 -1 -1] Original Highlighted Edges 16

Edge Detection Pixel(x,y): red = 36 green = 36 blue = 0 Filter = [-1 -1 -1; -1 8 -1; -1 -1 -1] 17

Edge Detection Pixel(x,y): red = 36 green = 36 blue = 0 Filter = [-1 -1 -1; -1 8 -1; -1 -1 -1] Pixel(x,y).red and its red neighbors (rows y-1..y+1, columns x-1..x+1): [36 109 146; 32 36 109; 32 36 73] 18

Edge Detection New value for Pixel(x,y).red = (36 * -1) + (109 * -1) + (146 * -1) + (32 * -1) + (36 * 8) + (109 * -1) + (32 * -1) + (36 * -1) + (73 * -1) 19

Edge Detection New value for Pixel(x,y).red = -285 20

Edge Detection New value for Pixel(x,y).red = 0 (the negative result is clamped to the valid pixel range) 21

Edge Detection New value for Pixel(x,y).red = 0 22
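For comparison, the edge mask from the example above can be fed to the hypothetical applyMask sketch from the blurring section, with renormalization turned off (the entries sum to zero) and the result clamped to the valid pixel range, which is how -285 becomes 0 in the example.

    // Hypothetical usage of the earlier applyMask sketch with the 3x3 edge mask.
    std::vector<std::vector<float>> edgeMask = {
        { -1, -1, -1 },
        { -1,  8, -1 },
        { -1, -1, -1 },
    };
    Image edges = applyMask(src, edgeMask, /*renormalize=*/false);
    for (auto& row : edges)
        for (float& v : row)
            v = (v < 0.0f) ? 0.0f : (v > 255.0f ? 255.0f : v);  // clamp, e.g. -285 -> 0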

Edge Mask is a Derivative Filter ∂G/∂x ∝ -(x/σ²) e^(-(x² + y²) / (2σ²)) 23

Outline Image Processing Image Warping Image Sampling 24

Image Warping Move pixels of image Mapping Resampling Warp Source image Destination image 25

Overview Mapping Forward Reverse Resampling Point sampling Triangle filter Gaussian filter 26

Mapping Transformation: describe the destination location (x,y) for every source location (u,v) v y u x 27

Example Mappings Scale by factor: x = factor * u y = factor * v y v Scale 0.8 u x 28

Example Mappings Rotate by θ degrees: x = u cos θ - v sin θ y = u sin θ + v cos θ y v Rotate 30 deg u x 29

Example Mappings Shear in X by factor: x = u + factor * v y = v v y u Shear X 1.3 x Shear in Y by factor: x = u y = v + factor * u v y Shear Y u 1.3 x 30
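The scale, rotate, and shear mappings above can be written directly as small (u,v) -> (x,y) functions; this is an illustrative sketch, with the Point struct and function names chosen for the example (the slide specifies the angle in degrees, the code below takes radians).

    #include <cmath>

    struct Point { float x, y; };  // illustrative helper type

    Point scaleMap(float u, float v, float factor) {
        return { factor * u, factor * v };
    }

    Point rotateMap(float u, float v, float theta) {     // theta in radians
        return { u * std::cos(theta) - v * std::sin(theta),
                 u * std::sin(theta) + v * std::cos(theta) };
    }

    Point shearXMap(float u, float v, float factor) {
        return { u + factor * v, v };
    }

    Point shearYMap(float u, float v, float factor) {
        return { u, v + factor * u };
    }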

Other Mappings Any function of u and v: x = fx(u,v) y = fy(u,v) Swirl Fish-eye Rain 31

Image Warping Attempt 1 (Forward Mapping) for (int u = 0; u < umax; u++) for (int v = 0; v < vmax; v++) float x = fx(u,v); float y = fy(u,v); dst(x,y) = src(u,v); (u,v) f (x,y) Source image Destination image 32

Forward Mapping y v u Rotate -30 x 33

Forward Mapping Many source pixels can map to same destination pixel y v u Rotate -30 x 34

Forward Mapping Some destination pixels may not be covered y v u Rotate -30 x 35

Image Warping Attempt 2 (Reverse Mapping) for (int x = 0; x < xmax; x++) for (int y = 0; y < ymax; y++) float u = fx⁻¹(x,y); float v = fy⁻¹(x,y); dst(x,y) = src(u,v); (u,v) f (x,y) Source image Destination image 36

Reverse Mapping GOOD! Iterate over destination image Must resample source May oversample, but much simpler! v y u Rotate -30 x 37

Resampling Issue: (u,v) does not usually have integer coordinates (u,v) (x,y) Source image Destination image 38

Overview Mapping Forward Reverse Resampling Nearest Point Sampling Bilinear Sampling Gaussian Sampling 39

Nearest Point Sampling int iu = floor(u+0.5); int iv = floor(v+0.5); dst[x,y] = src[iu,iv]; v y u Rotate -30 Scale 0.5 x 40

Bilinear Sampling Bilinearly interpolate four closest pixels a = linear interpolation of src(x1,y1) and src(x2,y1) b = linear interpolation of src(x1,y2) and src(x2,y2) dst(x,y) = linear interpolation of a and b x1 = floor(x); x2 = x1 + 1; y1 = floor(y); y2 = y1 + 1; dx = x - x1; dy = y - y1; a = src(x1,y1)*(1-dx) + src(x2,y1)*dx; b = src(x1,y2)*(1-dx) + src(x2,y2)*dx; dst(x,y) = a*(1-dy) + b*dy; 41

Bilinear Sampling Bilinearly interpolate four closest pixels a = linear interpolation of src(x1,y1) and src(x2,y1) b = linear interpolation of src(x1,y2) and src(x2,y2) dst(x,y) = linear interpolation of a and b Make sure to test that the pixels (x1,y1), (x2,y1), (x1,y2), and (x2,y2) are within the image. x1 = floor(x); x2 = x1 + 1; y1 = floor(y); y2 = y1 + 1; dx = x - x1; dy = y - y1; a = src(x1,y1)*(1-dx) + src(x2,y1)*dx; b = src(x1,y2)*(1-dx) + src(x2,y2)*dx; dst(x,y) = a*(1-dy) + b*dy; 42
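Untangling the pseudocode on the two slides above, a bilinear resampler might look like the following; the Image alias is the same assumed grayscale type used in the earlier sketches, and returning 0 when the four pixels fall outside the image is just one possible policy.

    #include <cmath>
    #include <vector>

    using Image = std::vector<std::vector<float>>;  // assumed grayscale channel, indexed [y][x]

    // Bilinear sample of src at a non-integer location (x, y).
    float bilinearSample(const Image& src, float x, float y) {
        int h = int(src.size()), w = int(src[0].size());
        int x1 = int(std::floor(x)), x2 = x1 + 1;
        int y1 = int(std::floor(y)), y2 = y1 + 1;
        // Make sure the pixels (x1,y1), (x2,y1), (x1,y2), and (x2,y2) are within the image.
        if (x1 < 0 || y1 < 0 || x2 >= w || y2 >= h)
            return 0.0f;
        float dx = x - x1, dy = y - y1;
        float a = src[y1][x1] * (1 - dx) + src[y1][x2] * dx;  // interpolate along x at row y1
        float b = src[y2][x1] * (1 - dx) + src[y2][x2] * dx;  // interpolate along x at row y2
        return a * (1 - dy) + b * dy;                         // interpolate along y
    }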

Gaussian Sampling Compute weighted sum of pixel neighborhood: The blending weights are the normalized values of a Gaussian function. (x,y) 43
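A hedged sketch of the same idea in code: a weighted average of the pixel neighborhood around (x, y) with normalized Gaussian weights. The choice of sigma and the 3-sigma window radius are illustrative, not specified on the slide.

    #include <cmath>
    #include <vector>

    using Image = std::vector<std::vector<float>>;  // assumed grayscale channel, indexed [y][x]

    float gaussianSample(const Image& src, float x, float y, float sigma) {
        int h = int(src.size()), w = int(src[0].size());
        int r = int(std::ceil(3.0f * sigma));           // window radius, an illustrative choice
        float sum = 0.0f, totalWeight = 0.0f;
        for (int j = int(std::floor(y)) - r; j <= int(std::ceil(y)) + r; j++) {
            for (int i = int(std::floor(x)) - r; i <= int(std::ceil(x)) + r; i++) {
                if (i < 0 || i >= w || j < 0 || j >= h) continue;
                float dx = i - x, dy = j - y;
                float weight = std::exp(-(dx * dx + dy * dy) / (2.0f * sigma * sigma));
                sum += src[j][i] * weight;
                totalWeight += weight;
            }
        }
        return (totalWeight > 0.0f) ? sum / totalWeight : 0.0f;  // normalize the weights
    }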

Nearest Neighbor 44

Filtering Methods Comparison Bilinear 45

Filtering Methods Comparison Gaussian 46

Filtering Methods Comparison Trade-offs: 1. Jagged edges versus blurring 2. Computational speed Gaussian 47

Image Warping Implementation for (int x = 0; x < xmax; x++) for (int y = 0; y < ymax; y++) float u = fx⁻¹(x,y); float v = fy⁻¹(x,y); dst(x,y) = resample_src(u,v,w); (u,v) f (x,y) Source image Destination image 48

Image Warping Implementation for (int x = 0; x < xmax; x++) for (int y = 0; y < ymax; y++) float u = fx⁻¹(x,y); float v = fy⁻¹(x,y); dst(x,y) = resample_src(u,v,w); (u,v) f w Source image (x,y) Destination image 49

Example: Scale (src, dst, s) float w =??; for (int x = 0; x < xmax; x++) for (int y = 0; y < ymax; y++) float u = x / s; float v = y / s; dst(x,y) = resample_src(u,v,w); v (u,v) y (x,y) u Scale 0.5 x 50

Example: Scale (src, dst, s) float w =??; for (int x = 0; x < xmax; x++) for (int y = 0; y < ymax; y++) float u = x / s; float v = y / s; dst(x,y) = resample_src(u,v,w); v (u,v) y w=1.0/s (x,y) u Scale 0.5 x 51

Example: Rotate (src, dst, theta) float w =??; for (int x = 0; x < xmax; x++) for (int y = 0; y < ymax; y++) float u = x*cos(-θ) - y*sin(-θ); float v = x*sin(-θ) + y*cos(-θ); dst(x,y) = resample_src(u,v,w); v (u,v) y (x,y) x = ucosθ - vsinθ y = usinθ + vcosθ Rotate u 30 x 52

Example: Rotate (src, dst, theta) float w =??; for (int x = 0; x < xmax; x++) for (int y = 0; y < ymax; y++) float u = x*cos(-θ) - y*sin(-θ); float v = x*sin(-θ) + y*cos(-θ); dst(x,y) = resample_src(u,v,w); w=1.0 v (u,v) y (x,y) x = ucosθ - vsinθ y = usinθ + vcosθ Rotate u 30 x 53

Example: Swirl (src, dst, theta)??? float w =??; for (int x = 0; x < xmax; x++) for (int y = 0; y < ymax; y++) float u = rot(dist(x,xcenter)*theta); float v = rot(dist(y,ycenter)*theta); dst(x,y) = resample_src(u,v,w); v (u,v) f y (x,y) Swirl 45 u x 54

Outline Image Processing Image Warping Image Sampling 55

Sampling Questions How should we sample an image: Nearest Point Sampling? Bilinear Sampling? Gaussian Sampling? Something Else? 56

Image Representation What is an image? An image is a discrete collection of pixels, each representing a sample of a continuous function. Continuous image Digital image 57

Sampling Let's look at a 1D example: Continuous Function Discrete Samples 58

Sampling At in-between positions, values are undefined. How do we determine the value of a sample at these locations?? Discrete Samples 59

Sampling At in-between positions, values are undefined. How do we determine the value of a sample at these locations? We need to reconstruct a continuous function, turning a collection of discrete samples into a 1D function that we can sample at arbitrary locations.? Discrete Samples 60

Sampling At in-between positions, values are undefined. How do we determine the value of a sample at these locations? We need to reconstruct a continuous function, turning a collection of discrete samples into a 1D function that we can sample at arbitrary locations. In other words: How do we define the in-between values?? Discrete Samples 61

Nearest Point Sampling The value at a point is the value of the closest discrete sample.? Reconstructed Function Discrete Samples 62

Nearest Point Sampling The value at a point is the value of the closest discrete sample. The reconstruction: ✓ Interpolates the samples ✗ Is not continuous ? Reconstructed Function Discrete Samples 63

Bilinear Sampling The value at a point is the (bi)linear interpolation of the two surrounding samples.? Reconstructed Function Discrete Samples 64

Bilinear Sampling The value at a point is the (bi)linear interpolation of the two surrounding samples. The reconstruction: ✓ Interpolates the samples ✗ Is not smooth ? Reconstructed Function Discrete Samples 65

Gaussian Sampling The value at a point is the Gaussian average of the surrounding samples.? Reconstructed Function Discrete Samples 66

Gaussian Sampling The value at a point is the Gaussian average of the surrounding samples. The reconstruction: ✗ Does not interpolate ✓ Is smooth ? Reconstructed Function Discrete Samples 67

Image Sampling How do we reconstruct a function from a collection of samples?? Samples Reconstruction 68

Image Sampling How do we reconstruct a function from a collection of samples? To answer this question, we need to understand what kind of information the samples contain.? Original Function Samples Reconstruction 69

Image Sampling How do we reconstruct a function from a collection of samples? To answer this question, we need to understand what kind of information the samples contain. Signal processing helps us understand this better.? Original Function Samples Reconstruction 70

Fourier Analysis Fourier analysis provides a way for expressing (or approximating) any signal as a sum of scaled and shifted cosine functions. cos(0θ) cos(1θ) cos(2θ) cos(3θ) -π π -π π -π π -π π cos(4θ) cos(5θ) cos(6θ) -π π -π π -π π The Building Blocks for the Fourier Decomposition 71

Fourier Analysis As higher frequency components are added to the approximation, finer details are captured. Initial Function 0th Order Approximation f(θ) f0(θ) = a0 cos(0(θ + φ0)) 0th Order Component 72

Fourier Analysis As higher frequency components are added to the approximation, finer details are captured. Initial Function 1st Order Approximation f(θ) 0th Order Approximation + f1(θ) = a1 cos(1(θ + φ1)) 1st Order Component 73

Fourier Analysis As higher frequency components are added to the approximation, finer details are captured. Initial Function 2nd Order Approximation f(θ) 1st Order Approximation + f2(θ) = a2 cos(2(θ + φ2)) 2nd Order Component 74

Fourier Analysis As higher frequency components are added to the approximation, finer details are captured. Initial Function 3rd Order Approximation f(θ) 2nd Order Approximation + f3(θ) = a3 cos(3(θ + φ3)) 3rd Order Component 75

Fourier Analysis As higher frequency components are added to the approximation, finer details are captured. Initial Function 4th Order Approximation f(θ) 3rd Order Approximation + f4(θ) = a4 cos(4(θ + φ4)) 4th Order Component 76

Fourier Analysis As higher frequency components are added to the approximation, finer details are captured. Initial Function 5th Order Approximation f(θ) 4th Order Approximation + f5(θ) = a5 cos(5(θ + φ5)) 5th Order Component 77

Fourier Analysis As higher frequency components are added to the approximation, finer details are captured. Initial Function 6th Order Approximation f(θ) 5th Order Approximation + f6(θ) = a6 cos(6(θ + φ6)) 6th Order Component 78

Fourier Analysis As higher frequency components are added to the approximation, finer details are captured. Initial Function 7th Order Approximation f(θ) 6th Order Approximation + f7(θ) = a7 cos(7(θ + φ7)) 7th Order Component 79

Fourier Analysis As higher frequency components are added to the approximation, finer details are captured. Initial Function 8th Order Approximation f(θ) 7th Order Approximation + f8(θ) = a8 cos(8(θ + φ8)) 8th Order Component 80

Fourier Analysis As higher frequency components are added to the approximation, finer details are captured. Initial Function 9th Order Approximation f(θ) 8th Order Approximation + f9(θ) = a9 cos(9(θ + φ9)) 9th Order Component 81

Fourier Analysis As higher frequency components are added to the approximation, finer details are captured. Initial Function 10th Order Approximation f(θ) 9th Order Approximation + f10(θ) = a10 cos(10(θ + φ10)) 10th Order Component 82

Fourier Analysis As higher frequency components are added to the approximation, finer details are captured. Initial Function 11th Order Approximation f(θ) 10th Order Approximation + f11(θ) = a11 cos(11(θ + φ11)) 11th Order Component 83

Fourier Analysis As higher frequency components are added to the approximation, finer details are captured. Initial Function 12th Order Approximation f(θ) 11th Order Approximation + f12(θ) = a12 cos(12(θ + φ12)) 12th Order Component 84

Fourier Analysis As higher frequency components are added to the approximation, finer details are captured. Initial Function 13th Order Approximation f(θ) 12th Order Approximation + f13(θ) = a13 cos(13(θ + φ13)) 13th Order Component 85

Fourier Analysis As higher frequency components are added to the approximation, finer details are captured. Initial Function 14th Order Approximation f(θ) 13th Order Approximation + f14(θ) = a14 cos(14(θ + φ14)) 14th Order Component 86

Fourier Analysis As higher frequency components are added to the approximation, finer details are captured. Initial Function 15th Order Approximation f(θ) 14th Order Approximation + f15(θ) = a15 cos(15(θ + φ15)) 15th Order Component 87

Fourier Analysis As higher frequency components are added to the approximation, finer details are captured. Initial Function 16th Order Approximation f(θ) 15th Order Approximation + f16(θ) = a16 cos(16(θ + φ16)) 16th Order Component 88

Fourier Analysis Combining all of the frequency components together, we get the initial function. ak: amplitude of the kth frequency component φk: shift of the kth frequency component Initial Function f(θ) f0(θ) f1(θ) f2(θ) f3(θ) f4(θ) + + + + = f5(θ) f6(θ) f7(θ) f8(θ) + + + + 89
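Evaluating such a truncated cosine series is a short sum; the sketch below assumes the amplitudes a_k and shifts φ_k have already been computed by some Fourier analysis step not shown in the slides.

    #include <cmath>
    #include <vector>

    // Order-n approximation: f(theta) ~= sum over k of a_k * cos(k * (theta + phi_k)).
    float fourierApproximation(const std::vector<float>& a,    // a[k]: amplitude of component k
                               const std::vector<float>& phi,  // phi[k]: shift of component k
                               float theta) {
        float value = 0.0f;
        for (int k = 0; k < int(a.size()); k++)
            value += a[k] * std::cos(k * (theta + phi[k]));
        return value;
    }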

Question As higher frequency components are added to the approximation, finer details are captured. If we have n samples, what is the highest frequency that can be represented? Initial Function f(θ) f0(θ) f1(θ) f2(θ) f3(θ) f4(θ) + + + + = f5(θ) f6(θ) f7(θ) f8(θ) + + + + 90

Question As higher frequency components are added to the approximation, finer details are captured. If we have n samples, what is the highest frequency that can be represented? Each frequency component has two degrees of freedom: Amplitude Shift With n samples we can represent the first n/2 frequency components. Initial Function f(θ) f0(θ) f1(θ) f2(θ) f3(θ) f4(θ) + + + + = f5(θ) f6(θ) f7(θ) f8(θ) + + + + 91

Sampling Theorem A signal can be reconstructed from its samples if the original signal has no frequencies above 1/2 the sampling frequency (Shannon's Theorem). The minimum sampling rate for a band-limited function is called the Nyquist rate. A signal is band-limited if its highest non-zero frequency is bounded; that frequency is called the bandwidth. 92
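A small numerical illustration of the theorem (not from the slides): with a sampling rate of 10 samples per second the Nyquist limit is 5 Hz, and a 7 Hz cosine produces exactly the same samples as a 3 Hz cosine, so the two are indistinguishable after sampling.

    #include <cmath>
    #include <cstdio>

    int main() {
        const double PI = std::acos(-1.0);
        const double fs = 10.0;                          // sampling rate: 10 samples per second
        for (int n = 0; n < 10; n++) {
            double t = n / fs;
            double high = std::cos(2.0 * PI * 7.0 * t);  // 7 Hz, above the 5 Hz Nyquist limit
            double low  = std::cos(2.0 * PI * 3.0 * t);  // 3 Hz alias: identical sample values
            std::printf("t=%.1f  7 Hz: %+.4f  3 Hz: %+.4f\n", t, high, low);
        }
        return 0;
    }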

Question What if we have only n samples and we try to reconstruct a function with frequencies larger than the Nyquist frequency (n/2)? 93

Aliasing When a high-frequency signal is sampled with insufficiently many samples, it will be perceived as a lower-frequency signal. This masking of higher frequencies as lower ones is referred to as aliasing. -π π 94

Aliasing When a high-frequency signal is sampled with insufficiently many samples, it will be perceived as a lower-frequency signal. This masking of higher frequencies as lower ones is referred to as aliasing. -π π 95

Aliasing When a high-frequency signal is sampled with insufficiently many samples, it will be perceived as a lower-frequency signal. This masking of higher frequencies as lower ones is referred to as aliasing. -π π 96

Aliasing When a high-frequency signal is sampled with insufficiently many samples, it will be perceived as a lower-frequency signal. This masking of higher frequencies as lower ones is referred to as aliasing. -π π 97

Temporal Aliasing Artifacts due to limited temporal resolution 98

Nearest Neighbor 99

Sampling There are two problems: You don't have enough samples to correctly reconstruct your high-frequency information You corrupt the low-frequency information because the high frequencies mask themselves as lower ones. 100

Anti-Aliasing Two possible ways to address aliasing: Sample at higher rate Pre-filter to form band-limited signal 101

Anti-Aliasing Two possible ways to address aliasing: Sample at higher rate Not always possible Still rendering to fixed resolution Pre-filter to form band-limited signal 102

Anti-Aliasing Two possible ways to address aliasing: Sample at higher rate Pre-filter to form a band-limited signal You still don't get your high frequencies, but at least the low frequencies are uncorrupted. 103

Fourier Analysis If we just look at how much information each frequency contributes, we obtain the power spectrum of the signal: Initial Function + + + + = + + + + 104

Fourier Analysis If we just look at how much information each frequency contributes, we obtain the power spectrum of the signal: Initial Function + + + + = + + + + Power Spectrum 105

Pre-Filtering Band-limit by discarding the high-frequency components of the Fourier decomposition. Initial Power Spectrum Band-Limited Power Spectrum 106

Pre-Filtering Band-limit by discarding the high-frequency components of the Fourier decomposition. We can do this by multiplying the frequency components by a 0/1 function: 1 X = Initial Power Spectrum Frequency Filter Band-Limited Spectrum 107

Pre-Filtering Band-limit by discarding the high-frequency components of the Fourier decomposition. We can do this by multiplying the frequency components by a 0/1 function: 1 X = Initial Power Spectrum Frequency Filter Band-Limited Spectrum 108

Fourier Theory A fundamental fact from Fourier theory is that multiplication in the frequency domain is equivalent to convolution in the spatial domain. 109

Convolution To convolve two functions f and g, we resample the function f using the weights given by g. 1 g(θ) f(θ) 110

Convolution To convolve two functions f and g, we resample the function f using the weights given by g. 1 g(θ) 1 0 f(θ) (f g)(θ) 111

Convolution To convolve two functions f and g, we resample the function f using the weights given by g..6.4 g(θ).6.4 0 f(θ) (f g)(θ) 112

Convolution To convolve two functions f and g, we resample the function f using the weights given by g..6.4 g(θ).4.6 0 f(θ) (f g)(θ) 113

Convolution To convolve two functions f and g, we resample the function f using the weights given by g. 1 g(θ) 0 1 0 f(θ) (f g)(θ) 114

Convolution To convolve two functions f and g, we resample the function f using the weights given by g..6.4 g(θ) 0.6.4 f(θ) (f g)(θ) 115

Convolution To convolve two functions f and g, we resample the function f using the weights given by g..6.4 g(θ) 0.4.6 f(θ) (f g)(θ) 116

Convolution To convolve two functions f and g, we resample the function f using the weights given by g. 1 g(θ) 0 1 0 f(θ) (f g)(θ) 117

Convolution To convolve two functions f and g, we resample the function f using the weights given by g. 0.5 g(θ) 0.5.5 f(θ) (f g)(θ) 118

Convolution To convolve two functions f and g, we resample the function f using the weights given by g. 1 g(θ) 0 1 0 f(θ) (f g)(θ) 119

Convolution To convolve two functions f and g, we resample the function f using the weights given by g. 1 g(θ) f(θ) 0 1 (f g)(θ) 120

Convolution To convolve two functions f and g, we resample the function f using the weights given by g. Nearest point, bilinear, and Gaussian interpolation are just convolutions with different filters. * = * = * = 121
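In 1D this idea can be written as a single reconstruction routine that takes the filter as a parameter; the kernels and radii below are illustrative stand-ins for the nearest-point (box), linear (tent), and Gaussian filters mentioned on the slide.

    #include <algorithm>
    #include <cmath>
    #include <functional>
    #include <vector>

    // Reconstruct a value at position x from uniformly spaced samples by convolving with a kernel.
    float reconstruct(const std::vector<float>& samples, float x,
                      const std::function<float(float)>& kernel, float radius) {
        float sum = 0.0f, totalWeight = 0.0f;
        int lo = std::max(0, int(std::floor(x - radius)));
        int hi = std::min(int(samples.size()) - 1, int(std::ceil(x + radius)));
        for (int i = lo; i <= hi; i++) {
            float w = kernel(x - i);
            sum += samples[i] * w;
            totalWeight += w;
        }
        return (totalWeight > 0.0f) ? sum / totalWeight : 0.0f;
    }

    // Illustrative kernels (use radius 0.5, 1.0, and about 3*sigma respectively).
    float boxKernel(float t)   { return std::fabs(t) <= 0.5f ? 1.0f : 0.0f; }                  // nearest point
    float tentKernel(float t)  { float a = std::fabs(t); return a < 1.0f ? 1.0f - a : 0.0f; }  // linear
    float gaussKernel(float t) { const float sigma = 0.8f; return std::exp(-t * t / (2.0f * sigma * sigma)); }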

Convolution Recall that convolution in the spatial domain is equal to multiplication in the frequency domain. In order to avoid aliasing, we need to convolve with a filter whose power spectrum has value: 1 at low frequencies 0 at high frequencies 1 X = Initial Power Spectrum Frequency Filter Band-Limited Spectrum 122

Nearest Point Convolution * = Discrete Samples Reconstruction Filter Reconstructed Function Filter Spectrum 123

Bilinear Convolution * = Discrete Samples Reconstruction Filter Reconstructed Function Filter Spectrum 124

Gaussian Convolution * = Discrete Samples Reconstruction Filter Reconstructed Function Filter Spectrum 125

Convolution The ideal filter for avoiding aliasing has a power spectrum with values: 1 at low frequencies 0 at high frequencies The sinc function has such a power spectrum and is referred to as the ideal reconstruction filter: 126

The Sinc Filter The ideal filter for avoiding aliasing has a power spectrum with values: 1 at low frequencies 0 at high frequencies The sinc function has such a power spectrum and is referred to as the ideal reconstruction filter: Filter Spectrum Reconstruction Filter 127
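For reference, the normalized sinc kernel could be plugged into the same 1D reconstruction sketch above; the definition below is standard, though in practice the filter has to be cut off at some finite radius since it has infinite support.

    #include <cmath>

    // Normalized sinc: sinc(0) = 1 and sinc(t) = sin(pi*t) / (pi*t), zero at all other integers.
    float sincKernel(float t) {
        const float PI = 3.14159265358979f;
        if (t == 0.0f) return 1.0f;
        return std::sin(PI * t) / (PI * t);
    }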

The Sinc Filter Limitations: Has negative values, giving rise to negative weights in the interpolation. The discontinuity in the frequency domain (power spectrum) results in ringing artifacts known as the Gibbs Phenomenon. * = Initial Function Reconstruction Filter Reconstructed Function 128

The Sinc Filter Limitations: Has negative values, giving rise to negative weights in the interpolation. The discontinuity in the frequency domain (power spectrum) results in ringing artifacts near spatial discontinuities, known as the Gibbs Phenomenon. * = Initial Function Reconstruction Filter Reconstructed Function 129

Summary There are different ways to sample an image: Nearest Point Sampling Linear Sampling Gaussian Sampling Sinc Sampling These methods have advantages and disadvantages. 130

Summary Nearest ✓ Can be implemented efficiently because the filter is non-zero in a very small region. ? Interpolates the samples. ✗ Is discontinuous. ✗ Does not address the aliasing problem, giving bad results when a signal is under-sampled. * = Discrete Samples Reconstruction Filter Reconstructed Function 131

Summary Linear ✓ Can be implemented efficiently because the filter is non-zero in a very small region. ? Interpolates the samples. ✗ Is not smooth. ✗ Partially addresses the aliasing problem, but can still give bad results when a signal is under-sampled. * = Discrete Samples Reconstruction Filter Reconstructed Function 132

Summary Gaussian ✗ Is slow to implement because the filter is non-zero in a large region. ? Does not interpolate the samples. ✓ Is smooth. ✓ Addresses the aliasing problem by killing off the high frequencies. * = Discrete Samples Reconstruction Filter Reconstructed Function 133

Summary Sinc ✗ Is slow to implement because the filter is non-zero in a large region. ? Does not interpolate the samples. ✗ Assigns negative weights. ✗ Ringing at discontinuities. ✓ Addresses the aliasing problem by killing off the high frequencies. * = Discrete Samples Reconstruction Filter Reconstructed Function 134

Summary Question: Is it good if a reconstruction method is interpolating? (Consider the case when you are down-scaling an image.) 135

Summary It appears that we have been mixing the sampling problem with the reconstruction problem. However, our motivation for the choice of filter is the same in both cases. We want a filter whose spectrum goes to zero so that: Sampling: High frequency samples are killed off, the signal becomes band-limited, and we can sample discretely. Reconstruction: We do not end up reconstructing a function with high frequency components. 136

Image Sampling Given a signal sampled at m positions, if we would like to resample at n positions we need to: 1. Reconstruct a function with maximum non-zero frequency no larger than min(m/2,n/2). 2. Sample the reconstructed function at the n positions. 137

Image Sampling Example: 138

Image Sampling Example: 139

Image Sampling Example: 140

Gaussian Sampling Recall: To avoid aliasing, we kill off high-frequency components by convolving with a function whose power spectrum is zero at high frequencies. We use a Gaussian for function reconstruction and sampling because it smoothly kills off the high frequency components. 141

Gaussian Sampling Q: What variance Gaussian should we use? A: The variance of the Gaussian should be between 0.5 and 1.0 times the distance between samples. 142

Gaussian Sampling Q: What variance Gaussian should we use? A: The variance of the Gaussian should be between 0.5 and 1.0 times the distance between samples. Power spectra of the Gaussians used for reconstructing and sampling a function with 20 samples 143

Gaussian Sampling Scaling Example: Q: Suppose we have data represented by 20 samples that we would like to down-sample to 5 samples. What variance should we use? 144

Gaussian Sampling Scaling Example: Q: Suppose we have data represented by 20 samples that we would like to down-sample to 5 samples. What variance should we use? A: The distance between two adjacent samples in the final array corresponds to a distance of 4 units in the initial array. The variance of the Gaussian should be between 2.0 and 4.0. 145

Gaussian Sampling Scaling Example: Q: Suppose we have data represented by 20 samples that we would like to up-sample to 40 samples. What variance should we use? 146

Gaussian Sampling Scaling Example: Q: Suppose we have data represented by 20 samples that we would like to up-sample to 40 samples. What variance should we use? A: Because the initial samples can't represent frequencies higher than 10, we shouldn't use a Gaussian with smaller variance, since this would introduce high-frequency components into the reconstruction. The variance of the Gaussian should remain between 0.5 and 1.0. 147
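Putting the rule of thumb from these slides into code, a 1D Gaussian resampler from m to n samples might pick its width as below; the factor 0.75 (midway in the suggested 0.5 to 1.0 range), the index-to-position convention, and the use of the slides' "variance" as the Gaussian width parameter are all illustrative choices.

    #include <algorithm>
    #include <cmath>
    #include <vector>

    // Resample a 1D signal from src.size() samples to n samples using normalized Gaussian weights.
    std::vector<float> gaussianResample(const std::vector<float>& src, int n) {
        int m = int(src.size());
        float spacing = std::max(1.0f, float(m) / float(n));  // e.g. 20 -> 5 gives spacing 4
        float sigma = 0.75f * spacing;                        // within the suggested 0.5-1.0 range
        int radius = int(std::ceil(3.0f * sigma));
        std::vector<float> dst(n, 0.0f);
        for (int j = 0; j < n; j++) {
            float center = j * float(m) / float(n);           // position of output sample j in src
            float sum = 0.0f, totalWeight = 0.0f;
            int lo = std::max(0, int(std::floor(center)) - radius);
            int hi = std::min(m - 1, int(std::ceil(center)) + radius);
            for (int i = lo; i <= hi; i++) {
                float w = std::exp(-(i - center) * (i - center) / (2.0f * sigma * sigma));
                sum += src[i] * w;
                totalWeight += w;
            }
            dst[j] = (totalWeight > 0.0f) ? sum / totalWeight : 0.0f;
        }
        return dst;
    }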