CPSC 425: Computer Vision


CPSC 425: Computer Vision
Instructor: Jim Little, little@cs.ubc.ca
Department of Computer Science, University of British Columbia
Lecture Notes, 2016/2017 Term 2

Menu, February 14, 2017
Topics: Review for midterm
Reading (after midterm): Paper: Distinctive Image Features from Scale-Invariant Keypoints; Forsyth & Ponce (2nd ed.) 5.4
Reminders:
Midterm exam, in class, Thursday, February 16
See Web page regarding office hours
Reading Week Feb
www:
piazza:

Today's Fun Example
Demo of chromatic adaptation

Lecture 10: Re-cap
Approaches to texture exploit pyramid (i.e., scaled) and oriented representations
Human colour perception: colour matching experiments; additive and subtractive matching; the principle of trichromacy
RGB and CIE XYZ are linear colour spaces
Uniform colour space: differences in coordinates are a good guide to differences in perceived colour
HSV colour space: a more intuitive description of colour for human interpretation
(Human) colour constancy: perception of intrinsic surface colour under different colours of lighting

Paths to Understanding
Five distinct paths to a deeper understanding of CPSC 425 course material:
1. mathematics (i.e., theory)
2. visualize computation(s)
3. experiment on simple (test) cases and on real images
4. read code
5. write code

Midterm Review: Readings
Lectures 1-10 slides
Assigned readings from Forsyth & Ponce (2nd ed.)
Paper: Texture Synthesis by Non-parametric Sampling
Assignments 2-3
iClicker questions
Lecture exercises
Practice problems (with solutions)

Midterm Details
60 minutes
Closed book, no calculators
Format similar to the posted practice problems
Part A: multiple-part true/false
Part B: short answer
No coding questions

Linear Filters (cont'd)
$I'(X, Y) = \sum_{j=-k}^{k} \sum_{i=-k}^{k} F(i, j)\, I(X + i, Y + j)$
For each X and Y, superimpose the filter F on the image, centred at (X, Y)
Compute the new pixel value, I'(X, Y), as the sum of m × m values, where each value is the product of the original pixel value in I and the corresponding value in the filter (m = 2k + 1)
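A minimal NumPy sketch of this computation, assuming a greyscale image stored as a 2D float array and an odd-sized (2k+1) × (2k+1) filter; boundary pixels are simply left at zero here (boundary handling is discussed below):

```python
import numpy as np

def correlate(I, F):
    """Apply filter F to image I: superimpose F at each pixel (Y = row, X = column),
    multiply element-wise, and sum (cross-correlation)."""
    k = F.shape[0] // 2
    out = np.zeros_like(I, dtype=float)
    for Y in range(k, I.shape[0] - k):
        for X in range(k, I.shape[1] - k):
            patch = I[Y - k:Y + k + 1, X - k:X + k + 1]
            out[Y, X] = np.sum(F * patch)
    return out
```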


Linear Filters: Boundary Effects
Three standard ways to deal with boundaries:
1. Ignore these locations: make the computation undefined for the top and bottom k rows and the leftmost and rightmost k columns
2. Pad the image with zeros: return zero whenever a value of I is required at some position outside the defined limits of X and Y
3. Assume periodicity: the top row wraps around to the bottom row; the leftmost column wraps around to the rightmost column
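These strategies map onto padding modes in array libraries; a small sketch with NumPy (the mode names are NumPy's, chosen here only for illustration):

```python
import numpy as np

I = np.arange(16, dtype=float).reshape(4, 4)   # toy 4 x 4 image
k = 1                                          # filter half-width

# 2. Pad with zeros: values outside the image are treated as 0
zero_padded = np.pad(I, k, mode='constant', constant_values=0)

# 3. Assume periodicity: rows and columns wrap around
periodic = np.pad(I, k, mode='wrap')

# 1. "Ignore these locations" corresponds to producing an output that is
#    smaller by k rows/columns on each side (a "valid"-size correlation),
#    so no padding is needed at all.
```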

Linear Filters
The correlation of F(X, Y) and I(X, Y) is
$I'(X, Y) = \sum_{j=-k}^{k} \sum_{i=-k}^{k} F(i, j)\, I(X + i, Y + j)$
Visual interpretation: superimpose the filter F on the image I at (X, Y), perform an element-wise multiply, and sum up the values
Convolution is like correlation, except that the filter is flipped
When F(-i, -j) = F(i, j), the two are equivalent
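A quick numerical check of that relationship, assuming SciPy is available: convolving with a filter equals correlating with the flipped filter.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
I = rng.random((5, 5))
F = np.array([[ 1.,  2.,  1.],
              [ 0.,  0.,  0.],
              [-1., -2., -1.]])            # not symmetric under flipping

corr = ndimage.correlate(I, F, mode='constant')
conv_flipped = ndimage.convolve(I, F[::-1, ::-1], mode='constant')

assert np.allclose(corr, conv_flipped)     # correlation == convolution with flipped F
```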

Linear Systems: Characterization Theorem
Any linear, shift-invariant operation can be expressed as a convolution


Example 1 (cont'd): Smoothing with a Gaussian
Idea: weigh contributions of neighbouring pixels by nearness
2D Gaussian (continuous case):
$G_\sigma(x, y) = \frac{1}{2\pi\sigma^2} \exp\left(-\frac{x^2 + y^2}{2\sigma^2}\right)$
Forsyth & Ponce (2nd ed.) Figure 4.2
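A sketch of sampling that formula into a discrete kernel and applying it, assuming SciPy; the kernel size and σ here are arbitrary choices for illustration:

```python
import numpy as np
from scipy import ndimage

def gaussian_kernel(sigma, k):
    """(2k+1) x (2k+1) samples of G_sigma, normalized to sum to 1."""
    x, y = np.meshgrid(np.arange(-k, k + 1), np.arange(-k, k + 1))
    G = np.exp(-(x**2 + y**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    return G / G.sum()

I = np.random.rand(64, 64)                        # stand-in greyscale image
smoothed = ndimage.correlate(I, gaussian_kernel(2.0, 6), mode='nearest')
# equivalently: ndimage.gaussian_filter(I, sigma=2.0)
```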

Gaussian: Area Under the Curve
Roughly 68% of the area lies within ±1σ of the mean, 95% within ±2σ, 99.7% within ±3σ, and 99.99% within ±4σ

Efficient Implementation: Separability
A 2D function of x and y is separable if it can be written as the product of two functions, one a function only of x and the other a function only of y
Both the 2D box filter and the 2D Gaussian filter are separable
Both can be implemented as two 1D convolutions: first convolve each row with a 1D filter, then convolve each column with a 1D filter (or vice versa)
The 2D Gaussian is the only (non-trivial) 2D function that is both separable and rotationally invariant.
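A minimal check of separability for the Gaussian, assuming SciPy: the 2D kernel is the outer product of a 1D kernel with itself, so a row pass followed by a column pass reproduces the 2D result.

```python
import numpy as np
from scipy import ndimage

sigma, k = 2.0, 6
x = np.arange(-k, k + 1)
g1d = np.exp(-x**2 / (2 * sigma**2))
g1d /= g1d.sum()                               # 1D Gaussian
g2d = np.outer(g1d, g1d)                       # separable 2D Gaussian

I = np.random.rand(64, 64)

direct = ndimage.correlate(I, g2d, mode='nearest')                # one 2D pass
rows   = ndimage.correlate1d(I, g1d, axis=1, mode='nearest')      # rows first ...
both   = ndimage.correlate1d(rows, g1d, axis=0, mode='nearest')   # ... then columns

assert np.allclose(direct, both)
```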

Linear Filters: Additional Properties
Let ∗ denote convolution. Let I(X, Y) be a digital image. Let F and G be digital filters.
Convolution is associative: G ∗ (F ∗ I(X, Y)) = (G ∗ F) ∗ I(X, Y)
Convolution is symmetric: (F ∗ G) ∗ I(X, Y) = (G ∗ F) ∗ I(X, Y)
Convolving I(X, Y) with filter F and then convolving the result with filter G can therefore be achieved in a single step, namely convolving I(X, Y) with the combined filter G ∗ F
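A small numerical check of associativity, assuming SciPy: filtering twice matches filtering once with the pre-combined filter (full-size convolutions are used so the comparison is exact, with no boundary cropping).

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)
I = rng.random((32, 32))
F = rng.random((3, 3))
G = rng.random((3, 3))

two_steps = signal.convolve2d(signal.convolve2d(I, F, mode='full'), G, mode='full')
combined  = signal.convolve2d(G, F, mode='full')       # G * F, a 5 x 5 filter
one_step  = signal.convolve2d(I, combined, mode='full')

assert np.allclose(two_steps, one_step)                # associativity
```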

Bilateral Filter
An edge-preserving non-linear filter
Like a Gaussian filter, the filter weights depend on spatial distance from the centre pixel
Idea: pixels nearby (in space) should have greater influence than pixels far away
Unlike a Gaussian filter, the filter weights also depend on range distance from the centre pixel
Idea: pixels with similar brightness value should have greater influence than pixels with dissimilar brightness value
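A brute-force sketch of that weighting (the parameter names sigma_s and sigma_r are my own); in practice a library routine such as OpenCV's cv2.bilateralFilter would be used instead:

```python
import numpy as np

def bilateral_filter(I, sigma_s=2.0, sigma_r=0.1, k=5):
    """Naive bilateral filter for a greyscale image I with values in [0, 1].
    sigma_s: spatial scale; sigma_r: range (brightness-difference) scale."""
    H, W = I.shape
    padded = np.pad(I, k, mode='edge')
    x, y = np.meshgrid(np.arange(-k, k + 1), np.arange(-k, k + 1))
    spatial = np.exp(-(x**2 + y**2) / (2 * sigma_s**2))   # fixed spatial weights
    out = np.empty_like(I, dtype=float)
    for r in range(H):
        for c in range(W):
            patch = padded[r:r + 2 * k + 1, c:c + 2 * k + 1]
            # range weights: brightness similar to the centre pixel -> larger weight
            range_w = np.exp(-(patch - I[r, c])**2 / (2 * sigma_r**2))
            w = spatial * range_w
            out[r, c] = np.sum(w * patch) / np.sum(w)
    return out
```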

Bilateral Filter Application: Denoising
Figure credit: Alexander Wong

Sample Question: Filters
What does the following 3 × 3 linear, shift-invariant filter compute when applied to an image?

Continuous Case
Denote the image as a function, i(x, y), where x and y are spatial variables

Discrete Case
Idea: superimpose a (regular) grid on the continuous image
Sample the underlying continuous image according to the tessellation imposed by the grid

Discrete Case (cont'd)
Each grid cell is called a picture element (or pixel)
Denote the discrete image as I(X, Y)
We can store the pixels in a matrix or array

Sampling
It is clear that some information may be lost when we work on a discrete pixel grid.
Figure credit: Forsyth & Ponce (2nd ed.) Figure 4.7

Sampling Theory
Exact reconstruction requires a constraint on the rate at which i(x, y) can change between samples:
rate of change means derivative
the formal concept is a bandlimited signal
the bandlimit and the constraint on the derivative are linked
An image is bandlimited if it has some maximum spatial frequency
A fundamental result (the Sampling Theorem) is: for bandlimited signals, if you sample regularly at or above twice the maximum frequency (called the Nyquist rate), then you can reconstruct the original signal exactly

Sampling Theory (cont'd)
Sometimes undersampling is unavoidable, and there is a trade-off between missing information and artifacts:
Medical imaging: usually try to maximize information content and tolerate some artifacts
Computer graphics: usually try to minimize artifacts and tolerate some missing information

Template Matching
Figure credit: Kristen Grauman

Template Matching
We can think of convolution/correlation as comparing a template (the filter) with each local image patch.
Consider the filter and the image patch as vectors. Applying a filter at an image location can then be interpreted as computing the dot product (recall: element-wise multiply and sum) between the filter and the local image patch.
But the dot product may be large simply because the image region is bright. We need to normalize the result in some way.

Template Matching
Let a and b be vectors, and let θ be the angle between them. We know
$\cos\theta = \frac{a \cdot b}{\|a\|\,\|b\|} = \frac{a \cdot b}{\sqrt{(a \cdot a)(b \cdot b)}}$
where · is the dot product and ‖·‖ is the vector magnitude
Correlation is a dot product
Correlation measures similarity between the filter and each local image region
Normalized correlation varies between -1 and 1
Normalized correlation attains the value 1 when the filter and the image region are identical (up to a scale factor)
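A sketch of that score swept over every template placement (brute force, no mean subtraction, greyscale arrays assumed); library versions such as skimage.feature.match_template additionally subtract the patch means:

```python
import numpy as np

def ncc(patch, template):
    """Cosine of the angle between patch and template, viewed as vectors."""
    a = patch.ravel().astype(float)
    b = template.ravel().astype(float)
    return np.dot(a, b) / (np.sqrt(np.dot(a, a) * np.dot(b, b)) + 1e-12)

def match_template(I, T):
    """Normalized-correlation score for every top-left placement of T in I."""
    h, w = T.shape
    scores = np.zeros((I.shape[0] - h + 1, I.shape[1] - w + 1))
    for r in range(scores.shape[0]):
        for c in range(scores.shape[1]):
            scores[r, c] = ncc(I[r:r + h, c:c + w], T)
    return scores            # the peak gives the best match location
```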

Template Matching
Figure credit: Kristen Grauman

Example 1 (cont'd): Normalized Correlation
Template (left), image (middle), normalized correlation (right)
Note the peak value at the true position of the hand
Credit: W. Freeman et al., "Computer Vision for Interactive Computer Graphics," IEEE Computer Graphics and Applications, 1998

Sample Question: Template Matching
True or false: Normalized correlation is robust to a constant scaling in the image brightness.

Scaled Representations
Goals:
find template matches at all scales: template size constant, image scale varies; e.g., finding hands or faces when we don't know what size they will be in the image
efficient search for image-to-image correspondences: look first at coarse scales, refine at finer scales; much less cost (but may miss the best match) than examining all levels of detail
find edges with different amounts of blur
find textures with different spatial frequencies (i.e., different levels of detail)

Shrinking an Image
Comparison of subsampling with no smoothing, Gaussian σ = 1, and Gaussian σ = 2
Forsyth & Ponce (2nd ed.) Figures (top rows)

Image Pyramid
An image pyramid is a collection of representations of an image
Typically, each layer of the pyramid is half the width and half the height of the previous layer
In a Gaussian pyramid, each layer is smoothed by a Gaussian filter and resampled to get the next layer
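A minimal Gaussian-pyramid sketch along those lines, assuming SciPy; smoothing before every subsampling step is what suppresses the aliasing seen in the "no smoothing" comparison above.

```python
import numpy as np
from scipy import ndimage

def gaussian_pyramid(I, levels=4, sigma=1.0):
    """List of images, each half the width and height of the previous one."""
    pyramid = [I]
    for _ in range(levels - 1):
        smoothed = ndimage.gaussian_filter(pyramid[-1], sigma)  # blur ...
        pyramid.append(smoothed[::2, ::2])                      # ... then subsample
    return pyramid
```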

Example 1: A Gaussian Pyramid
Forsyth & Ponce (2nd ed.) Figure 4.17

From Template Matching to Local Feature Detection
Slide credit: Li Fei-Fei, Rob Fergus, and Antonio Torralba

Estimating Derivatives
Recall, for a 2D (continuous) function f(x, y),
$\frac{\partial f}{\partial x} = \lim_{\varepsilon \to 0} \frac{f(x + \varepsilon, y) - f(x, y)}{\varepsilon}$
Differentiation is linear and shift-invariant, and therefore can be implemented as a convolution
A (discrete) approximation is
$\frac{\partial f}{\partial x} \approx \frac{F(X + 1, Y) - F(X, Y)}{\Delta x}$
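A sketch of that forward-difference approximation with NumPy (unit pixel spacing assumed, x indexing columns and y indexing rows); the same operation can also be written as correlation with the tiny filter [-1, 1].

```python
import numpy as np

I = np.random.rand(32, 32)          # stand-in greyscale image

Ix = np.zeros_like(I)
Iy = np.zeros_like(I)
Ix[:, :-1] = I[:, 1:] - I[:, :-1]   # f(x+1, y) - f(x, y): difference along columns
Iy[:-1, :] = I[1:, :] - I[:-1, :]   # f(x, y+1) - f(x, y): difference along rows
```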

Estimating Derivatives
A similar definition (and approximation) holds for ∂f/∂y.
Image noise tends to result in pixels not looking exactly like their neighbours, so simple finite differences are sensitive to noise.
The usual way to deal with this problem is to smooth the image prior to derivative estimation.

What Causes an Edge?
Depth discontinuity
Surface orientation discontinuity
Reflectance discontinuity (i.e., change in surface material properties)
Illumination discontinuity (e.g., shadow)
Slide credit: Christopher Rasmussen

Smoothing and Differentiation
Edge: a location with high gradient (derivative)
Need smoothing to reduce noise prior to taking the derivative
Need two derivatives, in the x and y directions
We can use derivative-of-Gaussian filters, because differentiation is convolution and convolution is associative
Let ∗ denote convolution: D ∗ (G ∗ I(X, Y)) = (D ∗ G) ∗ I(X, Y)

Gradient Magnitude
Let I(X, Y) be a (digital) image
Let I_x(X, Y) and I_y(X, Y) be estimates of the partial derivatives in the x and y directions, respectively. Call these estimates I_x and I_y (for short)
The vector [I_x, I_y] is the gradient
The scalar $\sqrt{I_x^2 + I_y^2}$ is the gradient magnitude
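A sketch combining the last two slides, assuming SciPy: derivative-of-Gaussian filtering (smoothing and differentiation folded into one filter) followed by the gradient magnitude.

```python
import numpy as np
from scipy import ndimage

I = np.random.rand(64, 64)
sigma = 2.0

# order=1 along an axis gives the derivative of the Gaussian along that axis
Ix = ndimage.gaussian_filter(I, sigma, order=(0, 1))   # derivative along x (columns)
Iy = ndimage.gaussian_filter(I, sigma, order=(1, 0))   # derivative along y (rows)

grad_mag = np.sqrt(Ix**2 + Iy**2)                      # gradient magnitude
```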

Two Generic Approaches to Edge Detection
(Plots of an edge profile i(r), its first derivative di(r)/dr, and its second derivative d²i(r)/dr² along a direction r crossing the edge)
Two generic approaches to edge point detection:
(significant) local extrema of a first-derivative operator
zero crossings of a second-derivative operator

Marr/Hildreth Laplacian of Gaussian (cont'd)
Steps:
1. Gaussian for smoothing
2. Laplacian ($\nabla^2$) for differentiation, where $\nabla^2 f(x, y) = \frac{\partial^2 f(x, y)}{\partial x^2} + \frac{\partial^2 f(x, y)}{\partial y^2}$
3. Locate zero-crossings in the Laplacian of the Gaussian ($\nabla^2 G$), where
$\nabla^2 G(x, y) = \frac{1}{2\pi\sigma^4}\left[2 - \frac{x^2 + y^2}{\sigma^2}\right]\exp\left(-\frac{x^2 + y^2}{2\sigma^2}\right)$

Marr/Hildreth Laplacian of Gaussian (cont'd)
A 1D plot of the Laplacian of the Gaussian ($\nabla^2 G$) shows its characteristic "Mexican hat" shape
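A minimal sketch of the zero-crossing step, assuming SciPy's gaussian_laplace for the ∇²G filtering; a real implementation would usually also require sufficient local contrast before accepting a crossing.

```python
import numpy as np
from scipy import ndimage

def log_zero_crossings(I, sigma=2.0):
    """Mark pixels where the Laplacian-of-Gaussian response changes sign."""
    response = ndimage.gaussian_laplace(I, sigma)
    edges = np.zeros(I.shape, dtype=bool)
    # sign change between vertical neighbours
    edges[:-1, :] |= (response[:-1, :] * response[1:, :]) < 0
    # sign change between horizontal neighbours
    edges[:, :-1] |= (response[:, :-1] * response[:, 1:]) < 0
    return edges
```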

Canny Edge Detection (cont'd)
Steps:
1. Apply directional derivatives of Gaussian
2. Non-maximum suppression: thin multi-pixel-wide ridges down to single-pixel width
3. Linking and thresholding: use low and high edge-strength thresholds; accept all edges over the low threshold that are connected to an edge over the high threshold

Edge Hysteresis
One way to deal with broken edge chains is to use hysteresis
Hysteresis: a lag or momentum factor
Idea: maintain two thresholds, k_high and k_low
Use k_high to find strong edges to start an edge chain
Use k_low to find weak edges that continue an edge chain
A typical ratio of thresholds is (roughly) k_high / k_low = 2
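In practice the whole pipeline (derivative of Gaussian, non-maximum suppression, hysteresis) is available as a library call; a sketch assuming scikit-image, with threshold values chosen arbitrarily in the 2:1 ratio mentioned above:

```python
import numpy as np
from skimage import feature

I = np.random.rand(128, 128)        # stand-in greyscale image in [0, 1]

k_low = 0.1
k_high = 2 * k_low                  # typical ratio k_high / k_low ~ 2

edges = feature.canny(I, sigma=2.0, low_threshold=k_low, high_threshold=k_high)
```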


How Do Humans Perceive Boundaries?
Each image shows multiple (4-8) human-marked boundaries; pixels are darker where more humans marked a boundary.
Figure credit: Szeliski; original: Martin et al.

Sample Question: Edges
Why is non-maximum suppression applied in the Canny edge detector?

What Is a Corner?
We can think of a corner as any locally distinct 2D image feature that (hopefully) corresponds to a distinct position on a 3D object of interest in the scene.
Credit: John Shakespeare, Sydney Morning Herald

Why Are Corners "Distinct"?
A corner can be localized reliably.
Thought experiment:
Place a small window over a patch of constant image value. If you slide the window in any direction, the image in the window will not change.
Place a small window over an edge. If you slide the window in the direction of the edge, the image in the window will not change, so you cannot estimate location along an edge.
Place a small window over a corner. If you slide the window in any direction, the image in the window changes.

Autocorrelation
Figure credit: Szeliski Fig. 4.5

Corner Detection
Edge detectors perform poorly at corners
Observations:
The gradient is ill defined exactly at a corner
Near a corner, the gradient has two (or more) distinct values

Harris Corner Detector
Form the second-moment matrix, summing over a small region around the hypothetical corner:
$C = \begin{bmatrix} \sum I_x^2 & \sum I_x I_y \\ \sum I_x I_y & \sum I_y^2 \end{bmatrix}$
where I_x I_y is the gradient with respect to x times the gradient with respect to y
The matrix is symmetric
Slide credit: David Jacobs

Harris Corner Detector
Filter the image with a Gaussian
Compute the magnitude of the x and y gradients at each pixel
Construct C in a window around each pixel (Harris uses a Gaussian window)
Solve for the product of the λs (the eigenvalues of C)
If both λs are big (their product reaches a local maximum above a threshold), then we have a corner
Harris also checks that the ratio of the λs is not too high
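A sketch of those steps, assuming SciPy; it uses the common Harris response det(C) − k·trace(C)², which combines "the product of the eigenvalues is big" with "their ratio is not too high" in a single score (the parameter names and values here are illustrative).

```python
import numpy as np
from scipy import ndimage

def harris_response(I, sigma_d=1.0, sigma_i=2.0, k=0.05):
    """Harris response at every pixel.
    sigma_d: derivative-of-Gaussian scale; sigma_i: Gaussian window scale."""
    Ix = ndimage.gaussian_filter(I, sigma_d, order=(0, 1))
    Iy = ndimage.gaussian_filter(I, sigma_d, order=(1, 0))

    # entries of the second-moment matrix C, summed with a Gaussian window
    Sxx = ndimage.gaussian_filter(Ix * Ix, sigma_i)
    Syy = ndimage.gaussian_filter(Iy * Iy, sigma_i)
    Sxy = ndimage.gaussian_filter(Ix * Iy, sigma_i)

    det = Sxx * Syy - Sxy**2         # product of the eigenvalues
    trace = Sxx + Syy                # sum of the eigenvalues
    return det - k * trace**2

def harris_corners(I, threshold=1e-6):
    """Corner locations: local maxima of the response above a threshold."""
    R = harris_response(I)
    local_max = (R == ndimage.maximum_filter(R, size=5))
    return np.argwhere(local_max & (R > threshold))
```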


Example 1: Harris Corners
Originally developed as features for motion tracking
Greatly reduces the amount of computation (compared to tracking every pixel)
Translation and rotation invariant (but not scale invariant)

Harris Corner Detector
Not scale invariant
Slide credit: Trevor Darrell

Sample Question: Corners
The Harris corner detector is stable under some image transformations (features are considered stable if the same locations on an object are typically selected in the transformed image).
True or false: The Harris corner detector is stable under image blur.

Texture
We will look at two main questions:
1. How do we represent texture? (Texture analysis)
2. How do we generate new examples of a texture? (Texture synthesis)

Texture Synthesis
Why might we want to synthesize texture?
1. To fill holes in images (inpainting): art directors might want to remove telephone wires; restorers might want to remove scratches or marks. We need to find something to put in place of the pixels that were removed, so we synthesize regions of texture that fit in and look convincing.
2. To produce large quantities of texture for computer graphics: good textures make object models look more realistic.

Texture Synthesis
Figure credit: Szeliski

Texture Synthesis
Photo credit: Associated Press

Efros and Leung: Synthesizing One Pixel
(Infinite sample image → generated image)
Assuming a Markov property, what is the conditional probability distribution of p, given the neighbourhood window?
Instead of constructing a model, directly search the input image for all such neighbourhoods to produce a histogram for p
To synthesize p, just pick one match at random
Credit:

Efros and Leung: Really Synthesizing One Pixel
(Finite sample image → generated image)
However, since our sample image is finite, an exact neighbourhood match might not be present
Find the best match using SSD error (weighted by a Gaussian to emphasize local structure), and take all samples within some distance from that match
Credit:

Efros and Leung: Synthesizing Many Pixels
For multiple pixels, "grow" the texture in layers
In the case of hole-filling, start from the edges of the hole
For an interactive demo, see (written by Julieta Martinez, a previous CPSC 425 TA)
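A sketch of the single-pixel step described above (the array names and the error-threshold convention are mine, loosely following the Efros and Leung description; the surrounding "grow in layers" loop is omitted):

```python
import numpy as np

def synthesize_pixel(sample, neighbourhood, mask, err_threshold=0.1, rng=None):
    """Pick a value for the centre pixel of `neighbourhood` from `sample`.
    sample        : 2D greyscale texture to copy from
    neighbourhood : (2k+1) x (2k+1) window around the pixel being synthesized
    mask          : boolean array, True where the neighbourhood is already filled"""
    rng = rng or np.random.default_rng()
    k = neighbourhood.shape[0] // 2

    # Gaussian weighting to emphasize local structure, restricted to known pixels
    x, y = np.meshgrid(np.arange(-k, k + 1), np.arange(-k, k + 1))
    w = np.exp(-(x**2 + y**2) / (2 * (k / 3.0)**2)) * mask
    w /= w.sum()

    # weighted SSD between the neighbourhood and every window in the sample
    H, W = sample.shape
    errs, centres = [], []
    for r in range(H - 2 * k):
        for c in range(W - 2 * k):
            window = sample[r:r + 2 * k + 1, c:c + 2 * k + 1]
            errs.append(np.sum(w * (window - neighbourhood)**2))
            centres.append(sample[r + k, c + k])
    errs = np.array(errs)

    # keep all matches within a tolerance of the best error, pick one at random
    good = np.flatnonzero(errs <= errs.min() * (1 + err_threshold))
    return centres[rng.choice(good)]
```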

Efros and Leung: Randomness Parameter
Credit:

"Big Data" Meets Inpainting
Figure credit: Hays and Efros 2007

"Big Data" Meets Inpainting
Algorithm sketch (Hays and Efros 2007):
1. Create a short list of a few hundred "best matching" images based on global image statistics
2. Find patches in the short list that match the context surrounding the image region we want to fill
3. Blend the match into the original image
Purely data-driven; requires no manual labelling of images

The Goal of Texture Analysis
(Input image vs. true (infinite) texture / generated image → ANALYSIS → same or different?)
Compare textures and decide if they are made of the same "stuff".
Credit: Bill Freeman

Texture Segmentation
Question: Is texture a property of a point or a property of a region?
Answer: We need a region to have a texture, so there is a chicken-and-egg problem. We compromise: typically one uses a local window to estimate texture properties and assigns those texture properties as point properties of the window's centre row and column.
Texture segmentation can be done by detecting boundaries between regions of the same (or similar) texture.
Texture boundaries can be detected using standard edge detection techniques applied to the texture measures determined at each point.

Texture Representation
Observation: textures are made up of generic sub-elements, repeated over a region with similar statistical properties
Idea: find the sub-elements with filters, then represent each point in the image with a summary of the pattern of sub-elements in the local region

Texture Representation
Observation: textures are made up of generic sub-elements, repeated over a region with similar statistical properties
Idea: find the sub-elements with filters, then represent each point in the image with a summary of the pattern of sub-elements in the local region
Question: What filters should we use?
Answer: Human vision suggests spots and oriented edge filters at a variety of different orientations and scales

Texture Representation
Observation: textures are made up of generic sub-elements, repeated over a region with similar statistical properties
Idea: find the sub-elements with filters, then represent each point in the image with a summary of the pattern of sub-elements in the local region
Question: What filters should we use?
Answer: Human vision suggests spots and oriented edge filters at a variety of different orientations and scales
Question: How do we summarize?
Answer: Compute the mean or maximum of each filter response over the region; other statistics can also be useful

Texture Representation
Figure credit: Leung and Malik 2001

Spots and Bars (Fine Scale)
Forsyth & Ponce (1st ed.) Figures

Spots and Bars (Coarse Scale)
Forsyth & Ponce (1st ed.) Figures 9.3 & 9.5

Laplacian Pyramid
Building a Laplacian pyramid:
Create a Gaussian pyramid
Take the difference between one Gaussian pyramid level and the next (before subsampling)
Properties:
Also known as the difference-of-Gaussian (DOG) function, a close approximation to the Laplacian
It is a band-pass filter: each level represents a different band of spatial frequencies
Reconstructing the original image:
Reconstruct the Gaussian pyramid, starting at the top
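A sketch of that construction on top of the Gaussian pyramid shown earlier, assuming SciPy; each level stores "Gaussian level minus its smoothed version" (taken before subsampling), and the coarsest low-pass Gaussian level sits at the top:

```python
import numpy as np
from scipy import ndimage

def laplacian_pyramid(I, levels=4, sigma=1.0):
    """Band-pass (difference-of-Gaussian) levels plus a low-pass top level."""
    gauss = [I]
    for _ in range(levels - 1):
        smoothed = ndimage.gaussian_filter(gauss[-1], sigma)
        gauss.append(smoothed[::2, ::2])            # next Gaussian level

    laplacian = []
    for level in gauss[:-1]:
        # difference between a Gaussian level and its smoothed version,
        # taken before subsampling
        laplacian.append(level - ndimage.gaussian_filter(level, sigma))
    laplacian.append(gauss[-1])                     # low-pass residual on top
    return laplacian
```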

Oriented Pyramids (cont'd)
Forsyth & Ponce (1st ed.) Figure 9.13
Reprinted from "Shiftable Multiscale Transforms," by Simoncelli et al., IEEE Transactions on Information Theory, 1992, © 1992 IEEE

Final Texture Representation
Steps:
1. Form a Laplacian and oriented pyramid (or an equivalent set of responses to filters at different scales and orientations)
2. Square the output (makes values positive)
3. Average responses over a neighbourhood by blurring with a Gaussian
4. Take statistics of the responses: the mean of each filter output, and possibly the standard deviation of each filter output
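A compact sketch of steps 1-3 with a small spot-plus-oriented filter bank, assuming SciPy; the filter choices and scales here stand in for a full oriented pyramid.

```python
import numpy as np
from scipy import ndimage

def texture_features(I, sigmas=(1, 2, 4), pool_sigma=8):
    """Per-pixel texture descriptor: squared filter responses, locally averaged."""
    channels = []
    for s in sigmas:
        responses = [
            ndimage.gaussian_laplace(I, s),               # spot filter
            ndimage.gaussian_filter(I, s, order=(0, 1)),  # oriented edge, x
            ndimage.gaussian_filter(I, s, order=(1, 0)),  # oriented edge, y
        ]
        for r in responses:
            # square (make positive), then average over a neighbourhood
            channels.append(ndimage.gaussian_filter(r**2, pool_sigma))
    return np.stack(channels, axis=-1)   # H x W x (3 * len(sigmas))
```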

Sample Question: Texture
How does the top-most image in a Laplacian pyramid differ from the others?

Colour Matching Experiments: I
Show a split field to subjects. One side shows the light whose colour one wants to match; the other shows a weighted mixture of three primaries (fixed lights).
Forsyth & Ponce (2nd ed.) Figure 3.2

Colour Matching Experiments: II
Many colours can be represented as a positive weighted sum of A, B, C
Write M = a A + b B + c C, where the = sign should be read as "matches"
This is additive matching
It defines a colour description system: two people who agree on A, B, C need only supply (a, b, c)

Colour Matching Experiments: II (cont'd)
Some colours can't be matched this way
Instead, we must write M + a A = b B + c C, where, again, the = sign should be read as "matches"
This is subtractive matching
Interpret this as (-a, b, c)
Problem for designing displays: choose phosphors R, G, B so that positive linear combinations match a large set of colours

Linear Colour Spaces
A choice of primaries yields a linear colour space: the coordinates of a colour are given by the weights of the primaries used to match it
Choice of primaries is equivalent to choice of colour space
RGB: primaries are monochromatic energies at fixed wavelengths (given in nm) of red, green, and blue light
CIE XYZ: primaries are imaginary, but have other convenient properties. Colour coordinates are (X, Y, Z), where X is the amount of the X primary, etc.

Uniform Colour Spaces
McAdam ellipses demonstrate that differences in (x, y) are a poor guide to differences in perceived colour
A uniform colour space is one in which differences in coordinates are a good guide to differences in perceived colour (example: CIE LAB)

HSV Colour Space
A more natural description of colour for human interpretation
Hue: the attribute that describes a pure colour, e.g., red or blue
Saturation: a measure of the degree to which a pure colour is diluted by white light; pure spectrum colours are fully saturated
Value: intensity or brightness
Hue + saturation is also referred to as chromaticity.
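A tiny illustration of those three components using Python's standard-library colorsys (all values in [0, 1]):

```python
import colorsys

# a saturated red versus a washed-out red: same hue and value, lower saturation
print(colorsys.rgb_to_hsv(1.0, 0.0, 0.0))   # (0.0, 1.0, 1.0)
print(colorsys.rgb_to_hsv(1.0, 0.5, 0.5))   # (0.0, 0.5, 1.0)
```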

Colour Constancy
Image colour depends on both light colour and surface colour
Colour constancy: determining hue and saturation under different colours of lighting
It is surprisingly difficult to predict what colours a human will perceive in a complex scene; it depends on context and other scene information
Humans can usually perceive the colour a surface would have under white light, rather than the colour of the reflected light (i.e., they separate surface colour from measured colour)

Summary Table
Summary of what we've seen so far:

Representation | Result is...           | Approach                   | Technique
intensity      | dense (2D)             | template matching          | (normalized) correlation, SSD
edge           | relatively sparse (1D) | derivatives                | ∇²G, Canny
corner         | sparse (0D)            | locally distinct features  | Harris

Questions?
