A Study on Blur Kernel Estimation from Blurred and Noisy Image Pairs

Mushfiqur Rouf
Department of Computer Science
University of British Columbia

(This is a CPSC 548 Directed Graduate Studies course report. This study has been carried out under the supervision of Dr. Wolfgang Heidrich.)

Abstract

The course work can be split into two parts. In the implementation part, the kernel estimation process described in [1] has been studied; the algorithm has been tested against synthetic and real data, and its performance has been discussed. In the reading part, a number of papers have been covered from the list of papers discussed in the graduate-level course CPSC 505 Image Understanding I: Image Analysis in Winter Term I.

1. Introduction

Image deconvolution is a widely studied yet largely unsolved problem. This study has been carried out on Landweber's algorithm, an iterative variation of the Tikhonov regularization technique, as described in [1]. An in-depth analysis has been performed to understand the behavior of this iterative method when combined with hysteresis thresholding.

A Walk through the Algorithm

The algorithm in [1] can be described as having two steps:
1. Kernel estimation using the Landweber method with hysteresis thresholding;
2. Deconvolution using gain-controlled Richardson-Lucy, followed by a detail-adding step.

In this study, the first step has been investigated. The study started with a simple implementation of the algorithm as described in the paper; the implementation was then modified in order to study its behavior under different circumstances.

Landweber Method

The Landweber method invokes the Tikhonov regularization algorithm [2] in an iterative fashion. The convolution operation can be described as a matrix-vector product with the blur kernel k,

    A k = b

Here, A is a matrix in which each row contains the neighborhood of a pixel in the original image, and b is a column vector containing the corresponding pixels of the blurred image. In the implementation, pixel values lie between 0 and 1 inclusive, and an 8-bit depth has been assumed.

The iterative Tikhonov regularization can be expressed as an iterative matrix inversion algorithm that starts with an initial guess and converges to k = A^(-1) b:

    k^(0) = δ
    k^(n+1) = k^(n) + β (A^T b - (A^T A + λ² I) k^(n))

Here, δ is the Dirac delta function, β is the convergence parameter with a default value of 1, and λ is the regularization parameter with a default value of 5 [1]. However, since A is unknown (it is built from the unknown sharp image), the noisy image was used in its place. It has been shown in [1] that the noisy image can be a reasonable estimate for computing k effectively.
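For concreteness, the following is a minimal NumPy sketch of the regularized Landweber iteration above, building A from kernel-sized neighborhoods of the noisy image. It is an illustration only, not the Matlab implementation used in this study: the helper name build_neighborhood_matrix, the iteration count, and the boundary handling are assumptions.

```python
import numpy as np

def build_neighborhood_matrix(image, ksize):
    # Hypothetical helper: each row holds the ksize x ksize neighborhood of one
    # pixel, reversed so that A @ k corresponds to convolution with the kernel.
    pad = ksize // 2
    padded = np.pad(image, pad, mode="edge")   # boundary handling is an assumption
    h, w = image.shape
    rows = []
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + ksize, x:x + ksize]
            rows.append(patch.ravel()[::-1])
    return np.asarray(rows)

def estimate_kernel(noisy, blurred, ksize=9, beta=1.0, lam=5.0, iters=200):
    # Regularized Landweber iteration:
    #   k <- k + beta * (A^T b - (A^T A + lambda^2 I) k)
    # with the noisy image standing in for the unknown sharp image, as in [1].
    A = build_neighborhood_matrix(noisy, ksize)
    b = blurred.ravel()
    AtA = A.T @ A + (lam ** 2) * np.eye(ksize * ksize)
    Atb = A.T @ b
    k = np.zeros(ksize * ksize)
    k[(ksize * ksize) // 2] = 1.0              # initial guess: Dirac delta
    for _ in range(iters):
        k = k + beta * (Atb - AtA @ k)
    return k.reshape(ksize, ksize)
```

With the default β = 1 this plain loop does not necessarily settle; the oscillations and the adaptive choice of β observed in this study are discussed in section 2.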
Hysteresis Thresholding

Regularization is performed at several scale levels. The original image is scaled down by a factor of 1/√2 at each level, stopping at a size of 9x9. Thresholded versions of the kernels predicted at lower resolutions are used as masks for the higher levels in order to suppress noise. In the implementation, cubic interpolation was used to scale down the image for hysteresis thresholding.

Contribution

1. This report gives a study of the parameters described in the kernel estimation portion of [1].
2. This report investigates the effectiveness of the Landweber method and presents the weaknesses of this algorithm.
3. This report gives the results of running the algorithm on real datasets and describes a possible reason why it fails.

Organization of the Report

The remainder of the report is organized as follows. In section 2, the experiments and observations are described. Results are summarized and the effectiveness of the kernel estimation described in [1] is evaluated in section 3. Section 4 gives the conclusions of this report and points to some possible future research directions. Section 7 gives a list of the papers read for the reading part.

2. Observations

The kernel estimation algorithm has been implemented in Matlab and tested with both synthetic and real datasets.

Figure 1: The oscillation behavior. In (a), β is set to 1 and kept unchanged; in (b), β is set to a small value and kept unchanged; (a) and (b) show that the oscillation occurs even for a small value of β. In (c), β is initialized and then reduced by 5% when necessary, showing that convergence can be achieved with a varying β. In (d), the favorable initial value of β found in this study results in a faster convergence. Image 5 (Figure 4(c)) and kernel 6 (Figure 4(a)) were used to produce these results.

Synthetic data have been created by applying a number of test kernels to a set of images collected from the internet. All computations are done on grayscale versions.

Initial Implementation

The initial implementation uses the default parameter values mentioned in [1]. The investigation started with synthetic data; real data were input to the system once the system was stable for all synthetic cases.

Speed up

The most time consuming step was found to be computing A^T A. A speed up was designed using the fact that most of A is redundant, since its rows represent the neighborhoods of adjacent pixels.

Oscillatory Behavior

The initial implementation failed to predict the kernel in all cases. The Tikhonov iterations seemed to toggle between two different states (Figure 1), neither of them being the solution. In most cases the two states were vertically, horizontally or diagonally mirrored with respect to one another. In some cases a diverging oscillation seemed to build up gradually, each iteration picking a state further from the solution than the previous one. In other cases, the algorithm would come very close to a stable solution and then drift away into oscillation. These phenomena strongly suggested that the algorithm was trying to converge to a solution but was being prevented by a wrong choice of parameter value. To stabilize the algorithm, the parameters needed to be tuned: λ was tried first, then β.

Role of λ and the Condition Number

Blindly trying different λ values ranging from 0 to 100 did not yield any better result. The Landweber method is essentially a matrix inversion algorithm; therefore a high condition number reduces the chance of getting a meaningful result. Initially, the reason behind the oscillatory behavior was suspected to be poor conditioning of A. For a number of datasets, it was found that λ would have to be in the range of thousands to bring the condition number down to the range of thousands. Unfortunately, such a high value of λ also means that a heavy diagonal is added, essentially rendering the information in A useless. It was understood that lower λ values would leave a high condition number and thereby introduce computational errors, but these errors were not responsible for the oscillations that were observed.

Keeping the negative values in k

A blur kernel must have nonnegative values at each pixel, and the sum of all values must equal one. However, the Tikhonov regularization iterations were producing negative values. The initial solution was to set negative values to zero, thus clipping the estimated kernel at zero. However, it was observed that the discarded negative values were becoming positive in the next iteration. Since this might have been causing the oscillations, the negative values were kept for the first few iterations, and clipping at zero began from the fifth iteration.
The motivation was that the initial guess (the delta function) is very far from the actual solution, and losing information at such early steps may lead to a wrong solution. After the first few steps the state is sufficiently close to the solution, and clipping can resume.
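The clipping schedule above can be summarized in a few lines. This is a hedged sketch, not the exact code used in the study: the function name is hypothetical, and renormalizing to unit sum is an assumption, since the report only states the sum-to-one constraint.

```python
import numpy as np

def project_kernel(k, iteration, clip_start=5):
    # Keep negative values during the first few iterations (as described above);
    # clip at zero from iteration `clip_start` onward.
    if iteration >= clip_start:
        k = np.clip(k, 0.0, None)
        total = k.sum()
        if total > 0:
            k = k / total   # sum-to-one constraint; renormalizing here is an assumption
    return k
```

In the Landweber loop sketched earlier, this projection would be applied to k after each update.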

2.5. Finding the right β

The oscillation seemed to be happening around the solution, which means the convergence rate needed to be slowed down so that the algorithm could converge. By lowering the value of β, the algorithm could be made to converge to a solution; but a low β means slow convergence. It was also found that the appropriate value of β depends on the image and the blur kernel. Consequently, an optimum β needed to be found automatically at run time.

Finding optimum β

The algorithm starts with a high initial β. Oscillations can be detected by analyzing three consecutive kernel estimates: if dist(k_n, k_{n-1}) > dist(k_n, k_{n-2}), there is an oscillation. The 2-norm was used as the distance metric. In case of an oscillation, the value of β is reduced by 5%, which slows down the process and allows the algorithm to converge to a solution. Similarly, right at the beginning, β is increased for a faster convergence. This way the algorithm can start with a high β, execute a rapid search of the search space, and slow down as it gets near the solution (Figure 1).

Effect of edges

Interestingly, strong edges in the image have a strong effect on the kernel estimation. The kernel estimates consistently show distortions parallel to the strong edges in the original image (Figure 4). This bias can be explained by the fact that edges produce a set of points with the same neighborhood. The Landweber method is a least-squares optimization approach, so a set of pixels with the same neighborhood gives one aspect of the solution more weight in the minimization: the squared error can be reduced greatly if these clusters of pixels are satisfied.

Levin Filter

The Levin filter was tried on the synthetic data, with different sizes and rotations of the filter. The algorithm successfully produced a good estimate of the kernel (Figure 3). The different sizes and orientations were generated using cubic interpolation.

Convergence

In all the experiments, the kernel estimation converged to a solution. There is no mathematical proof that the algorithm converges to the global optimum, so a stochastic search was implemented on top of the Tikhonov iteration.

Introducing Randomness

After the Tikhonov iteration converged, white noise was added to the predicted kernel and the Landweber method was run again (Figure 2), with this kernel as the initial guess instead of the Dirac delta function. The experiments show that the estimated kernel is indeed improved by a small proportion in a small number of cases, and four such stochastic jumps were found to produce a satisfactory result in all cases. In most cases the stochastic jump reached the same solution.

Figure 2: White noise was added three times, each time after reaching convergence. The algorithm still converges to the same solution.

Figure 3: Synthetic application of the Levin filter. (a) shows the states in a series of Tikhonov iterations, (b) shows the stable condition after applying the stochastic jump three times, (c) shows the actual Levin filter, (d) shows the rotated version used to blur the image, and (e) shows the estimated PSF. Test image 1 (Figure 4(c)) was used.
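As an illustration of the oscillation test and the 5% reduction rule described under "Finding optimum β" above, here is a minimal sketch. The function name is hypothetical, and the growth factor used before any oscillation is detected is an assumption, since the report only says that β was increased at the beginning.

```python
import numpy as np

def update_beta(beta, k_n, k_n1, k_n2, shrink=0.95, grow=1.05):
    # Oscillation test from the report: dist(k_n, k_{n-1}) > dist(k_n, k_{n-2}),
    # with the 2-norm as the distance metric. On oscillation, beta is reduced by
    # 5%; the 5% growth factor used otherwise is an assumption for illustration.
    oscillating = np.linalg.norm(k_n - k_n1) > np.linalg.norm(k_n - k_n2)
    return beta * shrink if oscillating else beta * grow
```

Inside the Landweber loop, the three most recent kernel estimates would be kept and β updated after every iteration.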

Figure 4: The effect of edges on the deconvolution algorithm. (a) The 14 blur kernels that were used to synthetically blur the test images. (b) The i-th kernel in each block is predicted from the noisy and blurred image pair produced by applying the i-th kernel from the top left block (a) to the corresponding test image on the right. (c) The test images. For better viewing, small kernels are scaled up and kernel pixel values are scaled so that for each kernel the highest pixel value maps to 1.

Figure 5: Real data analysis. (a) and (b) show portions of the blurred and noisy image pair. The non-smooth artifacts in (a) confirm that the blur kernel is not Gaussian; however, the algorithm generated a Gaussian PSF estimate (c).

Real Data

Real data have been collected from the HDR Defocus project at PSM (path to the data files: /ubc/cs/research/psmraid1/hdrdefocus/ /secondscene/raw/). These data had originally been collected using the Levin filter. For this study, 400x400 pixel crops from the center of the images were used.

For the real data, the algorithm failed miserably to estimate the kernel (Figure 5). In all cases the algorithm produced only Gaussian kernels, although it could find the size and a rough shape of the kernel: when the Levin filter was slightly stretched and rotated, the predicted Gaussian showed the same transformation. The question was why the real data failed although the synthetic data did not. To understand this, different circumstances were synthesized in an attempt to make the synthetic case fail as well. A number of approaches were taken; the results obtained are discussed here.

Adding Noise

At first, 1% noise was added to the synthetically blurred image to see if the algorithm would now fail. But the algorithm continued to find a reasonable kernel estimate.
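For reference, the noise experiment above can be reproduced in a few lines. This sketch assumes additive Gaussian noise at a 1% level relative to the [0, 1] intensity range (the report does not specify the noise model) and uses scipy.signal for the convolution; the function name is hypothetical.

```python
import numpy as np
from scipy.signal import convolve2d

def make_blurred_pair(image, kernel, noise_level=0.01, rng=None):
    # Synthetically blur a grayscale image (values in [0, 1]) and add noise.
    # The 1% noise level matches the experiment above; modeling it as additive
    # Gaussian noise is an assumption.
    rng = np.random.default_rng() if rng is None else rng
    blurred = convolve2d(image, kernel, mode="same", boundary="symm")
    noisy_blurred = blurred + noise_level * rng.standard_normal(image.shape)
    return np.clip(noisy_blurred, 0.0, 1.0)
```

Quantizing to 8 bits and adding a small fixed offset, as investigated in the next subsections, amounts to one extra line each (for example, np.round(blurred * 255) / 255 and blurred + 0.001).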

Quantization

At this point, it was found that the implementation was not quantizing the blurred image after computing the convolution. As a result, the synthetically blurred image was conveying precision that the real data could not, due to the quantization limit. However, even after quantizing the blurred image to 256 steps (8 bits of information per pixel), the implementation continued to converge to kernel estimates similar to those obtained before.

Adding a Fixed Offset

When the real data were examined, a small offset was found in the minimum, maximum and average pixel values. When such a small 0.1% offset was added to the blurred pixel values, the algorithm failed to predict a reasonable kernel; instead it produced only a Gaussian. In the real data, the small offset was not the same in the minimum, average and maximum pixel values: apparently the fixed offset was accompanied by a variable scaling factor at different pixel values and locations. This may happen due to the changing black level of the image taken by a camera; the black level, in turn, depends on the temperature of the camera.

3. Results

The algorithm works well with synthetic data; it can make a good estimate of any kernel and has been found to withstand 1% noise. However, it fails on real data, which was found to be caused by inconsistencies in corresponding pixel values.

4. Conclusion and Future Work

The algorithm has a tendency to produce Gaussian kernel estimates. It may be difficult to control the black level of a camera while taking the blurred and noisy image pairs, since the blurred image is taken using a long exposure and the noisy image using a short exposure.

Another iterative approach may be developed. The idea is to apply some preprocessing to the blurred and noisy images so that the offset and scaling effects of the black level can be reduced. In addition, if the filter is known, a first kernel estimation can give a reasonable estimate of the size and shape of the kernel. This information may be used to reduce these effects before running the whole kernel estimation algorithm again, now producing a better result.

5. Acknowledgement

I must thank Dr. Wolfgang Heidrich for giving me the opportunity to explore this challenging and interesting research area. My special thanks to Matthew Trentacoste for the initial Landweber code he provided, for giving a brief overview of how to program using Matlab, and for coming up with solutions to my problems related to this study on a number of occasions.

6. References

[1] L. Yuan, J. Sun, L. Quan and H.-Y. Shum, Image Deblurring with Blurred/Noisy Image Pairs, ACM SIGGRAPH, 2007.
[2] A. Neumaier, Solving ill-conditioned and singular linear systems: A tutorial on regularization, SIAM Review, vol. 40, 1998.

7. List of Papers Read

The papers starting at [2] are selected from the list of papers covered in the graduate-level course CPSC 505 Image Understanding I: Image Analysis in Winter Term I. A complete list of papers can be found on the course web page. Papers have been read from sampling theory, edge detection, optical flow and visual tracking.

[1] Q. Shan, W. Xiong and J. Jia, Rotational Motion Deblurring of a Rigid Object from a Single Image, in Proceedings of the IEEE 11th International Conference on Computer Vision, pp. 1-8.
[2] M. Unser, Sampling - 50 years after Shannon, Proceedings of the IEEE, vol. 88, no. 4.
[3] C. E. Shannon, Communication in the presence of noise, Proc. IRE, vol. 37.
[4] D. Marr and E. Hildreth, Theory of edge detection, Proc. R. Soc. Lond. B, vol. 207.
[5] J. F. Canny, A computational approach to edge detection, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 6.
[6] R. Deriche, Using Canny's criteria to derive a recursively implemented optimal edge detector, International Journal of Computer Vision, vol. 1.
[7] V. Torre and T. A. Poggio, On edge detection, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 2.
[8] L. Zhang, B. Curless, A. Hertzmann, and S. M. Seitz, Shape and motion under varying illumination: unifying structure from motion, photometric stereo, and multi-view stereo, in Proc. 9th International Conference on Computer Vision.
[9] J. L. Barron, D. J. Fleet, and S. S. Beauchemin, Performance of optical flow techniques, International Journal of Computer Vision, vol. 12, no. 1, 1994.

[10] R. J. Woodham, Multiple light source optical flow, in Proc. 3rd International Conference on Computer Vision (Osaka, Japan).
[11] C. Stauffer and E. Grimson, Learning patterns of activity using real-time tracking, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8.
[12] A. D. Jepson, D. J. Fleet, and T. F. El-Maraghi, Robust online appearance models for visual tracking, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 10.
[13] D. G. Lowe, Distinctive image features from scale-invariant keypoints, International Journal of Computer Vision, vol. 60, no. 2.
[14] K. Okuma, A. Taleghani, N. de Freitas, J. J. Little, and D. G. Lowe, A boosted particle filter: Multitarget detection and tracking, in Proc. 8th European Conference on Computer Vision, vol. 1, 2004.
