Blur and Contrast Invariant Fast Stereo Matching


Matteo Pedone and Janne Heikkilä
Department of Electrical and Information Engineering, University of Oulu, Finland

J. Blanc-Talon et al. (Eds.): ACIVS 2008, LNCS 5259. © Springer-Verlag Berlin Heidelberg 2008

Abstract. We propose a novel approach for estimating a depth map from a pair of rectified stereo images degraded by blur and contrast change. At each location in image space, information is encoded with a new class of descriptors that are invariant to convolution with a centrally symmetric PSF and to variations in contrast. The descriptors are based on local phase quantization; they can be computed very efficiently and encoded in a limited number of bits. A simple measure for comparing two encoded templates is also introduced. Results show that the proposed method represents a cheap but still effective way of estimating disparity maps from degraded images without making restrictive assumptions; these advantages make it attractive for practical applications.

1 Introduction

Stereo matching is a widely researched topic and still cannot be considered a solved problem. It has been addressed in a multitude of different ways, and there is currently still a large gap in terms of accuracy between the state-of-the-art methods, which are usually computationally expensive, and the faster ones that are more suitable for practical applications. Good overviews, including analyses and comparisons among different methods, are [2,1]. Presently, more interest is being devoted to algorithms that work robustly under non-ideal conditions, due for example to the presence of highlights or transparent objects, exposure or contrast differences, and other common optical degradations [6,7]. Our work focuses on performing stereo matching with a pair of images degraded by different amounts of blur and contrast change. Despite being an interesting and non-trivial task, little work has been done in this area at the time of writing, and most of the current approaches rely on extra information gained by estimating depth from (de)focus and integrating it with conventional stereo correspondence methods. These methods are either computationally expensive [8] or work under very restrictive assumptions [9], and they do not consider the influence of other radiometric changes besides out-of-focus blur. Concerning the type of scheme for estimating the disparity map, we opted for the same strategy used by conventional area-based algorithms, since local methods are notably faster than global methods. In this sense, during the cost-aggregation step, the problem is essentially equivalent to that of matching templates in degraded images. The usual way of dealing with the issue is to resort to blur and contrast invariant descriptors.

Flusser et al. [3] proposed several descriptors obtained from specific combinations of higher-order central moments; they are invariant to a wide range of typical geometric and radiometric degradations. Other methods like [5] are derived directly from properties of the Fourier transform, and they have been used successfully to perform blur-invariant phase correlation. Van de Weijer et al. propose color angles that are robust to blur, contrast changes and illuminant color [4], but they are apparently efficient mainly for building reliable histograms for image retrieval. Considering that such invariant descriptors are rather sensitive to noise and less efficient to calculate, we preferred to develop a new class of blur and contrast invariant phase-based descriptors. As we will show, these local descriptors can be computed very quickly, which we consider an important requirement.

2 Blur Robust Stereo Matching

In this section we present blur and contrast invariant descriptors based on quantized local phase. We also introduce a measure of similarity between two encoded templates, discussing its limits, and describe the approach used for the estimation of the disparity map.

2.1 Phase-Based Local Descriptors

Under the assumption that image noise is negligible and the blur point-spread function (PSF) is centrally symmetric, it is fairly easy to show [3,5] that, considering an arbitrary phase value $\Phi_A(u,v)$ in the spectral domain, the term $2k\Phi_A(u,v)$ (for any $k \ge 1$) is convolution and contrast invariant (any variation in contrast affects only the magnitude spectrum). If we consider a discretized $N \times N$ image block A, a descriptor for A is naturally given by

$$G_k(A) = \{\, 2k\,\Phi_A(u,v) \mid 0 \le u, v \le N-1 \,\}. \qquad (1)$$

However, further considerations can be made. Firstly, the stereo pair is assumed to be rectified, so corresponding pixels in the left and right images will appear horizontally displaced by an amount that is inversely proportional to the z-depth associated with that point. This implies that when two templates of width W contain pixels of the same depth, by the shift theorem their phase spectra will be (in this case, approximately) related according to the following equation:

$$\Phi_L(u,v) - \Phi_R(u,v) \approx \frac{2\pi u}{W}\, t \qquad (2)$$

where t is the translation displacement in pixels. This observation is used by many phase-based stereo methods, which try to estimate the gradient of the phase difference between two portions of the left and right images. Moreover, it is apparent that it is not necessary to sample the whole set $G_k(A)$: the left term in (2) is always 0 when u = 0 and remains unchanged when v varies; in addition, since the image function A is always real-valued, the resulting spectrum is antisymmetric.
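As a quick numerical illustration of this invariance (our own sketch, not code from the paper; all names are ours), one can blur a random block with a centrally symmetric PSF by circular convolution, apply a contrast scaling, and verify that the doubled phase $2\Phi$ is unchanged (modulo $2\pi$) wherever the transform of the PSF does not vanish:

```python
# Minimal sketch (assumed setup, not from the paper): blur/contrast invariance of 2*Phi.
import numpy as np

rng = np.random.default_rng(0)
N = 32
A = rng.random((N, N))                      # arbitrary N x N image block

# Centrally symmetric PSF on the periodic grid: h[x, y] = h[-x, -y] (mod N),
# so its DFT is real and the phase of the blurred block differs from that of A
# only by multiples of pi.
h = np.zeros((N, N))
for dx in range(-3, 4):
    for dy in range(-3, 4):
        h[dx % N, dy % N] = np.exp(-(dx**2 + dy**2) / 2.0)
h /= h.sum()

FA = np.fft.fft2(A)
H = np.fft.fft2(h)                          # real-valued up to round-off
B = 1.7 * np.real(np.fft.ifft2(FA * H))     # circularly blurred block + contrast scaling

FB = np.fft.fft2(B)
mask = np.abs(H) > 1e-3                     # ignore frequencies the PSF suppresses
diff = np.angle(np.exp(2j * (np.angle(FB) - np.angle(FA))))   # wrapped difference of 2*Phi
print(np.abs(diff[mask]).max())             # close to 0: 2*Phi is blur and contrast invariant
```

Note that a positive contrast scaling leaves every phase untouched, while an additive brightness offset would only affect the DC component.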

Fig. 1. Components of the spectrum encoded in the descriptor for r = 4, s = 2 and for r = s = 1.

Furthermore, justified by the work of Curtis et al. [11], who demonstrated the high informativeness of the sign of the phase, we consider b-bit discretized phase values, and we propose the following local descriptor:

$$D_{r,s}^{k,b}(A) = \left\{ \left\lfloor \frac{2^{b-1}}{\pi}\,\arg\!\left(F_A(u,v)^{k}\right) \right\rfloor \;\middle|\; u \in [0,r],\ v \in [-s,s],\ u + \operatorname{sgn}(v-1) \ge 0 \right\} \qquad (3)$$

where $b, r, s \ge 0$, $k \in \{1, 2\}$, $F_A(u,v)$ returns the spectral component of the image A at (u,v) (Fig. 1), and the arg function returns values in the range $[-\pi, \pi)$. Note that, since $d = |D_{r,s}^{k,b}(A)| = r(2s+1)+s$, a descriptor can be encoded using bd bits, and that for k = 1 the values of the descriptor are not necessarily blur invariant; however, as we will see, this particular case may turn out to be convenient in some circumstances. It is also worth mentioning that all the local descriptors of the whole image can be computed efficiently with d convolutions with $L \times L$-sized 2-D filters, where L is the size in pixels of the neighborhood to be described.

2.2 Similarity Measure

Once a rectangular portion of an image can be described locally with the proposed method, there is still the need for a fast and efficient way to compare the encoded templates and detect the right matches. Let us denote by $d_A(u,v)$ the element of $D_{r,s}^{k,b}(A)$ evaluated at (u,v), and let us define the function

$$f(x) = \begin{cases} x & \text{if } x \le M \\ 2M - x & \text{if } x > M \end{cases} \qquad (4)$$

Considering that the values of the descriptors are essentially phase angles, using (2) with $M = 2^{b-1}$ we have

$$k\,\frac{2\pi u}{L}\,t + 2\pi\Delta \;\propto\; f\big(\,\lvert d_{\mathrm{Left}}(u,v) - d_{\mathrm{Right}}(u,v)\rvert\,\big) \qquad (5)$$

where the term $2\pi\Delta$ accounts for the phase wrap.
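To make the construction in (3) concrete, the following is a minimal sketch of how such a descriptor could be computed for a single image block (our own illustration, not the authors' implementation; the function and parameter names are hypothetical):

```python
# Illustrative sketch of the descriptor in Eq. (3); all names are ours.
import numpy as np

def phase_descriptor(block, r=1, s=1, k=1, b=2):
    """Quantized-phase descriptor of an image block (cf. Eq. (3))."""
    F = np.fft.fft2(block)
    rows, cols = F.shape
    values = []
    for u in range(0, r + 1):
        for v in range(-s, s + 1):
            if u + np.sign(v - 1) < 0:          # skip the DC and redundant components
                continue
            phi = np.angle(F[u % rows, v % cols] ** k)                   # in [-pi, pi)
            values.append(int(np.floor((2 ** (b - 1) / np.pi) * phi)))   # b-bit code
    return np.array(values)                     # d = r*(2s+1) + s values

patch = np.random.default_rng(1).random((16, 16))
print(phase_descriptor(patch))                  # 4 values for r = s = 1 (cf. Fig. 1)
```

For r = s = 1 this samples the four lowest non-redundant frequencies shown in Fig. 1, giving d = 4 codes of b bits each.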

Ignoring the wrapping problem, we introduce the following similarity measure between two templates A and B:

$$m(A,B) = \sum_{j\,:\ d_A(j)\in D_{r,s}^{k,b}(A),\ d_B(j)\in D_{r,s}^{k,b}(B)} f\big(\,\lvert d_A(j) - d_B(j)\rvert\,\big) \qquad (6)$$

It is worth noticing that for $1 \le b \le 2$, Equation (6) reduces to

$$m(A,B) = H\big(D_{r,s}^{k,b}(A),\, D_{r,s}^{k,b}(B)\big) \qquad (7)$$

where H is the Hamming distance between two strings of bits. However, when any phase unwrapping is avoided, several issues arise. The resulting value in (5) wraps for the first time when the original phase exceeds π, specifically at

$$u = \frac{L}{2tk}. \qquad (8)$$

This suggests that in order to increase the reliability of (5), k should be as small as possible and L large. This creates a problematic multiple trade-off among the reliability of the similarity measure, the invariance to blur, and the accuracy of the disparity values, because of the well-known foreground-fattening effect that appears when L increases [2]. However, in order to maximize the discriminative power, we set k = 1, observing that the phase values of an image convolved with a centrally symmetric PSF are the same as those of the original image as long as the magnitude of the frequency spectrum is greater than zero. This is always true for a Gaussian kernel; nevertheless, we model the PSF with a pillbox kernel of radius R, whose discrete-time Fourier transform is a two-dimensional periodic sinc function. It is possible to prove by basic calculus that the first zero of this function (which corresponds to the radius of the main lobe of the periodic sinc) is located at

$$u = \frac{L}{2R}. \qquad (9)$$

A totally analogous discussion could be carried out for the PSF of linear motion blur. In our context, Equation (9) tells us that the smaller the blur radius is (and, eventually, the larger L is), the more values in $D_{r,s}^{1,b}$ can be considered blur invariant. Another immediate consequence is that $L \ge 2R$ must be satisfied in order to have the minimum amount of useful values in $D_{r,s}^{1,b}$.

2.3 Disparity Map Estimation

Equation (8) tells us that, even in ideal circumstances, the proposed similarity measure can correctly detect only displacements t such that $t \le L/2$. Moreover, the behavior of m(A,B) becomes undefined when comparing totally different templates, and this is likely to generate false matches in the same scanline. Based on these two observations, we reinforce the template-matching process by adopting the common strategy of shiftable windows [2]; in particular, at each spatial location we use five $L^2$-sized rectangular windows (in the encoded images) in which the pixel of interest anchors the windows respectively to the center, top, bottom, left and right. The similarities of corresponding windows between the left and right images are computed, and the minimum cost of the five comparisons is chosen. Finally, we use a simple one-pass dynamic-programming scanline optimization to assign disparity values while respecting the ordering constraint.
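A minimal sketch of the wrapped comparison in (4) and (6) could look as follows (again our own illustration with hypothetical names); the shiftable-window aggregation and the scanline optimization described above are omitted:

```python
# Illustrative sketch of the similarity measure (6) built on the wrap function (4).
import numpy as np

def wrapped_diff(x, M):
    """f(x) from Eq. (4): circular distance between b-bit phase codes, with M = 2^(b-1)."""
    x = np.abs(x)
    return np.where(x <= M, x, 2 * M - x)

def template_cost(dA, dB, b=2):
    """m(A, B) from Eq. (6): sum of wrapped differences over corresponding elements."""
    M = 2 ** (b - 1)
    return int(np.sum(wrapped_diff(dA.astype(int) - dB.astype(int), M)))

# Toy usage with two descriptors of d = 4 quantized phase values (b = 2).
rng = np.random.default_rng(2)
dA = rng.integers(-2, 2, size=4)
dB = rng.integers(-2, 2, size=4)
print(template_cost(dA, dB))                # lower cost means more similar templates
```

In the full algorithm this cost would be evaluated for every candidate disparity and for the five shiftable windows, before the scanline optimization assigns the final disparities.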

3 Results

We tested the proposed algorithm against a conventional stereo method. For a fair comparison, we used the same scheme for disparity-map estimation but replaced (7) with a sum-of-absolute-differences (SAD) measure computed over all three color channels.

Fig. 2. Performance comparison between the proposed and SAD measures for the Cones stereo pair (left column) and Tsukuba (right column); the error (% of bad pixels) is plotted against σ.

Fig. 3. Performance for the Cones stereo pair (left column) and Tsukuba (right column) as a function of the number of bits used (b), for σ = 0.25, 1 and 4.

Fig. 4. Some of the degraded images used respectively in the three experiments for the Tsukuba and Cones stereo pairs.

A normalized version of SAD was used to reduce the sensitivity to contrast changes when necessary. Two stereo pairs were considered (Figure 4), and for each of them one image was blurred with a different PSF at every run. Three different experiments were performed. In the first, robustness against motion blur was tested. In the second, Gaussian blur was considered, while in the last experiment different areas of the image were degraded by contrast change and by motion and out-of-focus blur (with a pillbox PSF) alternately. Some of the degraded images that were used are shown in Figure 4. During the aggregation phase, we used support windows of the same size as the PSF. The descriptors used for the three tests were $D_{2,1}^{2,2}$ for the first and $D_{2,1}^{1,2}$ for the remaining two; this way each neighborhood could be encoded in only 16 bits. The error measure used is the one proposed in [1], which is essentially the percentage of bad matching pixels in the final disparity map, where a bad disparity (measured in pixels) is assumed to be one that differs from the ground truth by more than 1.
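As a small illustration (our own sketch; the names are hypothetical), this error measure amounts to:

```python
# Illustrative sketch of the bad-pixel error measure of [1].
import numpy as np

def bad_pixel_percentage(disparity, ground_truth, threshold=1.0):
    """Percentage of pixels whose disparity differs from ground truth by more than `threshold`."""
    bad = np.abs(disparity.astype(float) - ground_truth.astype(float)) > threshold
    return 100.0 * bad.mean()

d_est = np.array([[10, 11], [12, 15]])
d_gt  = np.array([[10, 12], [12, 12]])
print(bad_pixel_percentage(d_est, d_gt))    # 25.0: one of the four pixels is off by more than 1
```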

Fig. 5. Resulting disparities from the third experiment with blur factor set to 11, obtained using SAD (left column) and the proposed method (middle column). Ground-truth disparities (right column).

Results are illustrated in Figure 2, and some of the computed depth maps are shown in Figure 5 for a visual comparison. In all the cases considered the proposed method performed relatively better, although significant improvements in quality occurred mostly for larger amounts of blur, when our method produced in many cases error percentages between 5 and 9 percent better than SAD. We also tested the sensitivity to the number of bits used for phase quantization. In particular, three different amounts of Gaussian blur were applied to the two stereo pairs considered, and the accuracy of the final disparities was computed by letting the parameter b vary (Figure 3). It is interesting to notice that the percentage of bad pixels in the final depth maps is fairly constant, which justifies the use of the smallest values of b.

4 Conclusion and Future Work

We described a novel method to compute disparity maps from stereo pairs degraded by centrally symmetric blur and contrast change. The algorithm runs fast even with a naive implementation. Each neighborhood of the image can be described efficiently with a limited number of bits, and the convolutions necessary to compute the local descriptors can easily be performed on a GPU, as can the whole aggregation phase, opening the concrete possibility of a real-time implementation. The method proved to be relatively robust to the considered types of degradation in comparison to conventional fast approaches; however, it should eventually be integrated into a multiscale approach in order to handle the cases in which the amount of blur is unknown. The similarity measure presently used has the limitations that have been discussed; we believe that further improvements in this direction could yield significant results.

References

1. Scharstein, D., Szeliski, R.: A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. International Journal of Computer Vision 47(1-3) (2002)
2. Hirschmüller, H., Scharstein, D.: Evaluation of cost functions for stereo matching. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, Minnesota, USA (2007)
3. Flusser, J., Suk, T.: Degraded image analysis: an invariant approach. IEEE Transactions on Pattern Analysis and Machine Intelligence (1998)
4. van de Weijer, J., Schmid, C.: Blur robust and color constant image description. In: Proceedings of ICIP, Atlanta, USA (2006)
5. Ojansivu, V., Heikkilä, J.: Image registration using blur invariant phase correlation. IEEE Signal Processing Letters 14(7) (2007)
6. Ogale, A.S., Aloimonos, Y.: Robust contrast invariant stereo correspondence. In: Proc. IEEE Conf. on Robotics and Automation (ICRA) (2005)
7. Tsin, Y., Kang, S.B., Szeliski, R.: Stereo matching with linear superposition of layers. IEEE Transactions on Pattern Analysis and Machine Intelligence 28(2) (2006)
8. Rajagopalan, A.N., Mudenagudi, U.: Depth estimation and image restoration using defocused stereo pairs. IEEE Transactions on Pattern Analysis and Machine Intelligence 26(11) (2004)
9. Frese, C., Gheta, I.: Robust depth estimation by fusion of stereo and focus series acquired with a camera array. In: IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (2006)
10. Wang, L., Gong, M., Yang, R.: How far can we go with local optimization in real-time stereo matching. In: Proceedings of the Third International Symposium on 3D Data Processing, Visualization, and Transmission (2006)
11. Curtis, S.R., Lim, J.S., Oppenheim, A.V.: Signal reconstruction from Fourier transform sign information. Technical Report 500, Research Laboratory of Electronics, Massachusetts Institute of Technology (1984)
