Examination of Hybrid Image Feature Trackers

Peter Abeles
Robotic Inception

Abstract. Typical image feature trackers employ a detect-describe-associate (DDA) or detect-track (DT) paradigm. Intuitively, a hybrid of the two approaches inherits the benefits of each approach, and possibly their defects, but this has never been demonstrated formally in a general setting. In this paper, the stability and speed of DDA, DT, and hybrid trackers are compared and discussed using a diverse set of real-world video sequences.

1 Introduction

Image point feature tracking in video sequences is a key component in many computer vision problems, such as structure from motion and object detection/recognition. Specific applications include scene recognition for loop-closure, face recognition, video stabilization, visual odometry, and augmented reality. The most popular feature trackers can be categorized as employing detect-describe-associate (DDA) or detect-track (DT) paradigms. DDA-based trackers operate by 1) detecting feature locations and characteristics, 2) describing each feature with local image information, and 3) associating features between images. Numerous feature detectors (Harris [1], Shi-Tomasi [2], FAST [3], MSER [4]) and local feature descriptors (SIFT [5], SURF [6], BRIEF [7]) are available for DDA trackers. Association is done by selecting pairs of feature descriptors between two images that minimize an error metric. DDA trackers are robust to temporary obstructions and abrupt changes in view, but are computationally expensive. DT trackers operate by first detecting features, then updating each feature track location by performing a local search initialized using the last known location. Several template-based DT trackers have been proposed in the past, with Kanade-Lucas-Tomasi (KLT) [8,2] being the most popular.
Under nominal conditions, DT trackers are faster and more stable than DDA trackers, but if the scene changes abruptly, a DT tracker will fail and cannot recover. Despite their superior nominal performance, DT trackers (more commonly referred to as "detect then track") are used in only a subset of the problems that DDA is applied to. The reason for DDA's wider usage is its resilience and ability to recover tracks. To overcome this issue, hybrid approaches have been proposed in the past in an attempt to improve runtime speed and/or stability. Past hybrid approaches were application specific, making their results difficult to generalize. Most practitioners in the field accept or assume these qualitative statements about the relative merits of each type of tracker. However, a direct comparison of these three

approaches, which validates and quantifies this knowledge, does not exist in the literature. In this work, a performance study is presented that compares the three types of trackers. To facilitate this comparison, a generic hybrid tracker is described. Real-world video sequences are used to evaluate performance, each of which is designed to stress the trackers in different ways.

2 Related Work and Background

The first step for both DDA and DT trackers is feature detection. Corner (e.g. Harris and Shi-Tomasi) and blob (e.g. Hessian determinant [9]) detectors are commonly used to identify salient features within images for tracking purposes. After a feature has been detected, a description of that feature is extracted from its local region. DT trackers can use simple templates as descriptors due to their assumption that the scene is slowly changing. DDA trackers make weaker assumptions and require descriptors that can handle greater changes in appearance. The major philosophical difference between the two approaches involves how tracks are updated. DDA trackers perform an expensive global search for features, followed by association. DT trackers perform a local search for each track independently. Feature association is an expensive operation for DDA trackers. The simplest solution has a complexity of O(N^2), although faster approximate alternatives are available (e.g. k-d trees [10]). An optimal solution is defined as the two features that minimize a distance metric, often the L2 (Euclidean) norm:

  a_i = min_j || F_i - F'_j ||   (1)

where F and F' are sets of feature descriptors from two different images. All DT trackers work by minimizing the difference between a template and the image with a local search around the feature's previous location. Besides KLT, several others have been proposed [11,12]. It was pointed out by Baker and Matthews [13] that these approaches are equivalent to a first order approximation.
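As an illustration of Eq. 1, a brute-force greedy nearest-neighbor association over two descriptor sets might look like the following sketch. This is a minimal NumPy-based example, not the paper's implementation; the function name, array layout, and `max_error` rejection threshold are assumptions.

```python
import numpy as np

def associate_greedy(desc_a, desc_b, max_error=None):
    """Greedily associate each descriptor in desc_a (N, D) with its
    nearest neighbor in desc_b (M, D) under the L2 norm (Eq. 1).
    Each feature in desc_b is matched at most once. O(N*M) cost,
    the 'simplest solution' complexity mentioned in the text."""
    # Pairwise L2 distances between every descriptor pair.
    diff = desc_a[:, None, :] - desc_b[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=2))
    pairs = []
    used_b = set()
    for i in range(desc_a.shape[0]):
        j = int(np.argmin(dist[i]))
        if j in used_b:
            continue  # greedy: this candidate was already claimed
        if max_error is not None and dist[i, j] > max_error:
            continue  # reject associations with too large an error
        pairs.append((i, j))
        used_b.add(j)
    return pairs
```

With descriptors that are simply permuted between the two images, the permutation is recovered, e.g. `associate_greedy(a, b)` returns index pairs `(i, j)` linking each feature to its counterpart.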
The primary difference between these approaches lies in the types of motion models used and whether the template and/or image is warped. While mathematically similar, these differences significantly affect computational performance. KLT updates track locations using a translational motion model and minimizes the difference between the track's description and the image using the following cost function:

  ε = ∫_W [I(x − d) − J(x)]^2 w dx   (2)

where W is a local region around location x, w is an optional weight, d is the translation parameter being optimized, and I and J are the first and second images in the sequence. Pyramidal approaches are used to handle larger displacements. Unlike DDA trackers, a DT tracker's runtime speed depends on the number of tracks and not on the image size. Track stability is often improved by updating the description after each frame. The downside to updating the descriptor is that image noise will cause tracks to perform a random walk.
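To make the cost in Eq. 2 concrete, the sketch below minimizes the same sum of squared differences over a window, but by exhaustive search over integer translations. Real KLT implementations instead solve for d with a Gauss-Newton iteration on image gradients and use pyramids for large motions; this brute-force version, with hypothetical function and parameter names, only illustrates the quantity being minimized.

```python
import numpy as np

def track_translation(I, J, x, y, half=2, radius=3):
    """Find the integer displacement d = (dx, dy) minimizing the
    SSD between the template around (x, y) in image I and the
    window around (x + dx, y + dy) in image J (cf. Eq. 2, w = 1)."""
    tmpl = I[y-half:y+half+1, x-half:x+half+1].astype(float)
    best, best_d = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            win = J[y+dy-half:y+dy+half+1,
                    x+dx-half:x+dx+half+1].astype(float)
            err = ((tmpl - win) ** 2).sum()  # SSD over the window W
            if best is None or err < best:
                best, best_d = err, (dx, dy)
    return best_d
```

For a textured patch shifted by two pixels horizontally and one vertically between I and J, the search returns `(2, 1)`.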

Several hybrid trackers that employ both a DT and a DDA tracker have been proposed in the past. In Uemura and Mikolajczyk [14], diverged KLT tracks are detected using a SIFT descriptor when detecting human actions. In Pilet and Saito [15], robustness is added to a region-based normalized cross-correlation (NCC) tracker by switching to KLT when association fails to find a match. To reduce image processing overhead for visual loop closing in SLAM, Pradeep et al. [16] proposed attaching SIFT descriptions to KLT tracks while periodically associating all tracks to newly detected features. Ladikos et al. [17] track the pose of known objects using a combination of template (DT type) and feature (DDA type) trackers. In all of the just mentioned works, a direct evaluation of track performance is lacking. In those works, performance is measured indirectly as a function of the target application, e.g. localization accuracy or detection rate. Test environments are limited and focused on specific applications. They also lack a more generalized discussion of design issues and failure conditions unique to hybrid tracking.

3 Generalized Hybrid Tracker

For purposes of comparison, a generalized hybrid DDA-DT tracker with a modular design is described below. For the sake of brevity this specific tracker will be referred to as the hybrid tracker. The previously mentioned hybrid trackers were designed for specific applications. The generalized hybrid tracker only specifies a high-level design and requires that specific implementations for feature detection, feature description, and DT tracking be provided to it. In the hybrid tracker, tracking starts by spawning new tracks from detected image features. A feature track is defined by a 2D location and has two types of descriptions, namely, a DDA description and a DT description. The DDA description is immutable and the DT description is updated after each image is processed.
When the next image in the sequence arrives, tracks are first updated using the DT tracker. If a track fails a DT update, the track is placed in storage. Tracks in storage are not updated as new images arrive. When triggered, new tracks can be spawned and old tracks in storage reactivated. A trigger can be the number of active tracks dropping below a threshold and/or a periodic trigger after N frames are processed. The purpose of using a trigger is computational efficiency alone.

Fig. 1: Flow chart showing the track life cycle. Track locations are updated using the DT tracker. Tracks in storage are periodically reactivated from detected features.
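The track life cycle described above can be sketched as a small state machine. The component callbacks (`detect`, `dt_update`, `associate`) and class names below are hypothetical placeholders for the modular implementations the hybrid tracker requires; only the low-track-count trigger is shown, not the periodic one.

```python
class Track:
    def __init__(self, tid, location, dda_desc):
        self.id = tid
        self.location = location
        self.dda_desc = dda_desc  # immutable DDA description
        self.dt_desc = dda_desc   # DT description, updated every frame

class HybridTracker:
    """Minimal sketch of the hybrid track life cycle (Fig. 1).
    detect(image) -> [(location, description), ...];
    dt_update(image, track) -> new location or None on fault;
    associate(features, tracks) -> [(feature_index, track), ...]."""
    def __init__(self, detect, dt_update, associate, min_active=2):
        self.detect, self.dt_update, self.associate = detect, dt_update, associate
        self.min_active = min_active
        self.active, self.storage = [], []
        self._next_id = 0

    def process(self, image):
        # 1) DT-update every active track; faulted tracks go to storage.
        still_active = []
        for t in self.active:
            loc = self.dt_update(image, t)
            if loc is None:
                self.storage.append(t)  # frozen until reactivated
            else:
                t.location = loc
                still_active.append(t)
        self.active = still_active
        # 2) Trigger: too few active tracks -> detect, associate, spawn.
        if len(self.active) < self.min_active:
            feats = self.detect(image)
            matched = set()
            for fi, track in self.associate(feats, self.active + self.storage):
                if track in self.storage:  # reactivate from storage
                    self.storage.remove(track)
                    track.location = feats[fi][0]
                    track.dt_desc = feats[fi][1]
                    self.active.append(track)
                matched.add(fi)
            for fi, (loc, desc) in enumerate(feats):
                if fi not in matched:      # unmatched feature -> new track
                    self.active.append(Track(self._next_id, loc, desc))
                    self._next_id += 1
```

With stub components, a track that fails its DT update is moved to storage and then reactivated on the next trigger, keeping its identity, which is the behavior that distinguishes the hybrid tracker from plain DT tracking.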

The first step in spawning new tracks or reactivating tracks in storage is detecting image features. After image features have been detected, they are associated against all active tracks and all tracks in storage. If an image feature is unassociated, then a new track is spawned from it. If a track in storage is associated with an image feature, then it is reactivated by removing it from storage, setting its location to the feature, and updating its DT description. A flow diagram of the track life cycle is shown in Figure 1. A greedy nearest-neighbor algorithm is used for association in this implementation.

3.1 Design Issues

When selecting algorithms to use inside the hybrid tracker, there are several issues that need to be considered. Can the DT tracker track features found by the detector? Can the DT tracker detect track faults? Texture is used by DT trackers to track features. Unlike corner features, blob features contain all their texture information along the blob's outside edges, with little information inside. In theory, if a large blob was detected, the DT tracker's template could lack texture and fail, even with a pyramidal approach. In practice, this was found not to be an issue with real-world data due to sufficient texture across scales. In the hybrid algorithm, a feature only switches to using the DDA approach when the DT tracker detects a fault. DT trackers typically employ several methods for detecting faults, but are susceptible to gradual drift. As will be shown below, it is possible for a slowly changing video sequence to produce worse performance than one with abrupt changes. This issue of drift can be mitigated in applications where geometric information is available. For example, when estimating the camera's ego-motion, features which drift are outliers and are dropped.

4 Experimental Setup

Performance is evaluated using video sequences for which a homography describes the relationship between each video frame.
This approach is similar in spirit to the approach taken by Mikolajczyk et al. [18], where sequences of still images are provided along with corresponding homographies. Videos were collected using a consumer-grade handheld point-and-shoot camera, specifically a Sony Cyber-shot DSC-HX5V capturing 640x480 MP4 video. A homography provides a unique mapping between pixels in two images, x' ≃ H x, where H ∈ R^{3x3} is the homography and x is a homogeneous 2D pixel coordinate. A homographic relationship comes about when the scene is planar or the camera's motion is purely rotational. Thus, the evaluated video sequences are either of planar objects, panoramic, or taken with a stationary camera. An automated algorithm is used to reconstruct the true homography between the first image and each subsequent image. Procedure: 1) Remove lens distortion. 2) Track point features. 3) Estimate a homography using the points. 4) Non-linear refinement using the inlier point set. 5) Non-linear refinement that minimizes the average squared difference of pixel intensity.
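Applying the homographic mapping x' ≃ Hx is a one-liner worth making explicit, since the ground-truth evaluation relies on it; the function name below is illustrative.

```python
import numpy as np

def map_homography(H, x, y):
    """Map pixel (x, y) through homography H in homogeneous
    coordinates, then divide by the third component to recover
    the Euclidean pixel location in the other image."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]
```

Because homographies are defined only up to scale, H and any non-zero multiple of H map a pixel to the same location, which the division by the third component makes explicit.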

Fig. 2: Four different 6-DOF camera motions (move in, move out, rotate, skew) are evaluated for the planar scenes Bricks and Carpet. Bricks is more uniform in appearance and level of texture, while Carpet has clearly defined objects with less interior texture.

Fig. 3: Changes in ambient light are evaluated in illumination. In panoramic the camera is approximately rotated about its focal point in an urban environment. For compressed, the skewed sequence is highly compressed using JPEG. Compression artifacts are difficult to see in this figure due to the reduced image size.

Track stability is measured using the F-measure (Eq. 3), which is a function of precision and recall:

  F = 2 · (precision · recall) / (precision + recall)   (3)

Precision is defined as the number of true positive tracks divided by the total number of active tracks. Recall is defined as the number of true positives divided by the initial number of tracks. A true positive track is defined as a track that is within tolerance of the true feature location. The choice of tolerance for defining true positive tracks is somewhat arbitrary, and a value of 5 pixels was selected. The same conclusions are reached when other tolerances are used. Four scenes are used to evaluate performance (see Figures 2 and 3): two scenes of planar objects with four different 6-DOF camera motions, one stationary camera view of a bookshelf with variable illumination, and one panorama of an urban environment. The two 6-DOF scenes, Bricks and Carpet, are views of planar objects with different

textures intended to stress the trackers differently. The effect of frame rate/object speed on tracker performance is examined by reprocessing a sequence with axial camera rotation at different frame rates.

Table 1: Tracker Summary

  Name          Type    Detector    Descriptor
  KLT           DT      Shi-Tomasi  5x5 Pyramid
  FAST-BRIEF    DDA     FAST        BRIEF
  FH-BRIEF      DDA     FH          BRIEF
  FH-SURF       DDA     FH          SURF-64
  FH-BRIEF-KLT  Hybrid  FH          Multiple
  FH-SURF-KLT   Hybrid  FH          Multiple

The evaluated trackers are summarized in Table 1. Specific implementations of the hybrid tracker are named using a three-word pattern, (DETECTOR)-(DESCRIPTOR)-(DT). DDA trackers are named using two words, (DETECTOR)-(DESCRIPTOR). Each word signifies an internal algorithm. SURF is a popular state-of-the-art descriptor designed for speed and stability. BRIEF is a more recently proposed descriptor that uses binary encoding. Fast Hessian (FH) is the scale-space blob detector proposed with SURF. Both Shi-Tomasi and FAST are corner detectors, with the former using the image gradient and the latter using pixel intensity values. FAST is popular in applications with constrained computational resources due to its speed, but it is less stable than other detectors. Where provided, all descriptors, detectors, and trackers use the recommended parameters from the original authors. Algorithms are tuned for stability across all scenarios. Runtime performance is evaluated on an Intel Core 2 Quad Q6600 at 2.4 GHz running Ubuntu Linux. Algorithms are provided by BoofCV; see the project home page for a discussion of correctness and performance. All trackers use the first frame as their key-frame and never spawn new tracks.

5 Results

Stability, as an average across each sequence, is shown in Figure 4. The complementary nature of KLT and the DDA trackers is clearly evident. During move out scenarios KLT excels while DDA trackers perform poorly, and the reverse is true for rotation scenarios.
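The stability score of Eq. 3 in Section 4 reduces to a few lines; the function and argument names below are illustrative, not from the paper.

```python
def f_measure(true_positives, active_tracks, initial_tracks):
    """Track stability score of Eq. 3. Precision = true positives /
    active tracks; recall = true positives / initial tracks; a true
    positive is a track within the 5-pixel tolerance of its
    homography-mapped ground-truth location."""
    precision = true_positives / active_tracks
    recall = true_positives / initial_tracks
    if precision + recall == 0:
        return 0.0  # no correct tracks survive; worst possible score
    return 2 * precision * recall / (precision + recall)
```

For example, 50 true positives out of 100 active tracks and 200 initial tracks gives precision 0.5, recall 0.25, and F ≈ 0.333.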
The ability of feature detectors to estimate scale is limited by the camera's resolution, while KLT continuously updates each feature's description. KLT has more problems tracking sequences with heavy compression than DDA, likely caused by the lack of smoothness between frames due to compression artifacts. The FAST detector is not invariant to illumination or rotation, causing FAST-BRIEF to perform relatively poorly in scenarios with changes in illumination and rotation. Other behaviors are more evident when examining performance as a function of frame number, Figures 6 and 7. When the camera is rotating, DDA's stability is cyclical because of its ability to recover tracks, while KLT gradually decays. Hybrid trackers

Fig. 4: Average F-statistic across each scenario, higher is better. Hybrid trackers are the top performer in almost every scenario. Other trackers exhibit greater variability, with poor performance in one or more scenarios.

switching between the two techniques can also be seen. When tracking first begins, a hybrid tracker tends to behave like a DT tracker, but if tracks are dropped it recovers them by switching over to DDA. The hybrid tracker's stability has a non-linear relationship with frame rate. This is illustrated by Figure 6, where stability can either improve or degrade as more frames are dropped. The hybrid tracker relies on KLT detecting its own faults, which can be unreliable at certain frame rates. Once a fault is detected, it switches to become more DDA-like and its performance recovers. Runtime performance across all scenarios is shown in Figure 5. KLT has the fastest performance, followed by the two hybrid trackers. FAST-BRIEF is a close fourth place, but has poor stability. FH-BRIEF and FH-SURF are many times slower than the other trackers.

6 Future Work

While a popular choice for evaluation in the literature, using homographic scenes to evaluate performance has significant disadvantages. The use of homographies is primarily out of necessity: it provides a way to uniquely map pixels between two images. It does not capture the complexities found in arbitrary 3D motion. One such complexity involves objects overlapping each other and moving at different speeds. Evaluating trackers under arbitrary 3D motion requires complete knowledge of the scene's 3D structure and the camera's pose. Recently, data sets have been made available [19]

Fig. 5: Runtime performance box-and-whisker plot for all scenarios, lower is better. The Y-axis is the average time to process a frame in milliseconds. Statistics shown are the minimum, 25%, median, 75%, and maximum. Hybrid trackers are about 7.5 times faster than comparable DDA trackers and 1.5 times slower than KLT on average.

that claim high-accuracy vehicle pose and capture the scene's structure using LADAR. These could be used to evaluate tracking performance in a less restricted environment.

7 Conclusions

For the first time, DDA, DT, and hybrid trackers have been directly compared against each other for track stability and runtime performance. This comparison allows practitioners to make more informed decisions about the relative merits of each strategy in different scenarios. Hybrid trackers exhibit many of the advantages of DDA and DT tracking and, to a lesser extent, their disadvantages. The primary weakness arises from the need to detect faults in the DT tracker, which is unreliable in specific situations. While hybrid trackers have received little attention in the literature, they were among the top performers across all scenarios and never experienced catastrophic failures. In some cases, they outperformed both DDA and DT trackers. Performance of DDA and DT trackers fell along predictable lines, with each experiencing catastrophic failures. The results of this study suggest that a larger emphasis should be placed on the development of hybrid trackers.

References

1. Harris, C., Stephens, M.: A combined corner and edge detector. In: Proceedings of the 4th Alvey Vision Conference. (1988)
2. Shi, J., Tomasi, C.: Good features to track. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR'94). (1994)

3. Rosten, E., Drummond, T.: Machine learning for high-speed corner detection. In: European Conference on Computer Vision. (2006)
4. Matas, J., Chum, O., Urban, M., Pajdla, T.: Robust wide-baseline stereo from maximally stable extremal regions. Image and Vision Computing 22 (2004)
5. Lowe, D.: Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision (IJCV) 60 (2004)
6. Bay, H., Ess, A., Tuytelaars, T., Gool, L.V.: SURF: Speeded up robust features. Computer Vision and Image Understanding (CVIU) (2008)
7. Calonder, M., Lepetit, V., Ozuysal, M., Trzcinski, T., Strecha, C., Fua, P.: BRIEF: Computing a local binary descriptor very fast. IEEE Transactions on Pattern Analysis and Machine Intelligence 34 (2012)
8. Lucas, B.D., Kanade, T.: An iterative image registration technique with an application to stereo vision. In: International Joint Conference on Artificial Intelligence. (1981)
9. Lindeberg, T.: Feature detection with automatic scale selection. International Journal of Computer Vision 30 (1998)
10. Silpa-Anan, C., Hartley, R.: Optimised kd-trees for fast image descriptor matching. In: Conference on Computer Vision and Pattern Recognition. (2008)
11. Hager, G.D., Belhumeur, P.N.: Efficient region tracking with parametric models of geometry and illumination. IEEE Trans. Pattern Anal. Mach. Intell. 20 (1998)
12. Baker, S., Matthews, I.: Lucas-Kanade 20 years on: A unifying framework: Part 1. Technical report, Robotics Institute (2002)
13. Baker, S., Matthews, I.: Equivalence and efficiency of image alignment algorithms. In: Proceedings of the 2001 IEEE Conference on Computer Vision and Pattern Recognition. (2001)
14. Uemura, H., Ishikawa, S., Mikolajczyk, K.: Feature tracking and motion compensation for action recognition. In: British Machine Vision Conference (BMVC). (2008)
15. Pilet, J., Saito, H.: Virtually augmenting hundreds of real pictures: An approach based on learning, retrieval, and tracking.
In: IEEE Virtual Reality 2010. (2010)
16. Pradeep, V., Medioni, G., Weiland, J.: Visual loop closing using multi-resolution SIFT grids in metric-topological SLAM. In: IEEE Conference on Computer Vision and Pattern Recognition. (2009)
17. Ladikos, A., Benhimane, S., Navab, N.: A real-time tracking system combining template-based and feature-based approaches. In: VISAPP 2007. (2007)
18. Mikolajczyk, K., Schmid, C.: A performance evaluation of local descriptors. IEEE Trans. Pattern Anal. Mach. Intell. 27 (2005)
19. Geiger, A., Lenz, P., Urtasun, R.: Are we ready for autonomous driving? The KITTI vision benchmark suite. In: Conference on Computer Vision and Pattern Recognition (CVPR). (2012)

Fig. 6: Track stability by video frame for axial camera rotation (Bricks and Carpet) when N frames are skipped, higher is better.

Fig. 7: Track stability by video frame for several scenarios (move in, move out, skew, illumination, panoramic, compressed), higher is better.


More information

Image Features: Detection, Description, and Matching and their Applications

Image Features: Detection, Description, and Matching and their Applications Image Features: Detection, Description, and Matching and their Applications Image Representation: Global Versus Local Features Features/ keypoints/ interset points are interesting locations in the image.

More information

Segmentation and Tracking of Partial Planar Templates

Segmentation and Tracking of Partial Planar Templates Segmentation and Tracking of Partial Planar Templates Abdelsalam Masoud William Hoff Colorado School of Mines Colorado School of Mines Golden, CO 800 Golden, CO 800 amasoud@mines.edu whoff@mines.edu Abstract

More information

Tracking in image sequences

Tracking in image sequences CENTER FOR MACHINE PERCEPTION CZECH TECHNICAL UNIVERSITY Tracking in image sequences Lecture notes for the course Computer Vision Methods Tomáš Svoboda svobodat@fel.cvut.cz March 23, 2011 Lecture notes

More information

SIFT: SCALE INVARIANT FEATURE TRANSFORM SURF: SPEEDED UP ROBUST FEATURES BASHAR ALSADIK EOS DEPT. TOPMAP M13 3D GEOINFORMATION FROM IMAGES 2014

SIFT: SCALE INVARIANT FEATURE TRANSFORM SURF: SPEEDED UP ROBUST FEATURES BASHAR ALSADIK EOS DEPT. TOPMAP M13 3D GEOINFORMATION FROM IMAGES 2014 SIFT: SCALE INVARIANT FEATURE TRANSFORM SURF: SPEEDED UP ROBUST FEATURES BASHAR ALSADIK EOS DEPT. TOPMAP M13 3D GEOINFORMATION FROM IMAGES 2014 SIFT SIFT: Scale Invariant Feature Transform; transform image

More information

Department of Electrical and Electronic Engineering, University of Peradeniya, KY 20400, Sri Lanka

Department of Electrical and Electronic Engineering, University of Peradeniya, KY 20400, Sri Lanka WIT: Window Intensity Test Detector and Descriptor T.W.U.Madhushani, D.H.S.Maithripala, J.V.Wijayakulasooriya Postgraduate and Research Unit, Sri Lanka Technological Campus, CO 10500, Sri Lanka. Department

More information

Fuzzy based Multiple Dictionary Bag of Words for Image Classification

Fuzzy based Multiple Dictionary Bag of Words for Image Classification Available online at www.sciencedirect.com Procedia Engineering 38 (2012 ) 2196 2206 International Conference on Modeling Optimisation and Computing Fuzzy based Multiple Dictionary Bag of Words for Image

More information

Feature Based Registration - Image Alignment

Feature Based Registration - Image Alignment Feature Based Registration - Image Alignment Image Registration Image registration is the process of estimating an optimal transformation between two or more images. Many slides from Alexei Efros http://graphics.cs.cmu.edu/courses/15-463/2007_fall/463.html

More information

Requirements for region detection

Requirements for region detection Region detectors Requirements for region detection For region detection invariance transformations that should be considered are illumination changes, translation, rotation, scale and full affine transform

More information

Face Tracking. Synonyms. Definition. Main Body Text. Amit K. Roy-Chowdhury and Yilei Xu. Facial Motion Estimation

Face Tracking. Synonyms. Definition. Main Body Text. Amit K. Roy-Chowdhury and Yilei Xu. Facial Motion Estimation Face Tracking Amit K. Roy-Chowdhury and Yilei Xu Department of Electrical Engineering, University of California, Riverside, CA 92521, USA {amitrc,yxu}@ee.ucr.edu Synonyms Facial Motion Estimation Definition

More information

EE795: Computer Vision and Intelligent Systems

EE795: Computer Vision and Intelligent Systems EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 FDH 204 Lecture 11 140311 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Motion Analysis Motivation Differential Motion Optical

More information

A Novel Real-Time Feature Matching Scheme

A Novel Real-Time Feature Matching Scheme Sensors & Transducers, Vol. 165, Issue, February 01, pp. 17-11 Sensors & Transducers 01 by IFSA Publishing, S. L. http://www.sensorsportal.com A Novel Real-Time Feature Matching Scheme Ying Liu, * Hongbo

More information

Simultaneous Recognition and Homography Extraction of Local Patches with a Simple Linear Classifier

Simultaneous Recognition and Homography Extraction of Local Patches with a Simple Linear Classifier Simultaneous Recognition and Homography Extraction of Local Patches with a Simple Linear Classifier Stefan Hinterstoisser 1, Selim Benhimane 1, Vincent Lepetit 2, Pascal Fua 2, Nassir Navab 1 1 Department

More information

Structure Guided Salient Region Detector

Structure Guided Salient Region Detector Structure Guided Salient Region Detector Shufei Fan, Frank Ferrie Center for Intelligent Machines McGill University Montréal H3A2A7, Canada Abstract This paper presents a novel method for detection of

More information

Visual Odometry. Features, Tracking, Essential Matrix, and RANSAC. Stephan Weiss Computer Vision Group NASA-JPL / CalTech

Visual Odometry. Features, Tracking, Essential Matrix, and RANSAC. Stephan Weiss Computer Vision Group NASA-JPL / CalTech Visual Odometry Features, Tracking, Essential Matrix, and RANSAC Stephan Weiss Computer Vision Group NASA-JPL / CalTech Stephan.Weiss@ieee.org (c) 2013. Government sponsorship acknowledged. Outline The

More information

Nonrigid Surface Modelling. and Fast Recovery. Department of Computer Science and Engineering. Committee: Prof. Leo J. Jia and Prof. K. H.

Nonrigid Surface Modelling. and Fast Recovery. Department of Computer Science and Engineering. Committee: Prof. Leo J. Jia and Prof. K. H. Nonrigid Surface Modelling and Fast Recovery Zhu Jianke Supervisor: Prof. Michael R. Lyu Committee: Prof. Leo J. Jia and Prof. K. H. Wong Department of Computer Science and Engineering May 11, 2007 1 2

More information

arxiv: v1 [cs.cv] 28 Sep 2018

arxiv: v1 [cs.cv] 28 Sep 2018 Camera Pose Estimation from Sequence of Calibrated Images arxiv:1809.11066v1 [cs.cv] 28 Sep 2018 Jacek Komorowski 1 and Przemyslaw Rokita 2 1 Maria Curie-Sklodowska University, Institute of Computer Science,

More information

Feature Tracking and Optical Flow

Feature Tracking and Optical Flow Feature Tracking and Optical Flow Prof. D. Stricker Doz. G. Bleser Many slides adapted from James Hays, Derek Hoeim, Lana Lazebnik, Silvio Saverse, who in turn adapted slides from Steve Seitz, Rick Szeliski,

More information

Live Metric 3D Reconstruction on Mobile Phones ICCV 2013

Live Metric 3D Reconstruction on Mobile Phones ICCV 2013 Live Metric 3D Reconstruction on Mobile Phones ICCV 2013 Main Contents 1. Target & Related Work 2. Main Features of This System 3. System Overview & Workflow 4. Detail of This System 5. Experiments 6.

More information

An Evaluation of Volumetric Interest Points

An Evaluation of Volumetric Interest Points An Evaluation of Volumetric Interest Points Tsz-Ho YU Oliver WOODFORD Roberto CIPOLLA Machine Intelligence Lab Department of Engineering, University of Cambridge About this project We conducted the first

More information

Local Image Features

Local Image Features Local Image Features Computer Vision CS 143, Brown Read Szeliski 4.1 James Hays Acknowledgment: Many slides from Derek Hoiem and Grauman&Leibe 2008 AAAI Tutorial This section: correspondence and alignment

More information

Local Feature Detectors

Local Feature Detectors Local Feature Detectors Selim Aksoy Department of Computer Engineering Bilkent University saksoy@cs.bilkent.edu.tr Slides adapted from Cordelia Schmid and David Lowe, CVPR 2003 Tutorial, Matthew Brown,

More information

A Framework for Multiple Radar and Multiple 2D/3D Camera Fusion

A Framework for Multiple Radar and Multiple 2D/3D Camera Fusion A Framework for Multiple Radar and Multiple 2D/3D Camera Fusion Marek Schikora 1 and Benedikt Romba 2 1 FGAN-FKIE, Germany 2 Bonn University, Germany schikora@fgan.de, romba@uni-bonn.de Abstract: In this

More information

Feature Detection and Matching

Feature Detection and Matching and Matching CS4243 Computer Vision and Pattern Recognition Leow Wee Kheng Department of Computer Science School of Computing National University of Singapore Leow Wee Kheng (CS4243) Camera Models 1 /

More information

Depth-Layer-Based Patient Motion Compensation for the Overlay of 3D Volumes onto X-Ray Sequences

Depth-Layer-Based Patient Motion Compensation for the Overlay of 3D Volumes onto X-Ray Sequences Depth-Layer-Based Patient Motion Compensation for the Overlay of 3D Volumes onto X-Ray Sequences Jian Wang 1,2, Anja Borsdorf 2, Joachim Hornegger 1,3 1 Pattern Recognition Lab, Friedrich-Alexander-Universität

More information

Learning Efficient Linear Predictors for Motion Estimation

Learning Efficient Linear Predictors for Motion Estimation Learning Efficient Linear Predictors for Motion Estimation Jiří Matas 1,2, Karel Zimmermann 1, Tomáš Svoboda 1, Adrian Hilton 2 1 : Center for Machine Perception 2 :Centre for Vision, Speech and Signal

More information

Motion Estimation and Optical Flow Tracking

Motion Estimation and Optical Flow Tracking Image Matching Image Retrieval Object Recognition Motion Estimation and Optical Flow Tracking Example: Mosiacing (Panorama) M. Brown and D. G. Lowe. Recognising Panoramas. ICCV 2003 Example 3D Reconstruction

More information

The SIFT (Scale Invariant Feature

The SIFT (Scale Invariant Feature The SIFT (Scale Invariant Feature Transform) Detector and Descriptor developed by David Lowe University of British Columbia Initial paper ICCV 1999 Newer journal paper IJCV 2004 Review: Matt Brown s Canonical

More information

BSB663 Image Processing Pinar Duygulu. Slides are adapted from Selim Aksoy

BSB663 Image Processing Pinar Duygulu. Slides are adapted from Selim Aksoy BSB663 Image Processing Pinar Duygulu Slides are adapted from Selim Aksoy Image matching Image matching is a fundamental aspect of many problems in computer vision. Object or scene recognition Solving

More information

Computer Vision Lecture 20

Computer Vision Lecture 20 Computer Perceptual Vision and Sensory WS 16/17 Augmented Computing Computer Perceptual Vision and Sensory WS 16/17 Augmented Computing Computer Perceptual Vision and Sensory WS 16/17 Augmented Computing

More information

Object Recognition Algorithms for Computer Vision System: A Survey

Object Recognition Algorithms for Computer Vision System: A Survey Volume 117 No. 21 2017, 69-74 ISSN: 1311-8080 (printed version); ISSN: 1314-3395 (on-line version) url: http://www.ijpam.eu ijpam.eu Object Recognition Algorithms for Computer Vision System: A Survey Anu

More information

NIH Public Access Author Manuscript Proc Int Conf Image Proc. Author manuscript; available in PMC 2013 May 03.

NIH Public Access Author Manuscript Proc Int Conf Image Proc. Author manuscript; available in PMC 2013 May 03. NIH Public Access Author Manuscript Published in final edited form as: Proc Int Conf Image Proc. 2008 ; : 241 244. doi:10.1109/icip.2008.4711736. TRACKING THROUGH CHANGES IN SCALE Shawn Lankton 1, James

More information

Computer Vision Lecture 20

Computer Vision Lecture 20 Computer Perceptual Vision and Sensory WS 16/76 Augmented Computing Many slides adapted from K. Grauman, S. Seitz, R. Szeliski, M. Pollefeys, S. Lazebnik Computer Vision Lecture 20 Motion and Optical Flow

More information

FAST-MATCH: FAST AFFINE TEMPLATE MATCHING

FAST-MATCH: FAST AFFINE TEMPLATE MATCHING Seminar on Sublinear Time Algorithms FAST-MATCH: FAST AFFINE TEMPLATE MATCHING KORMAN, S., REICHMAN, D., TSUR, G., & AVIDAN, S., 2013 Given by: Shira Faigenbaum-Golovin Tel-Aviv University 27.12.2015 Problem

More information

Long-term motion estimation from images

Long-term motion estimation from images Long-term motion estimation from images Dennis Strelow 1 and Sanjiv Singh 2 1 Google, Mountain View, CA, strelow@google.com 2 Carnegie Mellon University, Pittsburgh, PA, ssingh@cmu.edu Summary. Cameras

More information

SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS

SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS Cognitive Robotics Original: David G. Lowe, 004 Summary: Coen van Leeuwen, s1460919 Abstract: This article presents a method to extract

More information

Optical flow and tracking

Optical flow and tracking EECS 442 Computer vision Optical flow and tracking Intro Optical flow and feature tracking Lucas-Kanade algorithm Motion segmentation Segments of this lectures are courtesy of Profs S. Lazebnik S. Seitz,

More information

COMPARING COMBINATIONS OF FEATURE REGIONS FOR PANORAMIC VSLAM

COMPARING COMBINATIONS OF FEATURE REGIONS FOR PANORAMIC VSLAM COMPARING COMBINATIONS OF FEATURE REGIONS FOR PANORAMIC VSLAM Arnau Ramisa, Ramón López de Mántaras Artificial Intelligence Research Institute, UAB Campus, 08193, Bellaterra, SPAIN aramisa@iiia.csic.es,

More information

Kanade Lucas Tomasi Tracking (KLT tracker)

Kanade Lucas Tomasi Tracking (KLT tracker) Kanade Lucas Tomasi Tracking (KLT tracker) Tomáš Svoboda, svoboda@cmp.felk.cvut.cz Czech Technical University in Prague, Center for Machine Perception http://cmp.felk.cvut.cz Last update: November 26,

More information

Ensemble of Bayesian Filters for Loop Closure Detection

Ensemble of Bayesian Filters for Loop Closure Detection Ensemble of Bayesian Filters for Loop Closure Detection Mohammad Omar Salameh, Azizi Abdullah, Shahnorbanun Sahran Pattern Recognition Research Group Center for Artificial Intelligence Faculty of Information

More information

Aircraft Tracking Based on KLT Feature Tracker and Image Modeling

Aircraft Tracking Based on KLT Feature Tracker and Image Modeling Aircraft Tracking Based on KLT Feature Tracker and Image Modeling Khawar Ali, Shoab A. Khan, and Usman Akram Computer Engineering Department, College of Electrical & Mechanical Engineering, National University

More information

Multi-Image Matching using Multi-Scale Oriented Patches

Multi-Image Matching using Multi-Scale Oriented Patches Multi-Image Matching using Multi-Scale Oriented Patches Matthew Brown Department of Computer Science University of British Columbia mbrown@cs.ubc.ca Richard Szeliski Vision Technology Group Microsoft Research

More information

3D Photography. Marc Pollefeys, Torsten Sattler. Spring 2015

3D Photography. Marc Pollefeys, Torsten Sattler. Spring 2015 3D Photography Marc Pollefeys, Torsten Sattler Spring 2015 Schedule (tentative) Feb 16 Feb 23 Mar 2 Mar 9 Mar 16 Mar 23 Mar 30 Apr 6 Apr 13 Apr 20 Apr 27 May 4 May 11 May 18 May 25 Introduction Geometry,

More information

Peripheral drift illusion

Peripheral drift illusion Peripheral drift illusion Does it work on other animals? Computer Vision Motion and Optical Flow Many slides adapted from J. Hays, S. Seitz, R. Szeliski, M. Pollefeys, K. Grauman and others Video A video

More information

III. VERVIEW OF THE METHODS

III. VERVIEW OF THE METHODS An Analytical Study of SIFT and SURF in Image Registration Vivek Kumar Gupta, Kanchan Cecil Department of Electronics & Telecommunication, Jabalpur engineering college, Jabalpur, India comparing the distance

More information

OPPA European Social Fund Prague & EU: We invest in your future.

OPPA European Social Fund Prague & EU: We invest in your future. OPPA European Social Fund Prague & EU: We invest in your future. Patch tracking based on comparing its piels 1 Tomáš Svoboda, svoboda@cmp.felk.cvut.cz Czech Technical University in Prague, Center for Machine

More information

Faster-than-SIFT Object Detection

Faster-than-SIFT Object Detection Faster-than-SIFT Object Detection Borja Peleato and Matt Jones peleato@stanford.edu, mkjones@cs.stanford.edu March 14, 2009 1 Introduction As computers become more powerful and video processing schemes

More information

Visual Tracking (1) Feature Point Tracking and Block Matching

Visual Tracking (1) Feature Point Tracking and Block Matching Intelligent Control Systems Visual Tracking (1) Feature Point Tracking and Block Matching Shingo Kagami Graduate School of Information Sciences, Tohoku University swk(at)ic.is.tohoku.ac.jp http://www.ic.is.tohoku.ac.jp/ja/swk/

More information

A Comparison of SIFT, PCA-SIFT and SURF

A Comparison of SIFT, PCA-SIFT and SURF A Comparison of SIFT, PCA-SIFT and SURF Luo Juan Computer Graphics Lab, Chonbuk National University, Jeonju 561-756, South Korea qiuhehappy@hotmail.com Oubong Gwun Computer Graphics Lab, Chonbuk National

More information

Comparison between Motion Analysis and Stereo

Comparison between Motion Analysis and Stereo MOTION ESTIMATION The slides are from several sources through James Hays (Brown); Silvio Savarese (U. of Michigan); Octavia Camps (Northeastern); including their own slides. Comparison between Motion Analysis

More information

Midterm Examination CS 534: Computational Photography

Midterm Examination CS 534: Computational Photography Midterm Examination CS 534: Computational Photography November 3, 2016 NAME: Problem Score Max Score 1 6 2 8 3 9 4 12 5 4 6 13 7 7 8 6 9 9 10 6 11 14 12 6 Total 100 1 of 8 1. [6] (a) [3] What camera setting(s)

More information

Using temporal seeding to constrain the disparity search range in stereo matching

Using temporal seeding to constrain the disparity search range in stereo matching Using temporal seeding to constrain the disparity search range in stereo matching Thulani Ndhlovu Mobile Intelligent Autonomous Systems CSIR South Africa Email: tndhlovu@csir.co.za Fred Nicolls Department

More information

Viewpoint Invariant Features from Single Images Using 3D Geometry

Viewpoint Invariant Features from Single Images Using 3D Geometry Viewpoint Invariant Features from Single Images Using 3D Geometry Yanpeng Cao and John McDonald Department of Computer Science National University of Ireland, Maynooth, Ireland {y.cao,johnmcd}@cs.nuim.ie

More information

Multi-stable Perception. Necker Cube

Multi-stable Perception. Necker Cube Multi-stable Perception Necker Cube Spinning dancer illusion, Nobuyuki Kayahara Multiple view geometry Stereo vision Epipolar geometry Lowe Hartley and Zisserman Depth map extraction Essential matrix

More information

Robust feature matching in 2.3µs

Robust feature matching in 2.3µs Robust feature matching in 23µs Simon Taylor Edward Rosten Tom Drummond Department of Engineering, University of Cambridge Trumpington Street, Cambridge, CB2 1PZ, UK {sjt59, er258, twd20}@camacuk Abstract

More information

CSE 252B: Computer Vision II

CSE 252B: Computer Vision II CSE 252B: Computer Vision II Lecturer: Serge Belongie Scribes: Jeremy Pollock and Neil Alldrin LECTURE 14 Robust Feature Matching 14.1. Introduction Last lecture we learned how to find interest points

More information

Real-Time Scene Reconstruction. Remington Gong Benjamin Harris Iuri Prilepov

Real-Time Scene Reconstruction. Remington Gong Benjamin Harris Iuri Prilepov Real-Time Scene Reconstruction Remington Gong Benjamin Harris Iuri Prilepov June 10, 2010 Abstract This report discusses the implementation of a real-time system for scene reconstruction. Algorithms for

More information

CS4670: Computer Vision

CS4670: Computer Vision CS4670: Computer Vision Noah Snavely Lecture 6: Feature matching and alignment Szeliski: Chapter 6.1 Reading Last time: Corners and blobs Scale-space blob detector: Example Feature descriptors We know

More information

Map-Enhanced UAV Image Sequence Registration and Synchronization of Multiple Image Sequences

Map-Enhanced UAV Image Sequence Registration and Synchronization of Multiple Image Sequences Map-Enhanced UAV Image Sequence Registration and Synchronization of Multiple Image Sequences Yuping Lin and Gérard Medioni Computer Science Department, University of Southern California 941 W. 37th Place,

More information

Towards the completion of assignment 1

Towards the completion of assignment 1 Towards the completion of assignment 1 What to do for calibration What to do for point matching What to do for tracking What to do for GUI COMPSCI 773 Feature Point Detection Why study feature point detection?

More information

Feature Matching and Robust Fitting

Feature Matching and Robust Fitting Feature Matching and Robust Fitting Computer Vision CS 143, Brown Read Szeliski 4.1 James Hays Acknowledgment: Many slides from Derek Hoiem and Grauman&Leibe 2008 AAAI Tutorial Project 2 questions? This

More information

ECE Digital Image Processing and Introduction to Computer Vision

ECE Digital Image Processing and Introduction to Computer Vision ECE592-064 Digital Image Processing and Introduction to Computer Vision Depart. of ECE, NC State University Instructor: Tianfu (Matt) Wu Spring 2017 Recap, SIFT Motion Tracking Change Detection Feature

More information

A REAL-TIME REGISTRATION METHOD OF AUGMENTED REALITY BASED ON SURF AND OPTICAL FLOW

A REAL-TIME REGISTRATION METHOD OF AUGMENTED REALITY BASED ON SURF AND OPTICAL FLOW A REAL-TIME REGISTRATION METHOD OF AUGMENTED REALITY BASED ON SURF AND OPTICAL FLOW HONGBO LI, MING QI AND 3 YU WU,, 3 Institute of Web Intelligence, Chongqing University of Posts and Telecommunications,

More information

FAST AFFINE-INVARIANT IMAGE MATCHING BASED ON GLOBAL BHATTACHARYYA MEASURE WITH ADAPTIVE TREE. Jongin Son, Seungryong Kim, and Kwanghoon Sohn

FAST AFFINE-INVARIANT IMAGE MATCHING BASED ON GLOBAL BHATTACHARYYA MEASURE WITH ADAPTIVE TREE. Jongin Son, Seungryong Kim, and Kwanghoon Sohn FAST AFFINE-INVARIANT IMAGE MATCHING BASED ON GLOBAL BHATTACHARYYA MEASURE WITH ADAPTIVE TREE Jongin Son, Seungryong Kim, and Kwanghoon Sohn Digital Image Media Laboratory (DIML), School of Electrical

More information

Chapter 3 Image Registration. Chapter 3 Image Registration

Chapter 3 Image Registration. Chapter 3 Image Registration Chapter 3 Image Registration Distributed Algorithms for Introduction (1) Definition: Image Registration Input: 2 images of the same scene but taken from different perspectives Goal: Identify transformation

More information

Outline. Introduction System Overview Camera Calibration Marker Tracking Pose Estimation of Markers Conclusion. Media IC & System Lab Po-Chen Wu 2

Outline. Introduction System Overview Camera Calibration Marker Tracking Pose Estimation of Markers Conclusion. Media IC & System Lab Po-Chen Wu 2 Outline Introduction System Overview Camera Calibration Marker Tracking Pose Estimation of Markers Conclusion Media IC & System Lab Po-Chen Wu 2 Outline Introduction System Overview Camera Calibration

More information