Behavior Based Robot Localisation Using Stereo Vision


Maria Sagrebin, Josef Pauli, and Johannes Herwig
Fakultät für Ingenieurwissenschaften, Abteilung für Informatik und Angewandte Kognitionswissenschaft, Universität Duisburg-Essen, Germany

Abstract. Accurate robot detection and localisation is fundamental in applications which involve robot navigation. Typical methods for robot detection require a model of the robot. However, in most applications the availability of such a model cannot be guaranteed. This paper discusses a different approach. A method is presented to localise a robot in a complex and dynamic scene based only on the information that the robot is following a previously specified movement pattern. The advantage of this method lies in its ability to detect differently shaped and differently looking robots, as long as they perform the previously defined movement. The method has been successfully tested in an indoor environment.

1 Introduction

Successful robot detection and 3D localisation is one of the most desired features in many high-level applications, and many algorithms for robot navigation depend on robust object localisation. Usually the problem of localising a robot with two stereo cameras is solved by first creating a model of the robot and then mapping the given image data to this model. Such an approach has two important drawbacks. First, it requires an offline phase during which a model of the robot is learned; in many applications such an offline phase is hardly practicable. Second, it is very hard to find a model which accounts for every possible appearance of the robot. Moreover, it is sometimes simply impossible to extract features which are both discriminable against other objects and invariant to lighting conditions, changes in pose and different environments. A robot which is working on a construction site can change its appearance due to dirt, and its form and structure due to possible damage.
In such situations most of the common approaches would fail. An online learning algorithm which enables learning a model in real time could be a solution. But then we face a chicken-and-egg problem: to apply a learning algorithm we need the position of the object, and to get the position of the object we need a model. The answer to this problem lies in behavior- or motion-based localisation of the robot. An algorithm which first tracks all moving objects in the scene and

then selects the one which is performing a previously specified motion solves this problem. In the example of a robot working on a construction site, it would be sufficient to let the robot perform some simple recurring movement; a system which is trained to recognise such movements would be able to localise the robot. Another important scenario is that of delivery robots. Differently looking robots which deliver packages to an office building know to which office they have to go, but not how to get there. A system which is able to localise these differently looking robots in the entrance hall could navigate them to the target office. Again, however, the question arises of how to recognise these robots. It is clearly not possible to teach the system all the different models, but training the system to recognise moving objects which perform a specific systematic movement is practicable.

1.1 Previous Work

The detection of objects which perform a recurring movement requires an object detection mechanism, an appropriate object representation and a suitable tracking algorithm. Common object detection mechanisms are point detectors (Moravec's detector [13], the Harris detector [9], the Scale Invariant Feature Transform [11]), background modeling techniques (Mixture of Gaussians [18], Eigenbackground) and supervised classifiers (Support Vector Machines [14], Neural Networks, Adaptive Boosting [21]). Commonly employed object representations for tracking are points [20] [16], primitive geometric shapes [6] or skeletal models [1]. Depending on the chosen object representation, different tracking algorithms can be used; widely used are, for example, the Kalman filter [4], Mean-shift [6], the Kanade-Lucas-Tomasi feature tracker (KLT) [17] and the SVM tracker [2]. Different approaches have also been suggested for object trajectory representation. Chen [5] proposed a segmented trajectory based approach.
The trajectories are segmented based on extrema in acceleration, measured by high-frequency wavelet coefficients. In [3], Bashir et al. segment the trajectories at dominant curvature zero-crossings and represent each subtrajectory using Principal Component Analysis (PCA). In [10] different algorithms for 3D reconstruction and object localisation are presented.

1.2 Proposed Approach at a Glance

In this paper a procedure is presented which solves the task of 3D localisation of a robot based on the robot's behavior. A system equipped with two stereo cameras was trained to recognise robots which perform periodic movements. The

specialisation of the system to periodic movements was chosen for two reasons. The first is the simplicity of this movement: almost every mobile robot is able to drive back and forth or in a circle. The second is the easy discriminability of this movement from others. The overall procedure for 3D localisation of the robot is composed of the following steps:

- In both image sequences, extract moving objects from the background using an adaptive background subtraction algorithm.
- In both image sequences, track the moving objects over some period of time.
- In both image sequences, extract the object which is performing a periodic movement. This step provides the pixel position of the robot in both images.
- Compute the 3D location of the robot using a triangulation method.

Each of these steps is described in more detail in the following sections.

2 Adaptive Background Subtraction

To extract moving objects from the background, an adaptive background subtraction algorithm according to Grimson and Stauffer [18] has been used. It allows the timely adaptation of a background model to gradual illumination changes as well as to significant changes in the background. In their approach each pixel in the image is modelled by a mixture of K Gaussian distributions, where K typically has a value from 3 to 5. The probability that a certain pixel has the color value X at time t can be written as

p(X_t) = \sum_{i=1}^{K} w_{i,t} \, \eta(X_t; \theta_{i,t})

where w_{i,t} is the weight of the i-th Gaussian component at time t and \eta(X_t; \theta_{i,t}) is the normal distribution of the i-th component at time t,

\eta(X_t; \theta_{i,t}) = \eta(X_t; \mu_{i,t}, \Sigma_{i,t}) = \frac{1}{(2\pi)^{n/2} |\Sigma_{i,t}|^{1/2}} \, e^{-\frac{1}{2}(X_t - \mu_{i,t})^T \Sigma_{i,t}^{-1} (X_t - \mu_{i,t})}

where \mu_{i,t} is the mean and \Sigma_{i,t} = \sigma_{i,t}^2 I is the covariance of the i-th component at time t. It is assumed that the red, green and blue pixel values are independent and have the same variance. The K Gaussians are then ordered by the value of w_{i,t}/\sigma_{i,t}.
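The per-pixel mixture model and the w/σ ordering described above can be sketched as follows. This is a minimal illustration, not the paper's implementation; the parameter values and the helper name `pixel_likelihood` are chosen here for demonstration.

```python
import numpy as np

# Sketch of the per-pixel Gaussian mixture: K components, each with a
# weight w_i, an RGB mean mu_i, and a single variance shared across the
# three channels (Sigma_i = sigma_i^2 * I, as assumed in the paper).
K = 3
w = np.array([0.5, 0.3, 0.2])          # component weights, sum to 1
mu = np.array([[120.0, 118.0, 115.0],  # per-component RGB means
               [200.0, 50.0, 40.0],
               [30.0, 30.0, 200.0]])
sigma = np.array([5.0, 10.0, 20.0])    # per-component standard deviations

def pixel_likelihood(x):
    """p(x) = sum_i w_i * N(x; mu_i, sigma_i^2 I) for one RGB pixel x."""
    n = mu.shape[1]
    diff = x - mu                                   # shape (K, 3)
    exponent = -0.5 * np.sum(diff**2, axis=1) / sigma**2
    norm = (2.0 * np.pi) ** (n / 2.0) * sigma**n    # |Sigma_i|^(1/2) = sigma_i^n
    return float(np.sum(w * np.exp(exponent) / norm))

# Order components by w/sigma: stable, low-variance distributions
# (the likely background) come first.
order = np.argsort(-(w / sigma))
```

A pixel value close to a heavily weighted, low-variance component yields a much higher likelihood than an arbitrary color, which is what makes the foreground/background decision possible.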
This value increases both as a distribution gains more evidence and as its variance decreases, so the ordering keeps the most likely background distributions on top. The first B distributions are then chosen as the background model, where

B = \arg\min_b \left( \sum_{i=1}^{b} w_i > T \right)

and the threshold T is the minimum portion of the data that should be accounted for by the background model. Background subtraction is done by marking a pixel as foreground if its value is more than 2.5 standard deviations away from all of the B background distributions. The first Gaussian component that matches the pixel value is updated by the following equations:

\mu_{i,t} = (1 - \rho)\,\mu_{i,t-1} + \rho X_t

\sigma_{i,t}^2 = (1 - \rho)\,\sigma_{i,t-1}^2 + \rho\,(X_t - \mu_{i,t})^T (X_t - \mu_{i,t})

where

\rho = \alpha\,\eta(X_t \mid \mu_i, \sigma_i)

The weights of the distributions are adjusted as follows:

w_{i,t} = (1 - \alpha)\,w_{i,t-1} + \alpha M_{i,t}

where \alpha is the learning rate and M_{i,t} is 1 for the distribution which matched and 0 for the remaining distributions. Figure 1 shows the extraction of the moving objects, i.e. two robots and a person, from the background.

Fig. 1. Extraction of the moving objects from the background.

After labeling the connected components in the resulting foreground image, axis-parallel rectangles around these components are computed. The centers of these rectangles are used to specify the positions of the objects in the image.

3 The 2D Tracking Algorithm

The tracking algorithm used in our application is composed of the following steps:

- In two consecutive images, compute the rectangles around the moving objects.
- Extract SIFT features (Scale-Invariant Feature Transform) [11] from both consecutive images and find correspondences between them.
- Identify corresponding rectangles (rectangles which surround the same object) in both images by taking the corresponding SIFT features into account. Two rectangles are considered to correspond when the SIFT features they surround correspond to each other.

SIFT features have been chosen because, as shown in [12], they outperform most other point detectors and are more resilient to image deformations. Figure 2 shows the tracking of objects in two consecutive images taken from one camera.

Fig. 2. Tracking of moving objects in the scene.

Next, the advantages of this algorithm are described.

3.1 Robust Clustering of SIFT Features

As one can see in the left image, SIFT features are computed not only on the moving objects but all over the scene. Thus the task to solve is the clustering of the features which belong to each moving object, so that the position of an object can be tracked into the consecutive image. The clustering of the SIFT features which represent the same object is done here by computing the rectangles around the moving objects; an example can be seen in the right image. Although the SIFT tracking algorithm tracks all SIFT features found in the previous image, only those features which represent the moving object have been selected. The short lines

in the image indicate the trajectories of successfully tracked SIFT features. An alternative approach, clustering the corresponding SIFT features according to the lengths and angles of the translation vectors defined by the two positions of each corresponding feature pair, has also been considered, but it led to very unstable results and imposed too many constraints on the scene. For example, it is not justifiable to restrict objects to a minimum velocity. The speed of an object directly influences the length of its translation vector, so separating the SIFT features of a very slowly moving object from those of the background would require a threshold. This is not acceptable.

3.2 Accurate Specification of the Object's Position

Another advantage of computing the rectangles is the accurate specification of the object's position in the image. As stated before, the position of an object is specified by the center of the rectangle surrounding it. Defining the position of an object as the average position of the SIFT features representing it is not a viable alternative: due to possible changes in illumination or slight rotations of the object, neither the number of SIFT features representing the object nor the features themselves can be assumed to be constant over the whole period of time. Figure 3 reflects this situation graphically.

Fig. 3. Neither the number of SIFT features nor the features themselves (those tracked from image 1 to image 2 versus those tracked from image 2 to image 3) can be assumed to be constant over the whole period of time.
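The rectangle-based clustering of Section 3.1 can be sketched as follows. This is an illustrative helper under assumed data layouts (keypoint positions as a NumPy array, rectangles as corner tuples); the function name is hypothetical.

```python
import numpy as np

# Sketch of rectangle-based clustering: a SIFT feature is assigned to a
# moving object when its position falls inside that object's
# axis-parallel bounding rectangle; features outside every rectangle
# (the static background) stay unassigned.
def cluster_features(keypoints, rectangles):
    """keypoints: (N, 2) array of (x, y) positions.
    rectangles: list of (x_min, y_min, x_max, y_max), one per object.
    Returns, per rectangle, the indices of the keypoints inside it."""
    clusters = []
    for (x0, y0, x1, y1) in rectangles:
        inside = ((keypoints[:, 0] >= x0) & (keypoints[:, 0] <= x1) &
                  (keypoints[:, 1] >= y0) & (keypoints[:, 1] <= y1))
        clusters.append(np.nonzero(inside)[0])
    return clusters

# Example: two objects, four features; the last feature lies on the
# background and belongs to neither cluster.
pts = np.array([[10, 12], [14, 18], [60, 62], [200, 5]], dtype=float)
boxes = [(5, 5, 20, 25), (50, 50, 80, 80)]
clusters = cluster_features(pts, boxes)
```

No velocity threshold appears anywhere in this scheme, which is exactly the advantage argued above over clustering by translation vectors.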

As one can see, the SIFT features which have been tracked from image 1 to image 2 are not the same as those tracked from image 2 to image 3. Thus the average positions calculated from these two different sets of SIFT features, although both represent the same object, vary. Detecting that these two average points belong to the same trajectory would again require a threshold, which is very inconvenient, and the question of how to choose this threshold would also have to be answered. No such threshold is needed when computing rectangles around the objects.

3.3 Effective Cooperation between SIFT Features and Rectangles

One final question arises: why do we need to compute and cluster SIFT features at all? Why is it not enough to compute only the rectangles and track the object's position by checking which rectangles in consecutive images overlap? The answer is simple: for a small, very fast moving object the computed rectangles will not overlap, and the system will fail. Thus the rectangles are used to cluster the SIFT features which represent the same object, and the SIFT features are used to find corresponding rectangles in two consecutive images. It is exactly this cooperation which makes the proposed algorithm successful. Now that the exact position of every object in the scene can be determined in every image, the remaining tasks are the computation of the trajectory of every object and the selection of the object which performs a systematic movement.

4 Robot Detection in both Camera Images

In our application robot detection was done by selecting the moving object which performed a periodic movement. An algorithm for detecting such a movement has been developed. It is composed of the following steps:

- Mapping the objects' trajectories to a more useful and applicable form. The trajectory of an object which is performing a periodic movement is mapped to a cyclic curve.
- Detecting that a given curve has a periodic form.

These steps are described in more detail below.
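The first of the two steps above can be sketched as follows. This is a minimal illustration with a synthetic trajectory, not the paper's code; the helper name `distance_curve` is an assumption.

```python
import numpy as np

# Sketch of the trajectory mapping: reduce an object's 2D trajectory to
# its Euclidean distance from the start position, frame by frame. For a
# periodic back-and-forth movement this curve is (roughly) cyclic.
def distance_curve(trajectory):
    """trajectory: (N, 2) array of pixel positions, one per frame."""
    start = trajectory[0]
    return np.linalg.norm(trajectory - start, axis=1)

# Example: an object oscillating horizontally around its start position.
t = np.arange(100)
traj = np.stack([50.0 + 40.0 * np.sin(2.0 * np.pi * t / 25.0),
                 np.full(100, 80.0)], axis=1)
curve = distance_curve(traj)  # cyclic curve, maxima near 40 pixels
```

The resulting one-dimensional curve is what the maxima/minima analysis of Section 4.2 operates on.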

4.1 Mapping of Object's Trajectories

The position of the object in the first image is chosen as its start position. Then, after every consecutive image, the Euclidean distance between the new position and the start position of the object is computed. Plotting these distances over the number of processed images results in a curve which has a cyclic form when the object is performing a periodic movement. One example of such a curve is shown in figure 4.

Fig. 4. Example of a curve (Euclidean distance over the number of processed images) which results from a periodic movement.

Often the amplitude and the frequency of the resulting curve cannot be assumed to be constant over the whole period of time. As one can see, both the maxima and the minima vary, but they can be expected to lie within some predefined range. An algorithm has been developed which can deal with such impurities.

4.2 Detection of the Periodic Curve

The detection of the periodic curve is based on the identification of maxima and minima, and on the verification of whether the found maxima and minima lie in the predefined ranges. The developed algorithm is an online algorithm and processes every new incoming data point (in our case the newly computed Euclidean distance between the new position and the start position of an object) as it arrives. The algorithm is structured as follows:

Initialization Phase: During this phase the values of the parameters predicted maximum, predicted minimum and predicted period are initialized as the averages of the maxima, minima and periods detected and computed so far. Later these values are used to verify whether, for example, a newly detected maximum lies in the range of the predicted maximum.

Working Phase: During this phase the detection of a periodic curve is performed. For every newly detected maximum or minimum, the algorithm first checks whether it lies in the range of the predicted maximum or minimum respectively. Next it verifies whether the period (the difference in timestamps between the new and the last detected maximum or minimum) lies in the range of the predicted period. When all of these conditions hold, the given curve is identified as representing a periodic movement. The values of the three parameters predicted maximum, predicted minimum and predicted period are updated every time a new maximum or minimum is detected. The algorithm in pseudo code is given in figure 5.

for every new incoming data point do {
    if (maximum detected) {
        if (no predicted maximum computed yet) {
            compute predicted maximum;
        } else {
            if (new maximum lies in the range of predicted maximum) {
                if (no predicted period computed yet) {
                    compute predicted period;
                } else {
                    compute new period;
                    if (new period lies in the range of predicted period) {
                        set variable periodic_movement to true;
                    } else {
                        set variable periodic_movement to false;
                    }
                    update predicted period with new period;
                }
            }
            update predicted maximum with new maximum;
        }
    }
    do the same steps if a minimum has been detected;
}

Fig. 5. Pseudo code of the algorithm to detect periodic curves.

The values of the parameters predicted maximum, predicted minimum and predicted period are computed as the averages of the last three maxima, minima and periods respectively. If not enough data is available yet, the values are computed as the averages of all maxima, minima and periods detected and computed so far. The thresholds of the valid ranges around the predicted maximum, predicted minimum and predicted period have been determined empirically. Through variation of these thresholds the required accuracy of the periodic movement can be adjusted. Now that the robot's position and the SIFT features representing the robot have been determined in both camera images, the next task to solve is the 3D localisation of the robot.

5 3D Localisation of the Robot

The problem of 3D localisation is very common in the computer vision community. Depending on how much information about the scene, the relative camera positions and the camera calibrations is provided, different algorithms have been developed to solve this task. For demonstration purposes a very simple experimental environment has been chosen. The two cameras were positioned 30 cm apart from each other, facing a blackboard; their orientation and calibration were known. The robot was moving in front of the blackboard in the view of both cameras. Figure 6 shows two example images taken by the cameras.

Fig. 6. Two stereo images taken from the two cameras. The red points depict the corresponding SIFT features.

The corresponding SIFT features in the two images were used as input to the triangulation algorithm. Due to measurement errors the backprojected

rays usually do not intersect in space. Their intersection was therefore estimated as the point of minimum distance to both rays. Figure 7 shows the triangulation results from the top view.

Fig. 7. Triangulation of the corresponding SIFT features detected in the two stereo images. The red points correspond to the 3D reconstruction computed at timestamp t and the green points to the reconstruction computed at timestamp t + 1.

Due to the chosen simple environment it is easy to recognise which of the reconstructed points belong to which parts of the scene. The obvious procedure to compute the robot's 3D position would be the following:

- Reconstruct the SIFT features which lie inside the robot's rectangle.
- Compute the average of the reconstructed 3D points.

Unfortunately one problem arises with this approach. Depending on the form of the robot, some SIFT features inside the robot's rectangle may not represent the robot but lie on the background instead. Thus the simple average of the reconstructed 3D points often does not approximate the true position of the robot well. This is especially the case if the distance between the robot and the background is large.
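The midpoint triangulation described above can be sketched as follows. This is a generic sketch of the closest-point-between-rays construction, assuming known ray origins and directions (i.e. calibrated cameras); it is not the paper's implementation, and it assumes the two rays are not parallel.

```python
import numpy as np

# Sketch of midpoint triangulation: back-projected rays rarely intersect,
# so the 3D point is taken midway between the mutually closest points of
# the two rays. Ray i is given by origin o_i and direction d_i.
def midpoint_triangulation(o1, d1, o2, d2):
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = o2 - o1
    c = d1 @ d2                      # cosine of the angle between the rays
    denom = 1.0 - c * c              # zero only for parallel rays (assumed not)
    # Ray parameters of the mutually closest points (from the normal equations).
    t1 = (d1 @ b - c * (d2 @ b)) / denom
    t2 = (c * (d1 @ b) - d2 @ b) / denom
    p1 = o1 + t1 * d1                # closest point on ray 1
    p2 = o2 + t2 * d2                # closest point on ray 2
    return 0.5 * (p1 + p2)           # estimated 3D position

# Example: two cameras 30 cm apart, both rays aimed exactly at (0, 0, 2),
# so the estimate recovers that point.
o1, o2 = np.array([-0.15, 0.0, 0.0]), np.array([0.15, 0.0, 0.0])
target = np.array([0.0, 0.0, 2.0])
point = midpoint_triangulation(o1, target - o1, o2, target - o2)
```

With noisy correspondences the two closest points differ slightly and the returned midpoint splits the residual error evenly between the rays.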

Hence, before computing the 3D reconstruction of the SIFT features, it is necessary to remove outliers. An outlier is here defined as a feature which lies on the background. This can be done efficiently by sorting out features whose positions in two consecutive images taken from one camera did not change.

6 Conclusions

In this paper a behavior based object detection algorithm has been presented. The developed method allows the detection of robots in a scene independently of their appearance and shape. The only requirement imposed on the robots is that they perform a periodic, recurring movement. The presented algorithm circumvents the usual requirement of creating a model of the robot of interest. It is therefore well suited for applications where the creation of a model is not affordable or practicable. Moreover, the presented method builds a preliminary step for an online learning algorithm: once the position of the robot is known, a system can start to learn the appearance and the shape of the robot online. Given such a model, a wide range of already developed algorithms can then be used, for example for tracking or navigation. Although the system described in this paper was trained to recognise periodic movements, it can easily be modified to recognise other movement patterns. One possible application would then be the recognition of thieves in a shop or a parking lot based on their suspicious behavior.

References

1. Ali, A., Aggarwal, J.: Segmentation and recognition of continuous human activity. In: IEEE Workshop on Detection and Recognition of Events in Video, 2001.
2. Avidan, S.: Support vector tracking. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2001.
3. Bashir, F., Schonfeld, D., Khokhar, A.: Segmented trajectory based indexing and retrieval of video data. In: IEEE International Conference on Image Processing (ICIP), Barcelona, Spain, 2003.
4. Broida, T., Chellappa, R.: Estimation of object motion parameters from noisy images.
IEEE Trans. Pattern Analysis and Machine Intelligence 8(1), 90-99, 1986.
5. Chen, W., Chang, S. F.: Motion trajectory matching of video objects. In: IS&T/SPIE, San Jose, CA.
6. Comaniciu, D., Ramesh, V., Meer, P.: Kernel-based object tracking. IEEE Trans. Pattern Analysis and Machine Intelligence 25, 2003.
7. Duda, R. O., Hart, P. E., Stork, D. G.: Pattern Classification. Second Edition, Wiley-Interscience, 2001.
8. Edwards, G., Taylor, C., Cootes, T.: Interpreting face images using active appearance models. In: International Conference on Face and Gesture Recognition, 1998.

9. Harris, C., Stephens, M.: A combined corner and edge detector. In: 4th Alvey Vision Conference, 1988.
10. Hartley, R., Zisserman, A.: Multiple View Geometry in Computer Vision. Second Edition, Cambridge University Press, 2004.
11. Lowe, D.: Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision 60(2), 91-110, 2004.
12. Mikolajczyk, K., Schmid, C.: A performance evaluation of local descriptors. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2003.
13. Moravec, H.: Visual mapping by a robot rover. In: Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), 1979.
14. Papageorgiou, C., Oren, M., Poggio, T.: A general framework for object detection. In: IEEE International Conference on Computer Vision (ICCV), 1998.
15. Serby, D., Koller-Meier, E., Van Gool, L.: Probabilistic object tracking using multiple features. In: IEEE International Conference on Pattern Recognition (ICPR), 2004.
16. Shafique, K., Shah, M.: A non-iterative greedy algorithm for multi-frame point correspondence. In: IEEE International Conference on Computer Vision (ICCV), 2003.
17. Shi, J., Tomasi, C.: Good features to track. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1994.
18. Stauffer, C., Grimson, W. E. L.: Adaptive background mixture models for real-time tracking. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1999.
19. Trucco, E., Verri, A.: Introductory Techniques for 3-D Computer Vision. Prentice Hall, 1998.
20. Veenman, C., Reinders, M., Backer, E.: Resolving motion correspondence for densely moving points. IEEE Trans. Pattern Analysis and Machine Intelligence 23(1), 54-72, 2001.
21. Viola, P., Jones, M., Snow, D.: Detecting pedestrians using patterns of motion and appearance. In: IEEE International Conference on Computer Vision (ICCV), 2003.
22. Yilmaz, A., Javed, O., Shah, M.: Object tracking: A survey. ACM Computing Surveys 38(4), Article 13, 2006.


More information

Pairwise Threshold for Gaussian Mixture Classification and its Application on Human Tracking Enhancement

Pairwise Threshold for Gaussian Mixture Classification and its Application on Human Tracking Enhancement Pairwise Threshold for Gaussian Mixture Classification and its Application on Human Tracking Enhancement Daegeon Kim Sung Chun Lee Institute for Robotics and Intelligent Systems University of Southern

More information

Object detection using non-redundant local Binary Patterns

Object detection using non-redundant local Binary Patterns University of Wollongong Research Online Faculty of Informatics - Papers (Archive) Faculty of Engineering and Information Sciences 2010 Object detection using non-redundant local Binary Patterns Duc Thanh

More information

The SIFT (Scale Invariant Feature

The SIFT (Scale Invariant Feature The SIFT (Scale Invariant Feature Transform) Detector and Descriptor developed by David Lowe University of British Columbia Initial paper ICCV 1999 Newer journal paper IJCV 2004 Review: Matt Brown s Canonical

More information

Detecting Multiple Symmetries with Extended SIFT

Detecting Multiple Symmetries with Extended SIFT 1 Detecting Multiple Symmetries with Extended SIFT 2 3 Anonymous ACCV submission Paper ID 388 4 5 6 7 8 9 10 11 12 13 14 15 16 Abstract. This paper describes an effective method for detecting multiple

More information

A Fast Moving Object Detection Technique In Video Surveillance System

A Fast Moving Object Detection Technique In Video Surveillance System A Fast Moving Object Detection Technique In Video Surveillance System Paresh M. Tank, Darshak G. Thakore, Computer Engineering Department, BVM Engineering College, VV Nagar-388120, India. Abstract Nowadays

More information

Multi-Person Tracking-by-Detection based on Calibrated Multi-Camera Systems

Multi-Person Tracking-by-Detection based on Calibrated Multi-Camera Systems Multi-Person Tracking-by-Detection based on Calibrated Multi-Camera Systems Xiaoyan Jiang, Erik Rodner, and Joachim Denzler Computer Vision Group Jena Friedrich Schiller University of Jena {xiaoyan.jiang,erik.rodner,joachim.denzler}@uni-jena.de

More information

arxiv: v1 [cs.cv] 28 Sep 2018

arxiv: v1 [cs.cv] 28 Sep 2018 Extrinsic camera calibration method and its performance evaluation Jacek Komorowski 1 and Przemyslaw Rokita 2 arxiv:1809.11073v1 [cs.cv] 28 Sep 2018 1 Maria Curie Sklodowska University Lublin, Poland jacek.komorowski@gmail.com

More information

BSB663 Image Processing Pinar Duygulu. Slides are adapted from Selim Aksoy

BSB663 Image Processing Pinar Duygulu. Slides are adapted from Selim Aksoy BSB663 Image Processing Pinar Duygulu Slides are adapted from Selim Aksoy Image matching Image matching is a fundamental aspect of many problems in computer vision. Object or scene recognition Solving

More information

Multibody reconstruction of the dynamic scene surrounding a vehicle using a wide baseline and multifocal stereo system

Multibody reconstruction of the dynamic scene surrounding a vehicle using a wide baseline and multifocal stereo system Multibody reconstruction of the dynamic scene surrounding a vehicle using a wide baseline and multifocal stereo system Laurent Mennillo 1,2, Éric Royer1, Frédéric Mondot 2, Johann Mousain 2, Michel Dhome

More information

A Framework for Multiple Radar and Multiple 2D/3D Camera Fusion

A Framework for Multiple Radar and Multiple 2D/3D Camera Fusion A Framework for Multiple Radar and Multiple 2D/3D Camera Fusion Marek Schikora 1 and Benedikt Romba 2 1 FGAN-FKIE, Germany 2 Bonn University, Germany schikora@fgan.de, romba@uni-bonn.de Abstract: In this

More information

Car tracking in tunnels

Car tracking in tunnels Czech Pattern Recognition Workshop 2000, Tomáš Svoboda (Ed.) Peršlák, Czech Republic, February 2 4, 2000 Czech Pattern Recognition Society Car tracking in tunnels Roman Pflugfelder and Horst Bischof Pattern

More information

An Overview of Matchmoving using Structure from Motion Methods

An Overview of Matchmoving using Structure from Motion Methods An Overview of Matchmoving using Structure from Motion Methods Kamyar Haji Allahverdi Pour Department of Computer Engineering Sharif University of Technology Tehran, Iran Email: allahverdi@ce.sharif.edu

More information

Detecting Printed and Handwritten Partial Copies of Line Drawings Embedded in Complex Backgrounds

Detecting Printed and Handwritten Partial Copies of Line Drawings Embedded in Complex Backgrounds 9 1th International Conference on Document Analysis and Recognition Detecting Printed and Handwritten Partial Copies of Line Drawings Embedded in Complex Backgrounds Weihan Sun, Koichi Kise Graduate School

More information

Local Image Features. Synonyms. Definition. Introduction

Local Image Features. Synonyms. Definition. Introduction L Local Image Features KRYSTIAN MIKOLAJCZYK 1,TINNE TUYTELAARS 2 1 School of Electronics and Physical Sciences, University of Surrey, Guildford, Surrey, UK 2 Department of Electrical Engineering, Katholieke

More information

An Object Detection System using Image Reconstruction with PCA

An Object Detection System using Image Reconstruction with PCA An Object Detection System using Image Reconstruction with PCA Luis Malagón-Borja and Olac Fuentes Instituto Nacional de Astrofísica Óptica y Electrónica, Puebla, 72840 Mexico jmb@ccc.inaoep.mx, fuentes@inaoep.mx

More information

Multi-Channel Adaptive Mixture Background Model for Real-time Tracking

Multi-Channel Adaptive Mixture Background Model for Real-time Tracking Journal of Information Hiding and Multimedia Signal Processing c 2016 ISSN 2073-4212 Ubiquitous International Volume 7, Number 1, January 2016 Multi-Channel Adaptive Mixture Background Model for Real-time

More information

Face Recognition At-a-Distance Based on Sparse-Stereo Reconstruction

Face Recognition At-a-Distance Based on Sparse-Stereo Reconstruction Face Recognition At-a-Distance Based on Sparse-Stereo Reconstruction Ham Rara, Shireen Elhabian, Asem Ali University of Louisville Louisville, KY {hmrara01,syelha01,amali003}@louisville.edu Mike Miller,

More information

Scale Invariant Segment Detection and Tracking

Scale Invariant Segment Detection and Tracking Scale Invariant Segment Detection and Tracking Amaury Nègre 1, James L. Crowley 1, and Christian Laugier 1 INRIA, Grenoble, France firstname.lastname@inrialpes.fr Abstract. This paper presents a new feature

More information

CS 223B Computer Vision Problem Set 3

CS 223B Computer Vision Problem Set 3 CS 223B Computer Vision Problem Set 3 Due: Feb. 22 nd, 2011 1 Probabilistic Recursion for Tracking In this problem you will derive a method for tracking a point of interest through a sequence of images.

More information

Simuntaneous Localisation and Mapping with a Single Camera. Abhishek Aneja and Zhichao Chen

Simuntaneous Localisation and Mapping with a Single Camera. Abhishek Aneja and Zhichao Chen Simuntaneous Localisation and Mapping with a Single Camera Abhishek Aneja and Zhichao Chen 3 December, Simuntaneous Localisation and Mapping with asinglecamera 1 Abstract Image reconstruction is common

More information

Scale Invariant Feature Transform

Scale Invariant Feature Transform Why do we care about matching features? Scale Invariant Feature Transform Camera calibration Stereo Tracking/SFM Image moiaicing Object/activity Recognition Objection representation and recognition Automatic

More information

IMPACT OF SUBPIXEL PARADIGM ON DETERMINATION OF 3D POSITION FROM 2D IMAGE PAIR Lukas Sroba, Rudolf Ravas

IMPACT OF SUBPIXEL PARADIGM ON DETERMINATION OF 3D POSITION FROM 2D IMAGE PAIR Lukas Sroba, Rudolf Ravas 162 International Journal "Information Content and Processing", Volume 1, Number 2, 2014 IMPACT OF SUBPIXEL PARADIGM ON DETERMINATION OF 3D POSITION FROM 2D IMAGE PAIR Lukas Sroba, Rudolf Ravas Abstract:

More information

Feature Tracking and Optical Flow

Feature Tracking and Optical Flow Feature Tracking and Optical Flow Prof. D. Stricker Doz. G. Bleser Many slides adapted from James Hays, Derek Hoeim, Lana Lazebnik, Silvio Saverse, who 1 in turn adapted slides from Steve Seitz, Rick Szeliski,

More information

Visual Tracking (1) Pixel-intensity-based methods

Visual Tracking (1) Pixel-intensity-based methods Intelligent Control Systems Visual Tracking (1) Pixel-intensity-based methods Shingo Kagami Graduate School of Information Sciences, Tohoku University swk(at)ic.is.tohoku.ac.jp http://www.ic.is.tohoku.ac.jp/ja/swk/

More information

Crowd Event Recognition Using HOG Tracker

Crowd Event Recognition Using HOG Tracker Crowd Event Recognition Using HOG Tracker Carolina Gárate Piotr Bilinski Francois Bremond Pulsar Pulsar Pulsar INRIA INRIA INRIA Sophia Antipolis, France Sophia Antipolis, France Sophia Antipolis, France

More information

Simultaneous Appearance Modeling and Segmentation for Matching People under Occlusion

Simultaneous Appearance Modeling and Segmentation for Matching People under Occlusion Simultaneous Appearance Modeling and Segmentation for Matching People under Occlusion Zhe Lin, Larry S. Davis, David Doermann, and Daniel DeMenthon Institute for Advanced Computer Studies University of

More information

Adaptive Background Mixture Models for Real-Time Tracking

Adaptive Background Mixture Models for Real-Time Tracking Adaptive Background Mixture Models for Real-Time Tracking Chris Stauffer and W.E.L Grimson CVPR 1998 Brendan Morris http://www.ee.unlv.edu/~b1morris/ecg782/ 2 Motivation Video monitoring and surveillance

More information

Face Tracking in Video

Face Tracking in Video Face Tracking in Video Hamidreza Khazaei and Pegah Tootoonchi Afshar Stanford University 350 Serra Mall Stanford, CA 94305, USA I. INTRODUCTION Object tracking is a hot area of research, and has many practical

More information

Face detection and recognition. Detection Recognition Sally

Face detection and recognition. Detection Recognition Sally Face detection and recognition Detection Recognition Sally Face detection & recognition Viola & Jones detector Available in open CV Face recognition Eigenfaces for face recognition Metric learning identification

More information

Human-Robot Interaction

Human-Robot Interaction Human-Robot Interaction Elective in Artificial Intelligence Lecture 6 Visual Perception Luca Iocchi DIAG, Sapienza University of Rome, Italy With contributions from D. D. Bloisi and A. Youssef Visual Perception

More information

Probabilistic Facial Feature Extraction Using Joint Distribution of Location and Texture Information

Probabilistic Facial Feature Extraction Using Joint Distribution of Location and Texture Information Probabilistic Facial Feature Extraction Using Joint Distribution of Location and Texture Information Mustafa Berkay Yilmaz, Hakan Erdogan, Mustafa Unel Sabanci University, Faculty of Engineering and Natural

More information

Computer Vision for HCI. Topics of This Lecture

Computer Vision for HCI. Topics of This Lecture Computer Vision for HCI Interest Points Topics of This Lecture Local Invariant Features Motivation Requirements, Invariances Keypoint Localization Features from Accelerated Segment Test (FAST) Harris Shi-Tomasi

More information

Corner Detection. GV12/3072 Image Processing.

Corner Detection. GV12/3072 Image Processing. Corner Detection 1 Last Week 2 Outline Corners and point features Moravec operator Image structure tensor Harris corner detector Sub-pixel accuracy SUSAN FAST Example descriptor: SIFT 3 Point Features

More information

CS4670: Computer Vision

CS4670: Computer Vision CS4670: Computer Vision Noah Snavely Lecture 6: Feature matching and alignment Szeliski: Chapter 6.1 Reading Last time: Corners and blobs Scale-space blob detector: Example Feature descriptors We know

More information

Extracting Spatio-temporal Local Features Considering Consecutiveness of Motions

Extracting Spatio-temporal Local Features Considering Consecutiveness of Motions Extracting Spatio-temporal Local Features Considering Consecutiveness of Motions Akitsugu Noguchi and Keiji Yanai Department of Computer Science, The University of Electro-Communications, 1-5-1 Chofugaoka,

More information

Image Processing. Image Features

Image Processing. Image Features Image Processing Image Features Preliminaries 2 What are Image Features? Anything. What they are used for? Some statements about image fragments (patches) recognition Search for similar patches matching

More information

Subpixel Corner Detection Using Spatial Moment 1)

Subpixel Corner Detection Using Spatial Moment 1) Vol.31, No.5 ACTA AUTOMATICA SINICA September, 25 Subpixel Corner Detection Using Spatial Moment 1) WANG She-Yang SONG Shen-Min QIANG Wen-Yi CHEN Xing-Lin (Department of Control Engineering, Harbin Institute

More information

A GEOMETRIC SEGMENTATION APPROACH FOR THE 3D RECONSTRUCTION OF DYNAMIC SCENES IN 2D VIDEO SEQUENCES

A GEOMETRIC SEGMENTATION APPROACH FOR THE 3D RECONSTRUCTION OF DYNAMIC SCENES IN 2D VIDEO SEQUENCES A GEOMETRIC SEGMENTATION APPROACH FOR THE 3D RECONSTRUCTION OF DYNAMIC SCENES IN 2D VIDEO SEQUENCES Sebastian Knorr, Evren mre, A. Aydın Alatan, and Thomas Sikora Communication Systems Group Technische

More information

Building a Panorama. Matching features. Matching with Features. How do we build a panorama? Computational Photography, 6.882

Building a Panorama. Matching features. Matching with Features. How do we build a panorama? Computational Photography, 6.882 Matching features Building a Panorama Computational Photography, 6.88 Prof. Bill Freeman April 11, 006 Image and shape descriptors: Harris corner detectors and SIFT features. Suggested readings: Mikolajczyk

More information

Factorization with Missing and Noisy Data

Factorization with Missing and Noisy Data Factorization with Missing and Noisy Data Carme Julià, Angel Sappa, Felipe Lumbreras, Joan Serrat, and Antonio López Computer Vision Center and Computer Science Department, Universitat Autònoma de Barcelona,

More information

Local features: detection and description. Local invariant features

Local features: detection and description. Local invariant features Local features: detection and description Local invariant features Detection of interest points Harris corner detection Scale invariant blob detection: LoG Description of local patches SIFT : Histograms

More information

Fast Natural Feature Tracking for Mobile Augmented Reality Applications

Fast Natural Feature Tracking for Mobile Augmented Reality Applications Fast Natural Feature Tracking for Mobile Augmented Reality Applications Jong-Seung Park 1, Byeong-Jo Bae 2, and Ramesh Jain 3 1 Dept. of Computer Science & Eng., University of Incheon, Korea 2 Hyundai

More information

Accurate Motion Estimation and High-Precision 3D Reconstruction by Sensor Fusion

Accurate Motion Estimation and High-Precision 3D Reconstruction by Sensor Fusion 007 IEEE International Conference on Robotics and Automation Roma, Italy, 0-4 April 007 FrE5. Accurate Motion Estimation and High-Precision D Reconstruction by Sensor Fusion Yunsu Bok, Youngbae Hwang,

More information

IMPROVING SPATIO-TEMPORAL FEATURE EXTRACTION TECHNIQUES AND THEIR APPLICATIONS IN ACTION CLASSIFICATION. Maral Mesmakhosroshahi, Joohee Kim

IMPROVING SPATIO-TEMPORAL FEATURE EXTRACTION TECHNIQUES AND THEIR APPLICATIONS IN ACTION CLASSIFICATION. Maral Mesmakhosroshahi, Joohee Kim IMPROVING SPATIO-TEMPORAL FEATURE EXTRACTION TECHNIQUES AND THEIR APPLICATIONS IN ACTION CLASSIFICATION Maral Mesmakhosroshahi, Joohee Kim Department of Electrical and Computer Engineering Illinois Institute

More information

A Real Time Human Detection System Based on Far Infrared Vision

A Real Time Human Detection System Based on Far Infrared Vision A Real Time Human Detection System Based on Far Infrared Vision Yannick Benezeth 1, Bruno Emile 1,Hélène Laurent 1, and Christophe Rosenberger 2 1 Institut Prisme, ENSI de Bourges - Université d Orléans

More information

Autonomous Navigation for Flying Robots

Autonomous Navigation for Flying Robots Computer Vision Group Prof. Daniel Cremers Autonomous Navigation for Flying Robots Lecture 7.1: 2D Motion Estimation in Images Jürgen Sturm Technische Universität München 3D to 2D Perspective Projections

More information

Object Tracking using HOG and SVM

Object Tracking using HOG and SVM Object Tracking using HOG and SVM Siji Joseph #1, Arun Pradeep #2 Electronics and Communication Engineering Axis College of Engineering and Technology, Ambanoly, Thrissur, India Abstract Object detection

More information

Feature Based Registration - Image Alignment

Feature Based Registration - Image Alignment Feature Based Registration - Image Alignment Image Registration Image registration is the process of estimating an optimal transformation between two or more images. Many slides from Alexei Efros http://graphics.cs.cmu.edu/courses/15-463/2007_fall/463.html

More information

A Hybrid Face Detection System using combination of Appearance-based and Feature-based methods

A Hybrid Face Detection System using combination of Appearance-based and Feature-based methods IJCSNS International Journal of Computer Science and Network Security, VOL.9 No.5, May 2009 181 A Hybrid Face Detection System using combination of Appearance-based and Feature-based methods Zahra Sadri

More information

Invariant Features from Interest Point Groups

Invariant Features from Interest Point Groups Invariant Features from Interest Point Groups Matthew Brown and David Lowe {mbrown lowe}@cs.ubc.ca Department of Computer Science, University of British Columbia, Vancouver, Canada. Abstract This paper

More information

Model-based segmentation and recognition from range data

Model-based segmentation and recognition from range data Model-based segmentation and recognition from range data Jan Boehm Institute for Photogrammetry Universität Stuttgart Germany Keywords: range image, segmentation, object recognition, CAD ABSTRACT This

More information

Global localization from a single feature correspondence

Global localization from a single feature correspondence Global localization from a single feature correspondence Friedrich Fraundorfer and Horst Bischof Institute for Computer Graphics and Vision Graz University of Technology {fraunfri,bischof}@icg.tu-graz.ac.at

More information

Improved Hand Tracking System Based Robot Using MEMS

Improved Hand Tracking System Based Robot Using MEMS Improved Hand Tracking System Based Robot Using MEMS M.Ramamohan Reddy P.G Scholar, Department of Electronics and communication Engineering, Malla Reddy College of engineering. ABSTRACT: This paper presents

More information

CS 231A Computer Vision (Fall 2012) Problem Set 3

CS 231A Computer Vision (Fall 2012) Problem Set 3 CS 231A Computer Vision (Fall 2012) Problem Set 3 Due: Nov. 13 th, 2012 (2:15pm) 1 Probabilistic Recursion for Tracking (20 points) In this problem you will derive a method for tracking a point of interest

More information

Augmented Reality VU. Computer Vision 3D Registration (2) Prof. Vincent Lepetit

Augmented Reality VU. Computer Vision 3D Registration (2) Prof. Vincent Lepetit Augmented Reality VU Computer Vision 3D Registration (2) Prof. Vincent Lepetit Feature Point-Based 3D Tracking Feature Points for 3D Tracking Much less ambiguous than edges; Point-to-point reprojection

More information

Final Exam Study Guide

Final Exam Study Guide Final Exam Study Guide Exam Window: 28th April, 12:00am EST to 30th April, 11:59pm EST Description As indicated in class the goal of the exam is to encourage you to review the material from the course.

More information

Generic Face Alignment Using an Improved Active Shape Model

Generic Face Alignment Using an Improved Active Shape Model Generic Face Alignment Using an Improved Active Shape Model Liting Wang, Xiaoqing Ding, Chi Fang Electronic Engineering Department, Tsinghua University, Beijing, China {wanglt, dxq, fangchi} @ocrserv.ee.tsinghua.edu.cn

More information

Object and Class Recognition I:

Object and Class Recognition I: Object and Class Recognition I: Object Recognition Lectures 10 Sources ICCV 2005 short courses Li Fei-Fei (UIUC), Rob Fergus (Oxford-MIT), Antonio Torralba (MIT) http://people.csail.mit.edu/torralba/iccv2005

More information

Using temporal seeding to constrain the disparity search range in stereo matching

Using temporal seeding to constrain the disparity search range in stereo matching Using temporal seeding to constrain the disparity search range in stereo matching Thulani Ndhlovu Mobile Intelligent Autonomous Systems CSIR South Africa Email: tndhlovu@csir.co.za Fred Nicolls Department

More information

Feature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies

Feature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies Feature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies M. Lourakis, S. Tzurbakis, A. Argyros, S. Orphanoudakis Computer Vision and Robotics Lab (CVRL) Institute of

More information

A Novel Extreme Point Selection Algorithm in SIFT

A Novel Extreme Point Selection Algorithm in SIFT A Novel Extreme Point Selection Algorithm in SIFT Ding Zuchun School of Electronic and Communication, South China University of Technolog Guangzhou, China zucding@gmail.com Abstract. This paper proposes

More information

A Two-stage Scheme for Dynamic Hand Gesture Recognition

A Two-stage Scheme for Dynamic Hand Gesture Recognition A Two-stage Scheme for Dynamic Hand Gesture Recognition James P. Mammen, Subhasis Chaudhuri and Tushar Agrawal (james,sc,tush)@ee.iitb.ac.in Department of Electrical Engg. Indian Institute of Technology,

More information

Visual Bearing-Only Simultaneous Localization and Mapping with Improved Feature Matching

Visual Bearing-Only Simultaneous Localization and Mapping with Improved Feature Matching Visual Bearing-Only Simultaneous Localization and Mapping with Improved Feature Matching Hauke Strasdat, Cyrill Stachniss, Maren Bennewitz, and Wolfram Burgard Computer Science Institute, University of

More information

Video Processing for Judicial Applications

Video Processing for Judicial Applications Video Processing for Judicial Applications Konstantinos Avgerinakis, Alexia Briassouli, Ioannis Kompatsiaris Informatics and Telematics Institute, Centre for Research and Technology, Hellas Thessaloniki,

More information

A Simple Interface for Mobile Robot Equipped with Single Camera using Motion Stereo Vision

A Simple Interface for Mobile Robot Equipped with Single Camera using Motion Stereo Vision A Simple Interface for Mobile Robot Equipped with Single Camera using Motion Stereo Vision Stephen Karungaru, Atsushi Ishitani, Takuya Shiraishi, and Minoru Fukumi Abstract Recently, robot technology has

More information

Multi-Object Tracking Based on Tracking-Learning-Detection Framework

Multi-Object Tracking Based on Tracking-Learning-Detection Framework Multi-Object Tracking Based on Tracking-Learning-Detection Framework Songlin Piao, Karsten Berns Robotics Research Lab University of Kaiserslautern Abstract. This paper shows the framework of robust long-term

More information

Synchronized Ego-Motion Recovery of Two Face-to-Face Cameras

Synchronized Ego-Motion Recovery of Two Face-to-Face Cameras Synchronized Ego-Motion Recovery of Two Face-to-Face Cameras Jinshi Cui, Yasushi Yagi, Hongbin Zha, Yasuhiro Mukaigawa, and Kazuaki Kondo State Key Lab on Machine Perception, Peking University, China {cjs,zha}@cis.pku.edu.cn

More information

Eye Detection by Haar wavelets and cascaded Support Vector Machine

Eye Detection by Haar wavelets and cascaded Support Vector Machine Eye Detection by Haar wavelets and cascaded Support Vector Machine Vishal Agrawal B.Tech 4th Year Guide: Simant Dubey / Amitabha Mukherjee Dept of Computer Science and Engineering IIT Kanpur - 208 016

More information

An Image Based 3D Reconstruction System for Large Indoor Scenes

An Image Based 3D Reconstruction System for Large Indoor Scenes 36 5 Vol. 36, No. 5 2010 5 ACTA AUTOMATICA SINICA May, 2010 1 1 2 1,,,..,,,,. : 1), ; 2), ; 3),.,,. DOI,,, 10.3724/SP.J.1004.2010.00625 An Image Based 3D Reconstruction System for Large Indoor Scenes ZHANG

More information

People detection in complex scene using a cascade of Boosted classifiers based on Haar-like-features

People detection in complex scene using a cascade of Boosted classifiers based on Haar-like-features People detection in complex scene using a cascade of Boosted classifiers based on Haar-like-features M. Siala 1, N. Khlifa 1, F. Bremond 2, K. Hamrouni 1 1. Research Unit in Signal Processing, Image Processing

More information

Scale Invariant Feature Transform

Scale Invariant Feature Transform Scale Invariant Feature Transform Why do we care about matching features? Camera calibration Stereo Tracking/SFM Image moiaicing Object/activity Recognition Objection representation and recognition Image

More information

Learning a Sparse, Corner-based Representation for Time-varying Background Modelling

Learning a Sparse, Corner-based Representation for Time-varying Background Modelling Learning a Sparse, Corner-based Representation for Time-varying Background Modelling Qiang Zhu 1, Shai Avidan 2, Kwang-Ting Cheng 1 1 Electrical & Computer Engineering Department University of California

More information