Combining Multiple Tracking Modalities for Vehicle Tracking in Traffic Intersections
Combining Multiple Tracking Modalities for Vehicle Tracking in Traffic Intersections

Harini Veeraraghavan, Nikolaos Papanikolopoulos
Artificial Intelligence, Vision and Robotics Lab
Department of Computer Science and Engineering
University of Minnesota

Abstract

This paper presents a camera-based system for tracking vehicles in outdoor scenes such as traffic intersections. Two different tracking systems, namely, a blob tracker and a Mean Shift tracker, provide the position of each target. These results are then fused sequentially using an extended Kalman filter. The tracking reliability of the blob tracker is improved by using oriented bounding boxes (which provide a much tighter fit than axis-aligned boxes) to represent the blobs and a Joint Probabilistic Data Association filter for dealing with data association ambiguity. The Mean Shift tracker is as proposed by Comaniciu et al. [3]. We show that the above tracking formulation can provide reasonable tracking despite the stop-and-go motion of vehicles and clutter in traffic intersections.

Keywords: Blob tracking, Mean Shift tracking, Joint Probabilistic Data Association filter, stop-and-go traffic.

1 Introduction

The main requirement for any vision-based outdoor tracking system is robustness to the variabilities in the visual data presented by the environment. Changes in viewpoint, occlusions, illumination changes, and different postures of the target constantly modify the information presented to the sensing system. In the context of robotics, consider the example of landmark-based navigation outdoors. In such cases, reliance on the color, edges, or shape of a landmark alone would not be sufficient for localization. The goal of our project is to monitor activities at traffic intersections.
The tracking system has to deal not only with the usual problems of uncontrolled outdoor scenes, such as varying illumination, shadows, and clutter, but also with non-free-flowing traffic and increased congestion at the intersections. Most adaptive background learning schemes use the assumption that background pixels are generally non-varying. However, one problem with this is that stopped foreground targets will be modeled into the background. In the context of intersection monitoring, this presents a problem, as vehicles and pedestrians stop regularly at intersections. Ignoring such vehicles not only results in tracking failure but also affects the performance of the incident monitoring system. Further, increased congestion at intersections results in increased target-data association complexity.

Figure 1: Tracking approach (input image, image segmentation, connected region extraction, blob tracking, Mean Shift tracking, incident detection and visualization).

Tracking systems that make use of the image content, such as target feature templates and color distributions, are not affected by ineffective segmentation. However, changes in the pose of the target or a change in viewpoint might alter the color distribution. Similarly, template matching methods are generally affected by target rotation. Focus of attention, or identifying regions of interest, is important in several robotic applications. Most attention systems work by looking for specific predefined features, such as templates or dark spots in images, depending on the application. In our case, the targets of interest are vehicles and pedestrians. These generally appear in the scene and are in constant motion (other than stopping at intersections). Hence, they can be easily segmented out using a segmentation algorithm. In this paper, we address the problem of target tracking
in outdoor environments by making use of the following cues: foreground regions (obtained from image segmentation) and the individual target's color distribution. Foreground region segmentation is achieved by an adaptive background segmentation scheme based on the mixture-of-Gaussians model proposed by Stauffer and Grimson [11]. Target color distributions (represented by color histograms) are initialized automatically on the detected foreground target regions. Initialized targets are tracked across frames using an extended Kalman filter. Predicted target positions are used as starting guesses when searching for the targets in the next frame. The results from the tracking module are presented to the incident detection module. The workflow of the system is presented in Figure 1.

Multiple-cue-based tracking has been used frequently in outdoor tracking. Hong et al. [4] combined ladar with a color camera mounted on an unmanned vehicle for navigating in an outdoor environment (avoiding obstacles such as puddles, finding the road, etc.). In the context of vehicle tracking, Malik et al. [5] used a tracker based on two linear Kalman filters, one for estimating the position and the other for estimating the shape of vehicles moving in highway scenes. Similar to this approach, Meyer et al. [7] combined a motion filter estimating the affine parameters of an object for position estimation with a geometric Kalman filter for shape estimation. Cues such as motion derived from optical flow, shading, and edges have been used by Lu et al. [6] for tracking hands; the cues are integrated using a model-based approach. Sidenbladh and Black [9] used edge, ridge, and motion cues to learn probabilistic models of people's motion. Rasmussen and Hager [8] use different cues obtained from homogeneous regions, snakes, and textures for tracking targets under occlusion using constrained Joint Probabilistic Data Association filters. Soto and Khosla
[10] dynamically combined multiple visual cues in an agent-based framework for human tracking.

The novelty of our approach is that we use two different tracking modalities that essentially provide the same information, namely, the target's position. Instead of using the most dominant or promising cue at a time, we make use of the individual position information obtained from both cues. Further, instead of representing the individual information as a full measurement vector, the position measurements are treated as arising from two independent sensing modalities, and hence they are fused sequentially in a Kalman filter. The blob tracking is made more robust by using a Joint Probabilistic Data Association filter.

This paper is arranged as follows. Section 2 discusses the theory behind cue fusion. Section 3 discusses the target detection and tracking initialization methodology. The blob tracking method is discussed in Section 4, while the Mean Shift tracking method is briefly discussed in Section 5. Section 6 describes the measurement error computation, results and their discussion are in Section 7, and conclusions are drawn in Section 8.

2 Background

At time t, when n measurements are available from n different sensors, the standard method for updating the state in a Kalman filter is to stack the entire measurement set into a single measurement vector and incorporate all of them simultaneously. However, when the measurements are uncorrelated, that is, their individual noises w_i(t), i = 1, ..., n, are uncorrelated,

R(t) = E[w(t) w(t)^T] = diag[r_1(t), ..., r_n(t)],  with w(t) = [w_1(t), ..., w_n(t)]^T   (1)

then one can carry out the update step sequentially, incorporating the measurements one after the other, as explained by Bar-Shalom et al. in [2]. This is because the measurement noises are still white (due to the uncorrelatedness) for use in the Kalman filter. Further, note that uncorrelatedness in the measurement errors means that the order of application of the individual measurements does not matter.
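The sequential update described above can be sketched as follows. This is a minimal linear-Kalman sketch in Python/NumPy; the state layout, matrices, and the two example measurements are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """One standard Kalman measurement update."""
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ (z - H @ x)           # corrected state
    P = (np.eye(len(x)) - K @ H) @ P  # corrected covariance
    return x, P

# State: [px, py, vx, vy]; both trackers measure position only.
H = np.hstack([np.eye(2), np.zeros((2, 2))])
x = np.array([0.0, 0.0, 1.0, 0.5])
P = np.eye(4)

# Uncorrelated sensor noises: apply the two position measurements
# (blob tracker, then Mean Shift tracker) one after the other.
z_blob, R_blob = np.array([0.9, 0.4]), np.diag([0.5, 0.5])
z_ms,   R_ms   = np.array([1.1, 0.6]), np.diag([0.3, 0.3])
x, P = kalman_update(x, P, z_blob, H, R_blob)
x, P = kalman_update(x, P, z_ms, H, R_ms)
```

Because the two measurement noises are uncorrelated, applying the two measurements in either order yields the same posterior state and covariance.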
In other words, the measurements from the individual sensors can be applied in any order. In our case, the position measurement for each target is obtained from two different tracking modalities, namely, the region or blob tracker and the Mean Shift tracker. The two trackers operate independently of each other, and the noise in the target localization from the two trackers is independent. The target localization accuracy of the region tracker depends on the accuracy of the segmentation algorithm and on the presence or absence of multiple measurements arising from target or background occlusions. The noise in the target localization of the Mean Shift tracker arises primarily from the target getting occluded and from changes in its color distribution that might occur due to pose change. Hence, the position measurements from the two trackers can be applied sequentially to the Kalman filter.

Targets are modeled as first-order translational models in 2D. In other words, the targets are assumed to move with constant velocities. This does not always hold true, as vehicles stop at intersections and decelerate before turning. However, since the vehicles are generally slow moving at the intersections, the acceleration is not very large, and hence the simpler model can still be used. The vehicles are tracked in scene coordinates. Hence, extended Kalman filters are used for tracking, owing to the nonlinear mapping of the measurements from the image space to the target state space.
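The first-order (constant-velocity) translational model above can be sketched as follows; the frame interval dt and the process-noise magnitude are illustrative assumptions not given in the paper.

```python
import numpy as np

dt = 1.0 / 15.0  # assumed frame interval in seconds (not specified in the paper)

# State [px, py, vx, vy]: constant-velocity translational model in 2D.
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)

def predict(x, P, q=1e-2):
    """Kalman prediction step with a simple isotropic process noise q."""
    x = F @ x
    P = F @ P @ F.T + q * np.eye(4)
    return x, P

# One prediction step: the position advances by v * dt, velocity is unchanged.
x, P = predict(np.array([10.0, 5.0, 3.0, 0.0]), np.eye(4))
```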
3 Target Detection

An adaptive region segmentation method is used for detecting targets of interest. Our region segmentation method is based on the mixture-of-Gaussians approach proposed by Stauffer and Grimson [11]. Each pixel has a mixture of Gaussian distributions (in our case, 6) associated with it, each represented by its sufficient statistics and a weight ω indicative of its frequency of occurrence. This method classifies a pixel on the basis that non-varying pixels (those having a lower variance and a larger weight) are background, while the others are foreground. Foreground regions are extracted using connected components extraction. In order to prevent false detections, only regions that can be tracked successively over the first few frames are classified as targets. Each blob is initialized as a Moving Object (MO), which is then tracked in subsequent frames using an extended Kalman filter as described in Section 4 and Section 5. A MO is an abstraction of the blobs and can have more than one blob associated with it in each frame. This accounts for the fact that blobs of vehicles and pedestrians can split and merge due to occlusions. Every MO is initialized with the blob's position and velocity (computed from tracking over the past few frames) and its color distribution. The color distribution is computed as a histogram (consisting of 16 bins) of the blob region bounded by its axis-aligned bounding box. It is necessary to include only regions belonging to the target in the histogram computation for good localization. As it is difficult to delineate the target correctly with axis-aligned bounding boxes, only half the bounding box size around the center is used for computing the histogram. Furthermore, the histograms are weighted relative to the surrounding background, which increases the relevance of features distinct from the background. The histogram weighting is based on the method described by Comaniciu et al.
[3], and the weights are computed as

w_i = min(B̂* / B̂_i, 1),  i = 1, ..., m   (2)

where B̂_i, i = 1, ..., m, is the discrete representation of the surrounding background's histogram and B̂* is its smallest nonzero entry. The term w_i represents the weighting of each bin; its effect is to reduce the importance of the portions of the foreground model that are similar to the background model.

4 Blob Tracking

The blobs obtained from region segmentation are represented as oriented bounding boxes (obtained from a principal component analysis of the blobs), which are then tracked from frame to frame. The reason for using oriented bounding boxes as opposed to conventional axis-aligned boxes is their tighter target representation, which makes data association in the case of multiple target tracking easier. These are illustrated in Figure 2.

The tracked blobs are represented as Moving Objects (MOs), which are tracked using their respective extended Kalman filters. The relation between the blobs and the MOs is represented in a bipartite graph. The details of the graph construction and pruning are explained in our previous work [13].

Figure 2: Oriented bounding boxes vs. axis-aligned box fit. The oriented bounding boxes provide a much closer fit to the vehicles than axis-aligned boxes.

4.1 Joint Probabilistic Data Association

Presenting reliable measurements to the filter is necessary to ensure reliable tracking of targets. Data association ambiguity in the context of region tracking for multiple targets arises from the presence of target-target and target-background occlusions. In general, data association ambiguity in scenes may arise due to: 1. noise-like visual occurrences, 2. persistent known scene elements (i.e., other tracked targets), and 3. persistent, unknown scene elements (occurring due to uninstantiated targets). In addition, occlusions can also render the target partially or totally unobservable.
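The oriented-bounding-box fit described above (a principal component analysis of the blob pixels) can be sketched as follows; the synthetic elongated blob is an illustrative assumption.

```python
import numpy as np

def oriented_bounding_box(points):
    """Fit an oriented bounding box to blob pixel coordinates via PCA.
    Returns (center, axes, half_extents): axes are the principal
    directions (columns), half_extents the box half-sizes along them."""
    pts = np.asarray(points, dtype=float)
    center = pts.mean(axis=0)
    centered = pts - center
    # Principal axes are the eigenvectors of the 2x2 covariance matrix.
    cov = np.cov(centered.T)
    _, axes = np.linalg.eigh(cov)
    proj = centered @ axes                 # coordinates in the box frame
    half_extents = (proj.max(axis=0) - proj.min(axis=0)) / 2.0
    return center, axes, half_extents

# A synthetic elongated "blob" at 45 degrees: the oriented box is much
# tighter than the axis-aligned one, as in Figure 2.
t = np.linspace(-5, 5, 200)
blob = np.stack([t, t], axis=1) + np.random.default_rng(0).normal(0, 0.2, (200, 2))
center, axes, half = oriented_bounding_box(blob)
```

For this diagonal blob the oriented box encloses a far smaller area than the axis-aligned box spanned by the raw coordinate ranges, which is exactly why the oriented fit eases data association between nearby targets.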
These problems manifest in our case as the number of blob associations a vehicle has in a frame. The Joint Probabilistic Data Association Filter (JPDAF) is a suboptimal Bayesian approach for tracking a known number of targets in clutter. The problem of tracking can be formulated as the problem of assigning a set of measurements m_1, ..., m_M to the known set of targets x_1, ..., x_N at each time instant t. This is evaluated from the sets of joint events (consisting of target-measurement association pairs (x_i, m_j)) at time t. A joint event is formed with the constraints that no two targets can share the same measurement, and that no more than one measurement can be associated with a target in one joint event. However, a different measurement can be associated
with the same target in a different joint event at the same time t. The target-measurement association is then evaluated by marginalizing over all the joint event pairs at time t. For details, interested readers may refer to [1].

In the traditional JPDAF framework, all measurements are assumed to be related to all targets. However, using the full set of target-measurement associations would make the joint event calculation computationally very expensive. Gating-based methods using Mahalanobis distances are commonly used to limit the number of target-measurement associations. In our case, the results from blob tracking are used to limit the number of associations. In other words, only those blobs related to a given target through its child blob(s) from the previous frame are used.

5 Mean Shift Tracking

The method is based on a target representation using a nonparametric isotropic kernel. Tracking involves target localization using a gradient-based search procedure that compares the target model with the image. A detailed explanation of the method can be found in [3]. The target model is represented using an m-bin histogram. The model is normalized to eliminate the influence of the target dimensions by independently rescaling the row and column dimensions; thus, the target model is centered at 0. In each subsequent frame, the target candidates are computed and each is compared with the target model using a similarity function ρ̂(y) = ρ[p̂(y), q̂], where p̂(y) and q̂ are the target candidate and the target model, respectively, and y is the candidate target location. The target candidates are also normalized based on the target model. The function ρ̂ acts as a likelihood function, so that its local maxima indicate the target's position in the current frame.
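Following [3], the similarity function compares the candidate and model histograms via the Bhattacharyya coefficient. A minimal sketch (the example histograms are illustrative assumptions):

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two normalized m-bin histograms."""
    return np.sum(np.sqrt(np.asarray(p) * np.asarray(q)))

def distance(p, q):
    """Distance d = sqrt(1 - rho) between candidate and model histograms."""
    return np.sqrt(max(0.0, 1.0 - bhattacharyya(p, q)))

q_model = np.array([0.5, 0.3, 0.2, 0.0])  # target model
p_same  = q_model.copy()                  # candidate identical to the model
p_other = np.array([0.0, 0.1, 0.2, 0.7])  # dissimilar candidate

# Identical histograms give rho = 1 (distance 0); dissimilar ones score lower,
# so maximizing rho (minimizing d) localizes the target.
```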
The similarity function defines a distance between the target model and the candidate as

d(y) = √(1 − ρ[p̂(y), q̂])   (3)

where ρ[p̂(y), q̂] is an estimate of the Bhattacharyya coefficient,

ρ[p̂(y), q̂] = Σ_{u=1}^{m} √(p̂_u(y) q̂_u)   (4)

The goal of the tracking algorithm is the minimization of this distance as a function of y, which is equivalent to maximizing the Bhattacharyya coefficient. In each frame, the model is searched for starting from the predicted target position. In order to account for scale changes of the target as it translates in the image, its scaling factors along x and y are computed to adapt the kernel bandwidth in each frame.

6 Measurement Error

The measurement error covariance R_k for the filter is given by

R_k = diag(σ²_{k,x}, σ²_{k,y})   (5)

which represents the error in the target position measurement in the x and y image coordinates. The measurement error standard deviations σ_{k,x} and σ_{k,y} for the blob tracker are obtained from the variance in the percentage difference between the measured and the previously measured size (area). In the case of the Mean Shift tracker, the measurement uncertainty is computed as the standard deviation of a scaled Gaussian distribution fitted to three points on the similarity surface for each coordinate (x and y). The three points along each coordinate direction are chosen as the center of the localized target and two points on either side of the center, at a distance equal to the dimension of the bounding box along that coordinate axis.

7 Results and Discussion

Figure 3 shows tracking in an intersection scene. Vehicle number 34 is tracked consistently although it stops for 650 frames. The sequence also shows how vehicles 34 and 44 are tracked consistently despite occlusion. Tracking under multiple occlusions is shown in Figure 4, where vehicle 6 undergoes multiple occlusions. As shown, it is picked up even after being occluded for several frames behind the truck.
Figure 5 compares the tracking performance of blob tracking alone with tracking using both blob and Mean Shift tracking. Blob tracking alone fails in this case because the tracker jumps from the black vehicle to the white vehicle, due to an occlusion occurring between the two before a track was initialized for the white car. Adding the Mean Shift tracker, however, makes it possible to track the black vehicle reliably. Using two cues instead of one clearly improves the tracking performance in cases where one cue fails to provide good measurements, as shown in Figure 5.

Triesch and von der Malsburg [12] used five different cues, namely color, shape, motion prediction, contrast, and motion detection, for tracking faces. Each cue provides a quality metric indicative of its goodness of fit, which is then used for integrating the cues. In our approach, the different modalities provide this quality metric in terms of the measurement error (provided to the Kalman filter) computed in each frame for each tracker. The tracking system provides reliable data for most targets, but fails when reliable data cannot be obtained from either of the sources. This occurs, for example, when the target is occluded behind a structure or another target. This clearly suggests that we need to make use of more information to achieve reliable tracking. The targets are initialized based
Figure 3: Tracking sequence showing tracking of stop-and-go traffic (frames 6531, 7189, and 7468). The sequence also illustrates occlusion handling between vehicles 34 and 44.

Figure 4: Tracking sequence under occlusions (frames 296, 346, and 384). Vehicle 6 is successfully tracked despite occlusions.

on the results from the blob tracking. Hence, when two targets enter the scene together, they are segmented as one blob and can therefore be initialized as one target. This is illustrated in Figure 4, where vehicle 6 is actually two cars but gets initialized as one because they enter the scene together. Currently, there is no way to distinguish the two targets in such cases. Another problem, which should be obvious to the reader from some of the results shown in this paper, is that of false targets. False targets are blobs produced by segmentation failures during illumination changes. These blobs are difficult to remove, as they will be tracked consistently by the Mean Shift tracker. Currently, they are removed by aging the targets that are not tracked by blob tracking at all and removing them after a certain length of time. At present, the two tracking methods are executed serially, one after the other, which slows down the execution speed. One scope for improvement is to run the two tracking modalities in parallel and feed the measurements to the filter based on their time of arrival.

8 Conclusions

This paper presented a method for tracking targets in outdoor scenes using two different cues. Each cue is an independent tracking system which provides an estimate of the target's position with uncertainty (modeled as measurement error) to an extended Kalman filter. We demonstrated how such a system can not only improve tracking reliability but can also be employed successfully in traffic intersection scenes with vehicles exhibiting stop-and-go motion.

9 Acknowledgements

This work has been supported in part by the Minnesota Department of Transportation, the ITS Institute at the University of Minnesota, and the National Science Foundation through grants #CMS and #IIS. We would also like to thank Guillame Gasser for providing the code for Mean Shift tracking.

References

[1] Y. Bar-Shalom and T. E. Fortmann. Tracking and data association. Academic Press.

[2] Y. Bar-Shalom, X. Rong Li, and T. Kirubarajan. Estimation with applications to tracking and navigation. John Wiley and Sons.
Figure 5: Tracking sequence (frames 1933, 2063, and 2163) showing the performance of blob tracking alone and of tracking using both blob and Mean Shift tracking. As can be seen, the blob tracker tracking the black car jumps to the (uninitialized) white car, while the tracker using both blob and Mean Shift tracking tracks the black car successfully.

[3] D. Comaniciu, V. Ramesh, and P. Meer. Kernel-based object tracking. IEEE Trans. Pattern Analysis and Machine Intelligence, volume 25.

[4] T. Hong, T. Chang, C. Rasmussen, and M. Shneier. Feature detection and tracking for mobile robots using a combination of ladar and color images. In IEEE International Conf. on Robotics and Automation.

[5] D. Koller, J. Weber, and J. Malik. Robust multiple car tracking with occlusion reasoning. In ECCV (1).

[6] S. Lu, D. Metaxas, D. Samaras, and J. Oliensis. Using multiple cues for hand tracking and model refinement. In Proc. Computer Vision and Pattern Recognition Conf.

[7] F. G. Meyer and P. Bouthemy. Region-based tracking using affine motion models in long image sequences. Computer Vision, Graphics and Image Processing: Image Processing, volume 60, September.

[8] C. Rasmussen and G. Hager. Joint probabilistic techniques for tracking multi-part objects. In Proc. Computer Vision and Pattern Recognition Conf., pages 16-21.

[9] H. Sidenbladh and M. Black. Learning image statistics for Bayesian tracking. In Int. Conf. on Computer Vision, volume II.

[10] A. Soto and P. Khosla. Probabilistic adaptive agent based system for dynamic state estimation using multiple visual cues. In 10th International Symposium of Robotics Research (ISRR 2001), Lorne, Victoria, Australia, November 2001.

[11] C. Stauffer and W. E. L. Grimson. Adaptive background mixture models for real-time tracking. In Proc. Computer Vision and Pattern Recognition Conf. (CVPR 99), June 1999.

[12] J. Triesch and C. von der Malsburg. Democratic integration: Self-organized integration of adaptive cues. Neural Computation, 13(9).

[13] H. Veeraraghavan, O. Masoud, and N. P. Papanikolopoulos. Vision-based monitoring of intersections. In IEEE Conf. on Intelligent Transportation Systems, pages 7-12.
More informationPeople Tracking and Segmentation Using Efficient Shape Sequences Matching
People Tracking and Segmentation Using Efficient Shape Sequences Matching Junqiu Wang, Yasushi Yagi, and Yasushi Makihara The Institute of Scientific and Industrial Research, Osaka University 8-1 Mihogaoka,
More informationLong-term motion estimation from images
Long-term motion estimation from images Dennis Strelow 1 and Sanjiv Singh 2 1 Google, Mountain View, CA, strelow@google.com 2 Carnegie Mellon University, Pittsburgh, PA, ssingh@cmu.edu Summary. Cameras
More informationElliptical Head Tracker using Intensity Gradients and Texture Histograms
Elliptical Head Tracker using Intensity Gradients and Texture Histograms Sriram Rangarajan, Dept. of Electrical and Computer Engineering, Clemson University, Clemson, SC 29634 srangar@clemson.edu December
More informationMulti-Camera Calibration, Object Tracking and Query Generation
MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.merl.com Multi-Camera Calibration, Object Tracking and Query Generation Porikli, F.; Divakaran, A. TR2003-100 August 2003 Abstract An automatic object
More informationSUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS
SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS Cognitive Robotics Original: David G. Lowe, 004 Summary: Coen van Leeuwen, s1460919 Abstract: This article presents a method to extract
More informationVisual Tracking. Antonino Furnari. Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania
Visual Tracking Antonino Furnari Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania furnari@dmi.unict.it 11 giugno 2015 What is visual tracking? estimation
More informationObject Tracking using HOG and SVM
Object Tracking using HOG and SVM Siji Joseph #1, Arun Pradeep #2 Electronics and Communication Engineering Axis College of Engineering and Technology, Ambanoly, Thrissur, India Abstract Object detection
More informationPerformance Evaluation Metrics and Statistics for Positional Tracker Evaluation
Performance Evaluation Metrics and Statistics for Positional Tracker Evaluation Chris J. Needham and Roger D. Boyle School of Computing, The University of Leeds, Leeds, LS2 9JT, UK {chrisn,roger}@comp.leeds.ac.uk
More informationLEARNING TO GENERATE CHAIRS WITH CONVOLUTIONAL NEURAL NETWORKS
LEARNING TO GENERATE CHAIRS WITH CONVOLUTIONAL NEURAL NETWORKS Alexey Dosovitskiy, Jost Tobias Springenberg and Thomas Brox University of Freiburg Presented by: Shreyansh Daftry Visual Learning and Recognition
More informationLow Cost Motion Capture
Low Cost Motion Capture R. Budiman M. Bennamoun D.Q. Huynh School of Computer Science and Software Engineering The University of Western Australia Crawley WA 6009 AUSTRALIA Email: budimr01@tartarus.uwa.edu.au,
More informationDetecting and Identifying Moving Objects in Real-Time
Chapter 9 Detecting and Identifying Moving Objects in Real-Time For surveillance applications or for human-computer interaction, the automated real-time tracking of moving objects in images from a stationary
More informationLocal features: detection and description. Local invariant features
Local features: detection and description Local invariant features Detection of interest points Harris corner detection Scale invariant blob detection: LoG Description of local patches SIFT : Histograms
More informationSensor Modalities. Sensor modality: Different modalities:
Sensor Modalities Sensor modality: Sensors which measure same form of energy and process it in similar ways Modality refers to the raw input used by the sensors Different modalities: Sound Pressure Temperature
More informationRobust Model-Free Tracking of Non-Rigid Shape. Abstract
Robust Model-Free Tracking of Non-Rigid Shape Lorenzo Torresani Stanford University ltorresa@cs.stanford.edu Christoph Bregler New York University chris.bregler@nyu.edu New York University CS TR2003-840
More informationLast week. Multi-Frame Structure from Motion: Multi-View Stereo. Unknown camera viewpoints
Last week Multi-Frame Structure from Motion: Multi-View Stereo Unknown camera viewpoints Last week PCA Today Recognition Today Recognition Recognition problems What is it? Object detection Who is it? Recognizing
More informationDigital Image Processing
Digital Image Processing Traffic Congestion Analysis Dated: 28-11-2012 By: Romil Bansal (201207613) Ayush Datta (201203003) Rakesh Baddam (200930002) Abstract Traffic estimate from the static images is
More informationAdaptive Background Mixture Models for Real-Time Tracking
Adaptive Background Mixture Models for Real-Time Tracking Chris Stauffer and W.E.L Grimson CVPR 1998 Brendan Morris http://www.ee.unlv.edu/~b1morris/ecg782/ 2 Motivation Video monitoring and surveillance
More informationFace detection in a video sequence - a temporal approach
Face detection in a video sequence - a temporal approach K. Mikolajczyk R. Choudhury C. Schmid INRIA Rhône-Alpes GRAVIR-CNRS, 655 av. de l Europe, 38330 Montbonnot, France {Krystian.Mikolajczyk,Ragini.Choudhury,Cordelia.Schmid}@inrialpes.fr
More informationObserving people with multiple cameras
First Short Spring School on Surveillance (S 4 ) May 17-19, 2011 Modena,Italy Course Material Observing people with multiple cameras Andrea Cavallaro Queen Mary University, London (UK) Observing people
More informationVehicle Detection & Tracking
Vehicle Detection & Tracking Gagan Bansal Johns Hopkins University gagan@cis.jhu.edu Sandeep Mullur Johns Hopkins University sandeepmullur@gmail.com Abstract Object tracking is a field of intense research
More informationHuman Upper Body Pose Estimation in Static Images
1. Research Team Human Upper Body Pose Estimation in Static Images Project Leader: Graduate Students: Prof. Isaac Cohen, Computer Science Mun Wai Lee 2. Statement of Project Goals This goal of this project
More informationDetecting and Segmenting Humans in Crowded Scenes
Detecting and Segmenting Humans in Crowded Scenes Mikel D. Rodriguez University of Central Florida 4000 Central Florida Blvd Orlando, Florida, 32816 mikel@cs.ucf.edu Mubarak Shah University of Central
More informationINTERNATIONAL JOURNAL OF PURE AND APPLIED RESEARCH IN ENGINEERING AND TECHNOLOGY
INTERNATIONAL JOURNAL OF PURE AND APPLIED RESEARCH IN ENGINEERING AND TECHNOLOGY A PATH FOR HORIZING YOUR INNOVATIVE WORK A REVIEW ON ILLUMINATION COMPENSATION AND ILLUMINATION INVARIANT TRACKING METHODS
More informationFitting: The Hough transform
Fitting: The Hough transform Voting schemes Let each feature vote for all the models that are compatible with it Hopefully the noise features will not vote consistently for any single model Missing data
More informationMulti-Person Tracking-by-Detection based on Calibrated Multi-Camera Systems
Multi-Person Tracking-by-Detection based on Calibrated Multi-Camera Systems Xiaoyan Jiang, Erik Rodner, and Joachim Denzler Computer Vision Group Jena Friedrich Schiller University of Jena {xiaoyan.jiang,erik.rodner,joachim.denzler}@uni-jena.de
More informationScale Invariant Segment Detection and Tracking
Scale Invariant Segment Detection and Tracking Amaury Nègre 1, James L. Crowley 1, and Christian Laugier 1 INRIA, Grenoble, France firstname.lastname@inrialpes.fr Abstract. This paper presents a new feature
More informationVideo Surveillance System for Object Detection and Tracking Methods R.Aarthi, K.Kiruthikadevi
IJISET - International Journal of Innovative Science, Engineering & Technology, Vol. 2 Issue 11, November 2015. Video Surveillance System for Object Detection and Tracking Methods R.Aarthi, K.Kiruthikadevi
More informationA Bayesian Approach to Background Modeling
A Bayesian Approach to Background Modeling Oncel Tuzel Fatih Porikli Peter Meer CS Department & ECE Department Mitsubishi Electric Research Laboratories Rutgers University Cambridge, MA 02139 Piscataway,
More informationCs : Computer Vision Final Project Report
Cs 600.461: Computer Vision Final Project Report Giancarlo Troni gtroni@jhu.edu Raphael Sznitman sznitman@jhu.edu Abstract Given a Youtube video of a busy street intersection, our task is to detect, track,
More informationAnnouncements. Recognition. Recognition. Recognition. Recognition. Homework 3 is due May 18, 11:59 PM Reading: Computer Vision I CSE 152 Lecture 14
Announcements Computer Vision I CSE 152 Lecture 14 Homework 3 is due May 18, 11:59 PM Reading: Chapter 15: Learning to Classify Chapter 16: Classifying Images Chapter 17: Detecting Objects in Images Given
More informationDesigning Applications that See Lecture 7: Object Recognition
stanford hci group / cs377s Designing Applications that See Lecture 7: Object Recognition Dan Maynes-Aminzade 29 January 2008 Designing Applications that See http://cs377s.stanford.edu Reminders Pick up
More informationAn Adaptive Fusion Architecture for Target Tracking
An Adaptive Fusion Architecture for Target Tracking Gareth Loy, Luke Fletcher, Nicholas Apostoloff and Alexander Zelinsky Department of Systems Engineering Research School of Information Sciences and Engineering
More informationA Survey of Light Source Detection Methods
A Survey of Light Source Detection Methods Nathan Funk University of Alberta Mini-Project for CMPUT 603 November 30, 2003 Abstract This paper provides an overview of the most prominent techniques for light
More informationMotion Detection Algorithm
Volume 1, No. 12, February 2013 ISSN 2278-1080 The International Journal of Computer Science & Applications (TIJCSA) RESEARCH PAPER Available Online at http://www.journalofcomputerscience.com/ Motion Detection
More informationAUTONOMOUS IMAGE EXTRACTION AND SEGMENTATION OF IMAGE USING UAV S
AUTONOMOUS IMAGE EXTRACTION AND SEGMENTATION OF IMAGE USING UAV S Radha Krishna Rambola, Associate Professor, NMIMS University, India Akash Agrawal, Student at NMIMS University, India ABSTRACT Due to the
More informationProbabilistic Tracking in Joint Feature-Spatial Spaces
Probabilistic Tracking in Joint Feature-Spatial Spaces Ahmed Elgammal Department of Computer Science Rutgers University Piscataway, J elgammal@cs.rutgers.edu Ramani Duraiswami UMIACS University of Maryland
More informationComputer Vision 2. SS 18 Dr. Benjamin Guthier Professur für Bildverarbeitung. Computer Vision 2 Dr. Benjamin Guthier
Computer Vision 2 SS 18 Dr. Benjamin Guthier Professur für Bildverarbeitung Computer Vision 2 Dr. Benjamin Guthier 3. HIGH DYNAMIC RANGE Computer Vision 2 Dr. Benjamin Guthier Pixel Value Content of this
More informationAn Approach for Real Time Moving Object Extraction based on Edge Region Determination
An Approach for Real Time Moving Object Extraction based on Edge Region Determination Sabrina Hoque Tuli Department of Computer Science and Engineering, Chittagong University of Engineering and Technology,
More informationResearch Motivations
Intelligent Video Surveillance Stan Z. Li Center for Biometrics and Security Research (CBSR) & National Lab of Pattern Recognition (NLPR) Institute of Automation, Chinese Academy of Sciences ASI-07, Hong
More informationShape Descriptor using Polar Plot for Shape Recognition.
Shape Descriptor using Polar Plot for Shape Recognition. Brijesh Pillai ECE Graduate Student, Clemson University bpillai@clemson.edu Abstract : This paper presents my work on computing shape models that
More informationMobile Human Detection Systems based on Sliding Windows Approach-A Review
Mobile Human Detection Systems based on Sliding Windows Approach-A Review Seminar: Mobile Human detection systems Njieutcheu Tassi cedrique Rovile Department of Computer Engineering University of Heidelberg
More informationImage Segmentation. Shengnan Wang
Image Segmentation Shengnan Wang shengnan@cs.wisc.edu Contents I. Introduction to Segmentation II. Mean Shift Theory 1. What is Mean Shift? 2. Density Estimation Methods 3. Deriving the Mean Shift 4. Mean
More informationTarget Tracking Based on Mean Shift and KALMAN Filter with Kernel Histogram Filtering
Target Tracking Based on Mean Shift and KALMAN Filter with Kernel Histogram Filtering Sara Qazvini Abhari (Corresponding author) Faculty of Electrical, Computer and IT Engineering Islamic Azad University
More informationTextural Features for Image Database Retrieval
Textural Features for Image Database Retrieval Selim Aksoy and Robert M. Haralick Intelligent Systems Laboratory Department of Electrical Engineering University of Washington Seattle, WA 98195-2500 {aksoy,haralick}@@isl.ee.washington.edu
More informationMotion Estimation. There are three main types (or applications) of motion estimation:
Members: D91922016 朱威達 R93922010 林聖凱 R93922044 謝俊瑋 Motion Estimation There are three main types (or applications) of motion estimation: Parametric motion (image alignment) The main idea of parametric motion
More informationTracking Occluded Objects Using Kalman Filter and Color Information
Tracking Occluded Objects Using Kalman Filter and Color Information Malik M. Khan, Tayyab W. Awan, Intaek Kim, and Youngsung Soh Abstract Robust visual tracking is imperative to track multiple occluded
More informationTracking. Establish where an object is, other aspects of state, using time sequence Biggest problem -- Data Association
Tracking Establish where an object is, other aspects of state, using time sequence Biggest problem -- Data Association Key ideas Tracking by detection Tracking through flow Track by detection (simple form)
More informationScene Text Detection Using Machine Learning Classifiers
601 Scene Text Detection Using Machine Learning Classifiers Nafla C.N. 1, Sneha K. 2, Divya K.P. 3 1 (Department of CSE, RCET, Akkikkvu, Thrissur) 2 (Department of CSE, RCET, Akkikkvu, Thrissur) 3 (Department
More informationBSB663 Image Processing Pinar Duygulu. Slides are adapted from Selim Aksoy
BSB663 Image Processing Pinar Duygulu Slides are adapted from Selim Aksoy Image matching Image matching is a fundamental aspect of many problems in computer vision. Object or scene recognition Solving
More informationDetection and recognition of moving objects using statistical motion detection and Fourier descriptors
Detection and recognition of moving objects using statistical motion detection and Fourier descriptors Daniel Toth and Til Aach Institute for Signal Processing, University of Luebeck, Germany toth@isip.uni-luebeck.de
More informationBackground Subtraction Techniques
Background Subtraction Techniques Alan M. McIvor Reveal Ltd PO Box 128-221, Remuera, Auckland, New Zealand alan.mcivor@reveal.co.nz Abstract Background subtraction is a commonly used class of techniques
More informationMulti-Targets Tracking Based on Bipartite Graph Matching
BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 14, Special Issue Sofia 014 Print ISSN: 1311-970; Online ISSN: 1314-4081 DOI: 10.478/cait-014-0045 Multi-Targets Tracking Based
More informationPedestrian Detection Using Correlated Lidar and Image Data EECS442 Final Project Fall 2016
edestrian Detection Using Correlated Lidar and Image Data EECS442 Final roject Fall 2016 Samuel Rohrer University of Michigan rohrer@umich.edu Ian Lin University of Michigan tiannis@umich.edu Abstract
More informationFeature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies
Feature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies M. Lourakis, S. Tzurbakis, A. Argyros, S. Orphanoudakis Computer Vision and Robotics Lab (CVRL) Institute of
More informationA Statistical Consistency Check for the Space Carving Algorithm.
A Statistical Consistency Check for the Space Carving Algorithm. A. Broadhurst and R. Cipolla Dept. of Engineering, Univ. of Cambridge, Cambridge, CB2 1PZ aeb29 cipolla @eng.cam.ac.uk Abstract This paper
More informationProf. Fanny Ficuciello Robotics for Bioengineering Visual Servoing
Visual servoing vision allows a robotic system to obtain geometrical and qualitative information on the surrounding environment high level control motion planning (look-and-move visual grasping) low level
More informationLocal Feature Detectors
Local Feature Detectors Selim Aksoy Department of Computer Engineering Bilkent University saksoy@cs.bilkent.edu.tr Slides adapted from Cordelia Schmid and David Lowe, CVPR 2003 Tutorial, Matthew Brown,
More informationMoving Object Segmentation Method Based on Motion Information Classification by X-means and Spatial Region Segmentation
IJCSNS International Journal of Computer Science and Network Security, VOL.13 No.11, November 2013 1 Moving Object Segmentation Method Based on Motion Information Classification by X-means and Spatial
More informationDynamic Shape Tracking via Region Matching
Dynamic Shape Tracking via Region Matching Ganesh Sundaramoorthi Asst. Professor of EE and AMCS KAUST (Joint work with Yanchao Yang) The Problem: Shape Tracking Given: exact object segmentation in frame1
More informationBackground Initialization with A New Robust Statistical Approach
Background Initialization with A New Robust Statistical Approach Hanzi Wang and David Suter Institute for Vision System Engineering Department of. Electrical. and Computer Systems Engineering Monash University,
More informationReal Time Unattended Object Detection and Tracking Using MATLAB
Real Time Unattended Object Detection and Tracking Using MATLAB Sagar Sangale 1, Sandip Rahane 2 P.G. Student, Department of Electronics Engineering, Amrutvahini College of Engineering, Sangamner, Maharashtra,
More informationSIFT: SCALE INVARIANT FEATURE TRANSFORM SURF: SPEEDED UP ROBUST FEATURES BASHAR ALSADIK EOS DEPT. TOPMAP M13 3D GEOINFORMATION FROM IMAGES 2014
SIFT: SCALE INVARIANT FEATURE TRANSFORM SURF: SPEEDED UP ROBUST FEATURES BASHAR ALSADIK EOS DEPT. TOPMAP M13 3D GEOINFORMATION FROM IMAGES 2014 SIFT SIFT: Scale Invariant Feature Transform; transform image
More informationMean shift based object tracking with accurate centroid estimation and adaptive Kernel bandwidth
Mean shift based object tracking with accurate centroid estimation and adaptive Kernel bandwidth ShilpaWakode 1, Dr. Krishna Warhade 2, Dr. Vijay Wadhai 3, Dr. Nitin Choudhari 4 1234 Electronics department
More informationEnsemble Tracking. Abstract. 1 Introduction. 2 Background
Ensemble Tracking Shai Avidan Mitsubishi Electric Research Labs 201 Broadway Cambridge, MA 02139 avidan@merl.com Abstract We consider tracking as a binary classification problem, where an ensemble of weak
More information