Switching Kalman Filter-Based Approach for Tracking and Event Detection at Traffic Intersections
Harini Veeraraghavan    Paul Schrater    Nikolaos Papanikolopoulos
Department of Computer Science and Engineering, University of Minnesota

Abstract

Automatic event detection from video sequences has applications in several areas such as automatic visual surveillance, traffic monitoring for Intelligent Transportation Systems, key frame detection for video compression, and virtual reality applications. In this work, we present a computer vision-based approach for event detection and data collection at traffic intersections. Vision-based systems applied to monitoring outdoor scenes such as traffic intersections face several problems owing to the unconstrained nature of the environment: the tracker has to be robust to poor target registration resulting from varying lighting conditions, shadows, and occlusions caused by scene clutter. Further, the targets of interest at traffic intersections, namely vehicles, exhibit non-uniform motion, which renders any single motion model inadequate for describing a target's motion. We therefore present a multiple cue-based approach combined with a switching Kalman filter for robust estimation of targets. The results of tracking are used for detecting events such as turning, stopped, or slow moving vehicles, and for collecting statistics such as average speeds and vehicle accelerations in a scene.

Keywords: Event detection, switching Kalman filter, multiple cue-based tracking, cue combination.

1 Introduction

Trajectory-based event recognition is the basis of most automated surveillance and monitoring applications. The events of interest for a given application domain could be a collection of pre-specified events or unusual, rarely occurring events.
While the former just requires specification of the events of interest as domain-specific knowledge or models, the latter requires identification of unique events that do not conform to the standard pattern of learned activities in the scene. The events of interest are generally learned or specified using supervised or unsupervised learning methods. Detection and learning methods vary in complexity, ranging from direct specification of particular events as constraints in the scene [1], through specific motion models, to patterns of motion represented as hidden Markov models or Markov random fields [2, 3]. Other examples include methods such as linear vector quantization and locally linear embedding that learn a low-dimensional representation of the normal pattern of activities in the scene [4, 5, 6]. Learning of activities has also been cast as learning the parameters of a hidden Markov model [7, 8].

In this work, we use a simple representation of the events of interest in the scene. The events are based on the trajectory of the targets, such as turn directions and lane changes, and on motion characteristics, such as slow moving or stopped and speeding. The events are detected in conjunction with the motion models used for estimating the target's motion. The constraints for detecting turn directions are specified using relatively simple rules, as described in a later section of the paper.

The basic requirement for reliably detecting events using trajectory data is accurate estimation of the target trajectories. Using vision-based methods for tracking in outdoor environments makes good target registration difficult owing to the increased uncertainties in the scene. The uncertainties are introduced by occlusions arising between targets and the background, as well as between targets.
Occlusions can be particularly severe in scenes such as traffic intersections, where the traffic density increases around the intersection. For a system that uses motion-segmented foreground blobs as cues for a target, the problem is more severe owing to the increased ambiguity in uniquely identifying the target. Solutions to data association include probabilistic data association [9, 10], interacting multiple models, multiple hypothesis tracking approaches [11], and sampling approaches such as [12]. Illumination variations, particularly shadows, are another source of poor target registration, owing to the change in the target's appearance to
the viewing camera. In this work, we make use of two different cues, namely foreground blobs obtained by a motion segmentation method and the target color, for registering the targets. Using multiple cues helps obtain reliable target registration under a variety of conditions, thereby improving the accuracy of the tracking. To deal with occlusions in blob tracking, we use a joint probabilistic data association method as proposed by [10]. The uncertain motion of the targets, especially near intersections, makes a single model-based estimator unsuitable for accurate tracking. Methods for tracking maneuvering targets include multiple hypothesis approaches, interacting multiple models, switching state space models [13, 14], and mixtures of filters [15]. In this work, we use a mixture of Kalman filters for modeling the different motions exhibited by the targets, such as slow moving or stopping, turning, accelerating, and uniform velocity motion. The approach for tracking is depicted in Figure 1. Given the real-time constraints of the system, using a switching Kalman filter for each cue, especially coupled with data association, would be computationally intensive. Keeping the computational requirements in mind, we use a two-step approach. In the first step, the two measurements are incorporated sequentially into an extended Kalman filter as described in our previous work [16]. The results of this estimator, namely the state estimate and covariance, are then fed into a switching Kalman filter to obtain the final state estimate and covariance. Our approach differs from the previously mentioned approaches for tracking and event detection in that we make use of multiple simple vision cues simultaneously, in conjunction with a multiple model approach for estimating the position of the targets, while respecting the real-time constraints of tracking. The paper is organized as follows.
Section 2 discusses the details of the tracking method, namely the different tracking modalities used, the switching Kalman filter, and the method for cue combination. Section 3 presents the details of event detection. The results of the tracking method and event detection are presented in Section 4, while a discussion of the results and future work is in Section 5. Section 6 concludes the paper.

2 Tracking Approach

The basic tracking approach is outlined in Figure 1. The system makes use of two different cues, namely foreground blobs obtained through region segmentation, and the target color represented as a histogram. Region segmentation is performed using an adaptive background subtraction method based on [17]. The results of the blob tracking are used for initializing new targets as well as for position measurements of the targets. To deal with the ambiguities owing to occlusions, a joint probabilistic data association filter formulation is used for estimating the position of the targets. The target color distribution is initialized from the blob used for initializing a target, and is represented as a three-channel histogram. The color tracking method uses a mean shift procedure based on [18] for tracking the targets in the scene. Given that color is more descriptive of the target, the results of the color tracking procedure are used directly in an extended Kalman filter. Details of the blob and color tracking methods are described in our previous work [16].

Figure 1: Tracking approach (blob tracking feeds a joint probabilistic data association filter; color tracking feeds an extended Kalman filter; their combined output drives the switching Kalman filter and the event detection module).

The measurements from the two different tracking modalities are combined sequentially in the estimator using two different filter settings. In the case of blob tracking, an extended joint probabilistic data association filter is used.
The estimated positions are then updated using an extended Kalman filter based on the measurements from the color tracking procedure. The cues can be combined sequentially in our case owing to the independence of the cues and their measurement errors, as described in [19]. Since each measurement is incorporated according to its error, the cues are effectively weighted by their relative certainties. Details of the computation of the measurement errors are discussed in Section 2.1. Both of the above filters use a first-order, constant-velocity motion model. The resulting position estimate obtained after combining the two measurements is then passed to a switching Kalman filter. The switching filter makes use of three motion models of different orders, namely a zeroth-order (constant position), a first-order (constant velocity), and a second-order (constant acceleration) model. The motivation for choosing these models is that the motion
of vehicles in a scene can generally be described by one or a combination of these motion models. The advantage of the switching Kalman filter framework is that at any given time the motion ascribed to a vehicle is determined as a weighted combination of the three models, providing more flexibility to describe vehicle motion. The resulting target trajectory is then used by the event detection module, the details of which are in Section 3.

2.1 Measurement Errors

The measurements from the blob tracking method consist of the centers of the blobs in the image and the error in the measurement. A blob is a very abstract representation of a target: the only information that can be obtained from it is the approximate area of the target in the image and its position. The error in the position measurement obtained from blob segmentation depends on the accuracy of the segmentation itself and on the occluding entities present in the vicinity of the target. Currently, we quantify this error as one of two constant values, depending on whether a target was seen connected uniquely to a blob or was detected as occluded with other targets or connected to multiple blobs. This can be expressed as

    sigma = T1 if no occlusion, T2 otherwise,    (1)

where T1 and T2 are predefined errors such that T2 > T1. This simple scheme for assigning measurement errors works as long as occlusions can be detected reliably. Occlusions between tracked targets are easier to detect. On the other hand, detecting occlusions between a tracked target and an uninitialized target, or between a target and the background, is not very reliable when using just the blob area or the number of associated blobs as the occlusion measure. The procedure for locating targets using color is the mean shift tracking method proposed by [18], which uses a gradient descent procedure for locating the target.
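As a concrete illustration, the two-level blob error of Equation (1) and the sequential incorporation of the two cues described in Section 2 can be sketched as below. This is a minimal sketch, not the authors' implementation: the state layout, the numeric values of T1 and T2, the color-cue error, and all function names are assumptions.

```python
import numpy as np

T1, T2 = 2.0, 8.0  # hypothetical position errors, with T2 > T1 as in Eq. (1)

def blob_measurement_cov(occluded: bool) -> np.ndarray:
    """Two-level measurement covariance for the blob centroid (Eq. 1)."""
    sigma = T2 if occluded else T1
    return np.eye(2) * sigma ** 2

def kalman_update(x, P, z, H, R):
    """Standard Kalman measurement update for one cue."""
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ (z - H @ x)                 # corrected state
    P = (np.eye(len(x)) - K @ H) @ P        # corrected covariance
    return x, P

# Constant-velocity state [x, y, vx, vy]; both cues measure position only.
H = np.hstack([np.eye(2), np.zeros((2, 2))])
x, P = np.zeros(4), np.eye(4) * 10.0

z_blob, z_color = np.array([5.0, 3.0]), np.array([5.4, 2.8])
x, P = kalman_update(x, P, z_blob, H, blob_measurement_cov(occluded=False))
x, P = kalman_update(x, P, z_color, H, np.eye(2) * 1.0)  # color cue, own error
```

Because the cue errors are independent, folding the measurements in one after the other in this way is equivalent to a joint update, and each cue's influence automatically scales with its stated error.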
The mean shift procedure searches for the location in the image that minimizes the difference between the original target density q and the target density p computed at a given location. The kernel function used for locating the target candidates and the model is based on the Epanechnikov kernel. The target density at a new location z is computed as

    p(z) = C * sum_{i=1}^{N} f(||(z - x_i)/h||^2) K,    (2)

where C is the normalization constant, x_1, ..., x_N are the pixels used for target model computation, h is the bandwidth of the kernel function, which corresponds to the scale or dimensions of the target in the image, and K is the Kronecker delta function. From the kernel function it should be evident that the error in target localization is sensitive to the bandwidth of the chosen kernel. The procedure recomputes the bandwidth at every step based on [18]. The error in a measurement is computed using the sum-of-squared-differences metric as in [20]: it is the standard deviation of a scaled Gaussian centered at the measured target location. This error corresponds directly to the certainty of the target location, since the more peaked the distribution, the higher the confidence and the lower the error.

2.2 Switching Kalman Filter

Switching Kalman filters are switching state space models that maintain a discrete set of (hidden) states and switch between them or take a linear combination of them. In essence, they can be viewed as a hybrid of hidden Markov models and state space models. Hidden Markov models represent information from the past sequence through a discrete hidden state variable. State space models, on the other hand, represent the information from past observations through a real-valued state vector. A discrete switch variable allows switching between the different state space models, thereby allowing the different operating conditions of the system to be modeled.
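The kernel-weighted density of Equation (2) from Section 2.1 can be sketched as follows, assuming the Epanechnikov profile and a precomputed histogram bin index b(x_i) for each pixel; all names and the toy data are illustrative, not taken from the paper's code.

```python
import numpy as np

def epanechnikov_profile(r2):
    """k(r^2) for the Epanechnikov kernel, up to a normalizing constant."""
    return np.where(r2 < 1.0, 1.0 - r2, 0.0)

def target_density(z, pixels, bins, n_bins, h):
    """p(z): kernel-weighted color histogram at candidate location z.

    pixels: (N, 2) pixel coordinates; bins: (N,) histogram bin b(x_i)
    of each pixel. The Kronecker delta in Eq. (2) routes each pixel's
    kernel weight to its own color bin.
    """
    r2 = np.sum(((pixels - z) / h) ** 2, axis=1)
    w = epanechnikov_profile(r2)            # pixels near z count more
    p = np.zeros(n_bins)
    np.add.at(p, bins, w)                   # accumulate per-bin weights
    s = p.sum()
    return p / s if s > 0 else p            # C chosen so p sums to 1

# Toy example: four pixels around z = (0, 0), two color bins.
pixels = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.5], [5.0, 5.0]])
bins = np.array([0, 0, 1, 1])
p = target_density(np.zeros(2), pixels, bins, n_bins=2, h=1.0)
```

Pixels outside the kernel support (the far pixel above) contribute nothing, which is why the bandwidth h directly controls how much of the image influences the histogram match.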
To overcome the problem of exponential belief growth with time, operations such as collapsing, selection, and variational approximation are frequently used [21].

Figure 2: Moment matching of filters.

The standard Kalman filter recursions are performed in each of the filters, with the additional computation of an innovation likelihood term. The likelihood term corresponds to the likelihood that the current model at time t is j, given that the model at time t-1 was i. It is computed as

    L_t(i, j) = N(Delta_t; 0, C_t),    (3)
where Delta_t is the filter residual and C_t is the innovation covariance. The switch parameter is updated as

    E_t^{t-1,t}(i, j) = P(S_{t-1} = i, S_t = j | y_{1:t})
                      = L_t(i, j) Phi(i, j) E_{t-1}^{t-1}(i) / sum_{i,j} L_t(i, j) Phi(i, j) E_{t-1}^{t-1}(i),    (4)

    E_t^t(j) = sum_i E_t^{t-1,t}(i, j),    (5)

    W_{i|j} = P(S_{t-1} = i | S_t = j, y_{1:t}) = E_t^{t-1,t}(i, j) / E_t^t(j),    (6)

where i and j index the filter models, and Phi(i, j) is the conditional probability that the switch variable is j at time t given that it was i at time t-1. The collapse operation is the moment matching of Gaussian distributions:

    x_{t|t}(j) = sum_i x_{t|t}^i(j) W_{i|j},    (7)

    P_{t|t}(j) = sum_i [ P_{t|t}^i(j) + (x_{t|t}^i(j) - x_{t|t}(j))(x_{t|t}^i(j) - x_{t|t}(j))^T ] W_{i|j},    (8)

where the superscript i corresponds to the switch value at time t-1 and the argument j to the switch value at time t. The collapsing operation is depicted pictorially in Figure 2.

The targets are tracked in world coordinates. The outputs of the single-model Kalman filters used for incorporating the two measurements serve as inputs to the filters in the switching framework, and the estimated state covariance from the single-model filter is used as the measurement error for these filters. The filters used in the switching framework are: constant position, with state [x y], where x and y are the position of the target in the scene; constant velocity, with state [x y ẋ ẏ], where ẋ and ẏ are the target velocities; and constant acceleration, with state [x y ẋ ẏ ẍ ÿ], where ẍ and ÿ are the target accelerations.

3 Event Detection

The input to the event detection module consists of the target's trajectory and the filter mode probabilities obtained from the switching Kalman filter. As mentioned earlier, all the events of interest are specified using simple domain rules, discussed in Section 3.1 and Section 3.2.
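The switch-probability update (Equations 4-6) and the moment-matching collapse (Equations 7-8) from Section 2.2 can be sketched as below. This shows only the probability bookkeeping: in the full filter each entry of L comes from Equation (3) for the filter pair (i, j), and the transition matrix Phi shown here is a hypothetical choice, not the paper's values.

```python
import numpy as np

# Hypothetical switch-transition matrix Phi(i, j) for the three motion
# models (constant position / constant velocity / constant acceleration).
Phi = np.array([[0.90, 0.05, 0.05],
                [0.05, 0.90, 0.05],
                [0.05, 0.05, 0.90]])

def switch_update(E_prev, L):
    """Eqs. 4-6: update mode probabilities from likelihoods L[i, j]."""
    joint = L * Phi * E_prev[:, None]   # unnormalized P(S_{t-1}=i, S_t=j | y)
    joint = joint / joint.sum()         # Eq. 4: normalize over all (i, j)
    E = joint.sum(axis=0)               # Eq. 5: new mode probabilities
    W = joint / E[None, :]              # Eq. 6: P(S_{t-1}=i | S_t=j, y)
    return E, W

def collapse(means, covs, weights):
    """Eqs. 7-8: moment-match a Gaussian mixture to a single Gaussian."""
    x = sum(w * m for w, m in zip(weights, means))
    P = sum(w * (C + np.outer(m - x, m - x))
            for w, m, C in zip(weights, means, covs))
    return x, P
```

For each current mode j, `switch_update` yields the weights W[:, j] with which the filter outputs conditioned on the previous mode i are merged by `collapse`, which is the operation depicted in Figure 2.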
The events of interest consist of the following: target-trajectory based (turning, lane changes) and motion based (slow moving or stopped, speeding vehicles).

Figure 3: Turn computation from positions sampled at times t, t+w, and t+2w.

A target's motion along a path is characterized into one of the following modes: (a) right turn, (b) left turn, (c) uniform motion (motion with constant velocity), (d) accelerating or decelerating, (e) velocity over the speeding threshold, and (f) slow moving or stopped. At any instant, the estimates of a target's motion are used to classify it into one of these modes. The modes are then collected as runs along the entire sequence of the target's trajectory; an example run is shown in Figure 4. From these runs, the motion of the target during a certain time interval can be extracted. For instance, in Figure 4, the most significant motion during the interval 0 to 30 frames is acceleration, indicated by A. To discount noise in the position estimates, we take samples of a target's motion separated by a fixed time period. Further, for turn detection the samples of the position trajectory must be separated by at least a minimum time interval δt. The details of the different event detections are discussed in Section 3.1 and Section 3.2.

Figure 4: Event representation using runs (e.g., SSSSAAAAAAAAAAAAAAAAAAUUUUU). The letters S, A, and U correspond to the events stopped or slow moving, uniform acceleration, and uniform velocity, respectively.

3.1 Trajectory-based event detection

The events detected based on the trajectory of the target are turns (right and left) and lane changes. Lane changes are treated similarly to turns, with the difference that a lane change is a shorter turn than an actual turn; hence the main difference between a lane change and a turn is the length of the turn run.
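The run representation of Figure 4 amounts to run-length encoding of the per-frame mode labels; a minimal sketch (the function names and the choice of plurality vote for the dominant mode are ours):

```python
from collections import Counter
from itertools import groupby

def runs(modes):
    """Collapse per-frame mode labels into (mode, length) runs."""
    return [(m, len(list(g))) for m, g in groupby(modes)]

def dominant_mode(modes, start, end):
    """Most significant motion within a frame interval (plurality vote)."""
    return Counter(modes[start:end]).most_common(1)[0][0]

# The trace of Figure 4: stopped (S), then accelerating (A), then uniform (U).
trace = "SSSSAAAAAAAAAAAAAAAAAAUUUUU"
```

Here `runs(trace)` recovers the three segments of Figure 4, and `dominant_mode(trace, 0, 30)` reports acceleration as the most significant motion in the first 30 frames, matching the example in the text.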
The basic method for computing the turn direction is depicted in Figure 3. As shown in the figure, the turn direction is computed from the resultant of the vectors connecting the positions sampled at the time intervals t, t+w, and t+2w. The direction of the resultant vector corresponds to the direction of motion, and the angle of the resultant vector has a specific relation to the turn direction, as shown in Figure 5. This can be expressed as follows:

    left,     if a > lambda_1 and a < lambda_2,
    right,    if a > lambda_2 and a < pi - lambda_1,    (9)
    straight, otherwise,

where a is the angle of the resultant vector, lambda_1 is the threshold angle below which the motion is along a straight path, and lambda_2 is the threshold separating the motions in the right and left directions. This is illustrated in Figure 5.

Figure 5: Direction of motion related to the resultant vector direction (left turn vs. right turn).

3.2 Motion-based event detection

These events are detected based on motion parameters such as the velocity and acceleration of the target. The detected events include slow moving or stopped, moving at speeds over the pre-specified speed limit, acceleration, and uniform motion. Vehicles moving at speeds over the speed limit are detected simply by comparing their mean speed during a fixed interval with the speed threshold. Slow moving or stopped motion is reported whenever the filter corresponding to constant position receives the highest probability. Similarly, acceleration and uniform motion are obtained from the corresponding filters for vehicles detected to be moving along a straight path.

Figure 7: Right turning sequence.

Figure 7 shows the result for a right turning vehicle. The stopped or slow moving portions correspond to the vehicle waiting for pedestrians before completing its right turn. The results are in world coordinates, expressed in cm. The events are overlaid on the target's trajectory.
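The turn rule of Equation (9) can be sketched as follows. One simplification is made: instead of thresholding the resultant-vector angle directly, this sketch thresholds the signed heading change between the two sampled segments, and the single straight-motion threshold is a hypothetical value, not the paper's.

```python
import math

LAM1 = math.radians(15)  # heading changes below this count as straight

def turn_direction(p0, p1, p2):
    """Classify motion from positions sampled at times t, t+w, t+2w."""
    a1 = math.atan2(p1[1] - p0[1], p1[0] - p0[0])      # heading of segment 1
    a2 = math.atan2(p2[1] - p1[1], p2[0] - p1[0])      # heading of segment 2
    d = (a2 - a1 + math.pi) % (2 * math.pi) - math.pi  # wrap to (-pi, pi]
    if abs(d) < LAM1:
        return "straight"
    return "left" if d > 0 else "right"  # counterclockwise-positive axes
```

Sampling the three positions at least w frames apart, as the text requires, keeps the heading estimates from being dominated by position noise.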
The distribution of the events over different periods of time is shown in Figure 8. Each successive plot corresponds to the distribution of events at half the time scale of the previous plot.

4 Results

Figure 6 shows an example of tracking in a fairly crowded scene. As shown, most of the stopped targets continue to be tracked reliably despite the occlusions and the absence of any information from the blob tracker. The contours around the vehicles correspond to the blobs returned by the blob tracker.

Figure 9: Speeding event detection.
Figure 6: Lane change sequence ((a) Frame 2730, (b) Frame 2814, (c) Frame 2847).

Figure 9 shows the trajectory with the detected events for a vehicle moving at speeds higher than the user-defined speed limit (34 mph in our experiments). Figure 10 shows the speed and acceleration of the vehicle over time. The acceleration in Figure 10 remains constant after some time because higher probability is placed on the constant velocity model (as shown by the corresponding uniform velocity event detected in the trajectory), as a result of which the acceleration is updated by only very small amounts.

Missing observations occur for the blob tracker particularly when a target stops in the scene for a long period of time, since the stationary target is gradually modeled into the background by the background update algorithm. For targets with a distinct color distribution, the mean shift tracker continues to produce reasonably good observations. However, for targets with colors resembling the background (for example, a dark colored target moving through a shadowed portion of the scene), or for multiple targets with similar coloration, the mean shift tracker is no longer reliable. Another source of poor target registration from mean shift tracking is incorrect scale or bandwidth selection in the kernel function used for localizing the target; this corresponds to the term h in Equation (2). The scale is recomputed at each step based on the previous scale and the level of uncertainty in the current target registration. Hence, a large uncertainty in the observation can cause the bandwidth to increase or decrease so as to use more or less of the image for locating the target, which in turn can introduce more errors in the computation of the target histogram match. Another source of poor performance is the false classification of background artifacts as targets.
Falsely identified targets can disrupt the accurate tracking of true targets through the ghost occlusions the system detects between them. False target identification occurs as a result of incorrect segmentation under a sudden illumination change or camera shaking, or when multiple targets are identified as a single target. The latter occurs when multiple targets enter the scene together (occluded) and are segmented as a single blob. The percentage of correctly tracked vehicles is about 88%. The tracker was tested on two different traffic intersection image sequences, each approximately 20 minutes long. Target registration can be improved by incorporating more cues or multiple cameras to increase the reliability of the viewed target.

Figure 10: Speeding and acceleration plots.

Figure 8: Event distribution over varying time periods for a right turning sequence.

5 Discussion and Future Work

While the addition of multiple cues and multiple motion models increases the reliability of tracking, the tracker is still prone to errors, particularly in cases of very poor visibility to the viewing camera such as poor resolution of distant targets, persistent occlusions, and missing or bad observations. The cues are combined based on the measurement error provided by each individual tracking module. The measurement errors currently used, especially for the blob tracker, are based on values that were chosen by trial and error. Future work involves learning these error covariances from tracked data.

The event detection rules are fairly simple, and events are detected on the fly based on the mode probabilities of the switching filters and a few simple thresholds. Hence, event detection can be performed in real time without additional intensive computation. The performance of the event detection system is constrained by the quality of the target state estimator: poor state estimates arising from increased ambiguity in target registration caused occasional misclassification of events. The percentage of falsely classified vehicle modes was about 3%. Most of the falsely identified modes were due to misclassification of the initial portions of left and right turns. Currently, the detected events are simple and based on simple rules. For example, the system makes no distinction between a target stopped at an intersection and a target stopped at a non-stopping portion of the scene, although the latter is the more interesting event. Similarly, while the system can detect turns, it cannot detect more complex events such as U-turns. Future work in event detection involves developing more complex rules or learning mechanisms for detecting such events.

6 Conclusions

This paper presented a method for automatic event detection for outdoor traffic applications. Event detection is based on the results of a switching Kalman filter modeling different target motions, together with simple rules suitable for deployment in a real-time application such as traffic monitoring.
The paper also presented a method for robust target tracking and estimation based on the combination of multiple cues, namely motion-segmented blobs and target color, combined with the switching Kalman filter. We also showed the results obtained for a data collection system on two different traffic scenes.

7 Acknowledgements

This work has been supported in part by the Minnesota Department of Transportation and the ITS Institute at the University of Minnesota.

References

[1] G. Medioni, I. Cohen, F. Bremond, S. Hongeng, and R. Nevatia. Event detection and analysis from video streams. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(8), August.
[2] S. Kamijo, Y. Matsushita, K. Ikeuchi, and M. Sakauchi. Occlusion robust tracking utilizing spatio-temporal Markov random field model. In International Conference on Pattern Recognition, volume 1, September.
[3] S. Kamijo, Y. Matsushita, K. Ikeuchi, and M. Sakauchi. Traffic monitoring and accident detection at intersections. IEEE Transactions on Intelligent Transportation Systems, 1(2), June.
[4] C. Stauffer. Automatic hierarchical classification using time-based co-occurrences. In Proc. Computer Vision and Pattern Recognition Conf., volume 2.
[5] L. Lee, R. Romano, and G. Stein. Monitoring activities from multiple video streams: establishing a common coordinate frame. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8), August.
[6] H. Zhong, J. Shi, and M. Visontai. Detecting unusual activities in video. To appear in Proc. Computer Vision and Pattern Recognition Conf.
[7] M. Brand and V. Kettnaker. Discovery and segmentation of activities in video. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8), August.
[8] M. Brand, N. Oliver, and A. Pentland. Coupled hidden Markov models for complex action recognition. In Proc. Computer Vision and Pattern Recognition Conf., June.
[9] C. Rasmussen and G.D. Hager. Probabilistic data association methods for tracking complex visual objects. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(6).
[10] Y. Bar-Shalom and T.E. Fortmann. Tracking and Data Association. Academic Press.
[11] T. Cham and J.M. Rehg. A multiple hypothesis approach to figure tracking. In Proc. Computer Vision and Pattern Recognition Conf. (CVPR 99), June.
[12] M. Isard and A. Blake. Condensation: conditional density propagation for visual tracking. International Journal of Computer Vision, 29(1):5-28.
[13] V. Pavlović, J.M. Rehg, and J. MacCormick. Learning switching linear models of human motion. In Neural Information Processing Systems.
[14] Z. Ghahramani and G.E. Hinton. Switching state-space models. Neural Computation, 12(4), April.
[15] R. Chen and J.S. Liu. Mixture Kalman filters. Journal of the Royal Statistical Society, Series B (Statistical Methodology), 62(Part 3).
[16] H. Veeraraghavan and N.P. Papanikolopoulos. Combining multiple tracking modalities for vehicle tracking in traffic intersections. In IEEE Conf. on Robotics and Automation.
[17] C. Stauffer and W.E.L. Grimson. Adaptive background mixture models for real-time tracking. In Proc. Computer Vision and Pattern Recognition Conf. (CVPR 99), June.
[18] D. Comaniciu, V. Ramesh, and P. Meer. Kernel-based object tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence, volume 25.
[19] Y. Bar-Shalom, X. Rong Li, and T. Kirubarajan. Estimation with Applications to Tracking and Navigation. John Wiley and Sons.
[20] K. Nickels and S. Hutchinson. Estimating uncertainty in SSD-based feature tracking. Image and Vision Computing, 20(1):47-58, January.
[21] K.P. Murphy. Switching Kalman filters. Technical report, U.C. Berkeley.
Published in the Proceedings of the 13th Mediterranean Conference on Control and Automation, Limassol, Cyprus, June 27-29, 2005 (WeA06-1).
More informationINTERNATIONAL JOURNAL OF PURE AND APPLIED RESEARCH IN ENGINEERING AND TECHNOLOGY
INTERNATIONAL JOURNAL OF PURE AND APPLIED RESEARCH IN ENGINEERING AND TECHNOLOGY A PATH FOR HORIZING YOUR INNOVATIVE WORK A REVIEW ON ILLUMINATION COMPENSATION AND ILLUMINATION INVARIANT TRACKING METHODS
More informationA Street Scene Surveillance System for Moving Object Detection, Tracking and Classification
A Street Scene Surveillance System for Moving Object Detection, Tracking and Classification Huei-Yung Lin * and Juang-Yu Wei Department of Electrical Engineering National Chung Cheng University Chia-Yi
More informationFace Tracking. Synonyms. Definition. Main Body Text. Amit K. Roy-Chowdhury and Yilei Xu. Facial Motion Estimation
Face Tracking Amit K. Roy-Chowdhury and Yilei Xu Department of Electrical Engineering, University of California, Riverside, CA 92521, USA {amitrc,yxu}@ee.ucr.edu Synonyms Facial Motion Estimation Definition
More informationMoving Shadow Detection with Low- and Mid-Level Reasoning
Moving Shadow Detection with Low- and Mid-Level Reasoning Ajay J. Joshi, Stefan Atev, Osama Masoud, and Nikolaos Papanikolopoulos Dept. of Computer Science and Engineering, University of Minnesota Twin
More informationSegmentation and Tracking of Partial Planar Templates
Segmentation and Tracking of Partial Planar Templates Abdelsalam Masoud William Hoff Colorado School of Mines Colorado School of Mines Golden, CO 800 Golden, CO 800 amasoud@mines.edu whoff@mines.edu Abstract
More informationVisual Tracking. Antonino Furnari. Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania
Visual Tracking Antonino Furnari Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania furnari@dmi.unict.it 11 giugno 2015 What is visual tracking? estimation
More informationAutomatic Tracking of Moving Objects in Video for Surveillance Applications
Automatic Tracking of Moving Objects in Video for Surveillance Applications Manjunath Narayana Committee: Dr. Donna Haverkamp (Chair) Dr. Arvin Agah Dr. James Miller Department of Electrical Engineering
More informationA MIXTURE OF DISTRIBUTIONS BACKGROUND MODEL FOR TRAFFIC VIDEO SURVEILLANCE
PERIODICA POLYTECHNICA SER. TRANSP. ENG. VOL. 34, NO. 1 2, PP. 109 117 (2006) A MIXTURE OF DISTRIBUTIONS BACKGROUND MODEL FOR TRAFFIC VIDEO SURVEILLANCE Tamás BÉCSI and Tamás PÉTER Department of Control
More informationChapter 9 Object Tracking an Overview
Chapter 9 Object Tracking an Overview The output of the background subtraction algorithm, described in the previous chapter, is a classification (segmentation) of pixels into foreground pixels (those belonging
More informationLearning Switching Linear Models of Human Motion
To appear in Neural Information Processing Systems 13, MIT Press, Cambridge, MA Learning Switching Linear Models of Human Motion Vladimir Pavlović and James M. Rehg Compaq - Cambridge Research Lab Cambridge,
More informationBackground Subtraction in Video using Bayesian Learning with Motion Information Suman K. Mitra DA-IICT, Gandhinagar
Background Subtraction in Video using Bayesian Learning with Motion Information Suman K. Mitra DA-IICT, Gandhinagar suman_mitra@daiict.ac.in 1 Bayesian Learning Given a model and some observations, the
More informationROBUST OBJECT TRACKING BY SIMULTANEOUS GENERATION OF AN OBJECT MODEL
ROBUST OBJECT TRACKING BY SIMULTANEOUS GENERATION OF AN OBJECT MODEL Maria Sagrebin, Daniel Caparròs Lorca, Daniel Stroh, Josef Pauli Fakultät für Ingenieurwissenschaften Abteilung für Informatik und Angewandte
More informationDefinition, Detection, and Evaluation of Meeting Events in Airport Surveillance Videos
Definition, Detection, and Evaluation of Meeting Events in Airport Surveillance Videos Sung Chun Lee, Chang Huang, and Ram Nevatia University of Southern California, Los Angeles, CA 90089, USA sungchun@usc.edu,
More informationMotion Tracking and Event Understanding in Video Sequences
Motion Tracking and Event Understanding in Video Sequences Isaac Cohen Elaine Kang, Jinman Kang Institute for Robotics and Intelligent Systems University of Southern California Los Angeles, CA Objectives!
More informationHuman Motion Detection and Tracking for Video Surveillance
Human Motion Detection and Tracking for Video Surveillance Prithviraj Banerjee and Somnath Sengupta Department of Electronics and Electrical Communication Engineering Indian Institute of Technology, Kharagpur,
More informationFace Tracking : An implementation of the Kanade-Lucas-Tomasi Tracking algorithm
Face Tracking : An implementation of the Kanade-Lucas-Tomasi Tracking algorithm Dirk W. Wagener, Ben Herbst Department of Applied Mathematics, University of Stellenbosch, Private Bag X1, Matieland 762,
More informationVehicle Detection & Tracking
Vehicle Detection & Tracking Gagan Bansal Johns Hopkins University gagan@cis.jhu.edu Sandeep Mullur Johns Hopkins University sandeepmullur@gmail.com Abstract Object tracking is a field of intense research
More informationVisual Motion Analysis and Tracking Part II
Visual Motion Analysis and Tracking Part II David J Fleet and Allan D Jepson CIAR NCAP Summer School July 12-16, 16, 2005 Outline Optical Flow and Tracking: Optical flow estimation (robust, iterative refinement,
More informationMixture Models and EM
Mixture Models and EM Goal: Introduction to probabilistic mixture models and the expectationmaximization (EM) algorithm. Motivation: simultaneous fitting of multiple model instances unsupervised clustering
More informationMulti-View Face Tracking with Factorial and Switching HMM
Multi-View Face Tracking with Factorial and Switching HMM Peng Wang, Qiang Ji Department of Electrical, Computer and System Engineering Rensselaer Polytechnic Institute Troy, NY 12180 Abstract Dynamic
More informationFace Recognition Using Vector Quantization Histogram and Support Vector Machine Classifier Rong-sheng LI, Fei-fei LEE *, Yan YAN and Qiu CHEN
2016 International Conference on Artificial Intelligence: Techniques and Applications (AITA 2016) ISBN: 978-1-60595-389-2 Face Recognition Using Vector Quantization Histogram and Support Vector Machine
More informationSupport Vector Machine-Based Human Behavior Classification in Crowd through Projection and Star Skeletonization
Journal of Computer Science 6 (9): 1008-1013, 2010 ISSN 1549-3636 2010 Science Publications Support Vector Machine-Based Human Behavior Classification in Crowd through Projection and Star Skeletonization
More informationVideo Alignment. Literature Survey. Spring 2005 Prof. Brian Evans Multidimensional Digital Signal Processing Project The University of Texas at Austin
Literature Survey Spring 2005 Prof. Brian Evans Multidimensional Digital Signal Processing Project The University of Texas at Austin Omer Shakil Abstract This literature survey compares various methods
More informationEnsemble Tracking. Abstract. 1 Introduction. 2 Background
Ensemble Tracking Shai Avidan Mitsubishi Electric Research Labs 201 Broadway Cambridge, MA 02139 avidan@merl.com Abstract We consider tracking as a binary classification problem, where an ensemble of weak
More informationVideo Surveillance System for Object Detection and Tracking Methods R.Aarthi, K.Kiruthikadevi
IJISET - International Journal of Innovative Science, Engineering & Technology, Vol. 2 Issue 11, November 2015. Video Surveillance System for Object Detection and Tracking Methods R.Aarthi, K.Kiruthikadevi
More informationShape Descriptor using Polar Plot for Shape Recognition.
Shape Descriptor using Polar Plot for Shape Recognition. Brijesh Pillai ECE Graduate Student, Clemson University bpillai@clemson.edu Abstract : This paper presents my work on computing shape models that
More informationVideo Alignment. Final Report. Spring 2005 Prof. Brian Evans Multidimensional Digital Signal Processing Project The University of Texas at Austin
Final Report Spring 2005 Prof. Brian Evans Multidimensional Digital Signal Processing Project The University of Texas at Austin Omer Shakil Abstract This report describes a method to align two videos.
More informationFusion of Radar and EO-sensors for Surveillance
of Radar and EO-sensors for Surveillance L.J.H.M. Kester, A. Theil TNO Physics and Electronics Laboratory P.O. Box 96864, 2509 JG The Hague, The Netherlands kester@fel.tno.nl, theil@fel.tno.nl Abstract
More informationTarget Tracking Based on Mean Shift and KALMAN Filter with Kernel Histogram Filtering
Target Tracking Based on Mean Shift and KALMAN Filter with Kernel Histogram Filtering Sara Qazvini Abhari (Corresponding author) Faculty of Electrical, Computer and IT Engineering Islamic Azad University
More informationAn Approach for Real Time Moving Object Extraction based on Edge Region Determination
An Approach for Real Time Moving Object Extraction based on Edge Region Determination Sabrina Hoque Tuli Department of Computer Science and Engineering, Chittagong University of Engineering and Technology,
More informationA Bayesian Approach to Background Modeling
A Bayesian Approach to Background Modeling Oncel Tuzel Fatih Porikli Peter Meer CS Department & ECE Department Mitsubishi Electric Research Laboratories Rutgers University Cambridge, MA 02139 Piscataway,
More informationChapter 3 Image Registration. Chapter 3 Image Registration
Chapter 3 Image Registration Distributed Algorithms for Introduction (1) Definition: Image Registration Input: 2 images of the same scene but taken from different perspectives Goal: Identify transformation
More informationFace Detection and Recognition in an Image Sequence using Eigenedginess
Face Detection and Recognition in an Image Sequence using Eigenedginess B S Venkatesh, S Palanivel and B Yegnanarayana Department of Computer Science and Engineering. Indian Institute of Technology, Madras
More informationhuman vision: grouping k-means clustering graph-theoretic clustering Hough transform line fitting RANSAC
COS 429: COMPUTER VISON Segmentation human vision: grouping k-means clustering graph-theoretic clustering Hough transform line fitting RANSAC Reading: Chapters 14, 15 Some of the slides are credited to:
More informationAutonomous Navigation for Flying Robots
Computer Vision Group Prof. Daniel Cremers Autonomous Navigation for Flying Robots Lecture 7.1: 2D Motion Estimation in Images Jürgen Sturm Technische Universität München 3D to 2D Perspective Projections
More informationAutomatic visual recognition for metro surveillance
Automatic visual recognition for metro surveillance F. Cupillard, M. Thonnat, F. Brémond Orion Research Group, INRIA, Sophia Antipolis, France Abstract We propose in this paper an approach for recognizing
More informationRobust tracking for processing of videos of communication s gestures
Robust tracking for processing of videos of communication s gestures Frédérick Gianni, Christophe Collet, and Patrice Dalle Institut de Recherche en Informatique de Toulouse, Université Paul Sabatier,
More informationDet De e t cting abnormal event n s Jaechul Kim
Detecting abnormal events Jaechul Kim Purpose Introduce general methodologies used in abnormality detection Deal with technical details of selected papers Abnormal events Easy to verify, but hard to describe
More informationSimultaneous Appearance Modeling and Segmentation for Matching People under Occlusion
Simultaneous Appearance Modeling and Segmentation for Matching People under Occlusion Zhe Lin, Larry S. Davis, David Doermann, and Daniel DeMenthon Institute for Advanced Computer Studies University of
More informationMulti-Cue Based Visual Tracking in Clutter Scenes with Occlusions
2009 Advanced Video and Signal Based Surveillance Multi-Cue Based Visual Tracking in Clutter Scenes with Occlusions Jie Yu, Dirk Farin, Hartmut S. Loos Corporate Research Advance Engineering Multimedia
More informationMulti-Camera Occlusion and Sudden-Appearance-Change Detection Using Hidden Markovian Chains
1 Multi-Camera Occlusion and Sudden-Appearance-Change Detection Using Hidden Markovian Chains Xudong Ma Pattern Technology Lab LLC, U.S.A. Email: xma@ieee.org arxiv:1610.09520v1 [cs.cv] 29 Oct 2016 Abstract
More informationVisual Saliency Based Object Tracking
Visual Saliency Based Object Tracking Geng Zhang 1,ZejianYuan 1, Nanning Zheng 1, Xingdong Sheng 1,andTieLiu 2 1 Institution of Artificial Intelligence and Robotics, Xi an Jiaotong University, China {gzhang,
More informationProbabilistic Detection and Tracking of Motion Discontinuities
Probabilistic Detection and Tracking of Motion Discontinuities Michael J. Black David J. Fleet Xerox Palo Alto Research Center 3333 Coyote Hill Road Palo Alto, CA 94304 black,fleet @parc.xerox.com http://www.parc.xerox.com/
More informationPROBLEM FORMULATION AND RESEARCH METHODOLOGY
PROBLEM FORMULATION AND RESEARCH METHODOLOGY ON THE SOFT COMPUTING BASED APPROACHES FOR OBJECT DETECTION AND TRACKING IN VIDEOS CHAPTER 3 PROBLEM FORMULATION AND RESEARCH METHODOLOGY The foregoing chapter
More informationA Spatio-Spectral Algorithm for Robust and Scalable Object Tracking in Videos
A Spatio-Spectral Algorithm for Robust and Scalable Object Tracking in Videos Alireza Tavakkoli 1, Mircea Nicolescu 2 and George Bebis 2,3 1 Computer Science Department, University of Houston-Victoria,
More informationTracking Occluded Objects Using Kalman Filter and Color Information
Tracking Occluded Objects Using Kalman Filter and Color Information Malik M. Khan, Tayyab W. Awan, Intaek Kim, and Youngsung Soh Abstract Robust visual tracking is imperative to track multiple occluded
More informationRange Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation
Obviously, this is a very slow process and not suitable for dynamic scenes. To speed things up, we can use a laser that projects a vertical line of light onto the scene. This laser rotates around its vertical
More informationTri-modal Human Body Segmentation
Tri-modal Human Body Segmentation Master of Science Thesis Cristina Palmero Cantariño Advisor: Sergio Escalera Guerrero February 6, 2014 Outline 1 Introduction 2 Tri-modal dataset 3 Proposed baseline 4
More informationBackpack: Detection of People Carrying Objects Using Silhouettes
Backpack: Detection of People Carrying Objects Using Silhouettes Ismail Haritaoglu, Ross Cutler, David Harwood and Larry S. Davis Computer Vision Laboratory University of Maryland, College Park, MD 2742
More informationPeople Tracking and Segmentation Using Efficient Shape Sequences Matching
People Tracking and Segmentation Using Efficient Shape Sequences Matching Junqiu Wang, Yasushi Yagi, and Yasushi Makihara The Institute of Scientific and Industrial Research, Osaka University 8-1 Mihogaoka,
More informationCS 664 Segmentation. Daniel Huttenlocher
CS 664 Segmentation Daniel Huttenlocher Grouping Perceptual Organization Structural relationships between tokens Parallelism, symmetry, alignment Similarity of token properties Often strong psychophysical
More informationMultiple camera fusion based on DSmT for tracking objects on ground plane
Multiple camera fusion based on DSmT for tracking objects on ground plane Esteban Garcia and Leopoldo Altamirano National Institute for Astrophysics, Optics and Electronics Puebla, Mexico eomargr@inaoep.mx,
More informationIdle Object Detection in Video for Banking ATM Applications
Research Journal of Applied Sciences, Engineering and Technology 4(24): 5350-5356, 2012 ISSN: 2040-7467 Maxwell Scientific Organization, 2012 Submitted: March 18, 2012 Accepted: April 06, 2012 Published:
More informationSpatio-Temporal Stereo Disparity Integration
Spatio-Temporal Stereo Disparity Integration Sandino Morales and Reinhard Klette The.enpeda.. Project, The University of Auckland Tamaki Innovation Campus, Auckland, New Zealand pmor085@aucklanduni.ac.nz
More informationProbabilistic Tracking and Reconstruction of 3D Human Motion in Monocular Video Sequences
Probabilistic Tracking and Reconstruction of 3D Human Motion in Monocular Video Sequences Presentation of the thesis work of: Hedvig Sidenbladh, KTH Thesis opponent: Prof. Bill Freeman, MIT Thesis supervisors
More informationMoving Object Detection for Video Surveillance
International OPEN ACCESS Journal Of Modern Engineering Research (IJMER) Moving Object Detection for Video Surveillance Abhilash K.Sonara 1, Pinky J. Brahmbhatt 2 1 Student (ME-CSE), Electronics and Communication,
More informationA NOVEL MOTION DETECTION METHOD USING BACKGROUND SUBTRACTION MODIFYING TEMPORAL AVERAGING METHOD
International Journal of Computer Engineering and Applications, Volume XI, Issue IV, April 17, www.ijcea.com ISSN 2321-3469 A NOVEL MOTION DETECTION METHOD USING BACKGROUND SUBTRACTION MODIFYING TEMPORAL
More informationDetecting and Identifying Moving Objects in Real-Time
Chapter 9 Detecting and Identifying Moving Objects in Real-Time For surveillance applications or for human-computer interaction, the automated real-time tracking of moving objects in images from a stationary
More informationClassification of objects from Video Data (Group 30)
Classification of objects from Video Data (Group 30) Sheallika Singh 12665 Vibhuti Mahajan 12792 Aahitagni Mukherjee 12001 M Arvind 12385 1 Motivation Video surveillance has been employed for a long time
More informationФУНДАМЕНТАЛЬНЫЕ НАУКИ. Информатика 9 ИНФОРМАТИКА MOTION DETECTION IN VIDEO STREAM BASED ON BACKGROUND SUBTRACTION AND TARGET TRACKING
ФУНДАМЕНТАЛЬНЫЕ НАУКИ Информатика 9 ИНФОРМАТИКА UDC 6813 OTION DETECTION IN VIDEO STREA BASED ON BACKGROUND SUBTRACTION AND TARGET TRACKING R BOGUSH, S ALTSEV, N BROVKO, E IHAILOV (Polotsk State University
More informationAnnouncements. Computer Vision I. Motion Field Equation. Revisiting the small motion assumption. Visual Tracking. CSE252A Lecture 19.
Visual Tracking CSE252A Lecture 19 Hw 4 assigned Announcements No class on Thursday 12/6 Extra class on Tuesday 12/4 at 6:30PM in WLH Room 2112 Motion Field Equation Measurements I x = I x, T: Components
More informationCounting People from Multiple Cameras
Counting People from Multiple Cameras Vera Kettnaker Ramin Zabih Cornel1 University Ithaca, NY 14853 kettnake,rdz@cs.cornell.edu Abstract We are interested in the content analysis of video from a collection
More information2003 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes
2003 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or
More informationModel-based Visual Tracking:
Technische Universität München Model-based Visual Tracking: the OpenTL framework Giorgio Panin Technische Universität München Institut für Informatik Lehrstuhl für Echtzeitsysteme und Robotik (Prof. Alois
More informationA Feature Point Matching Based Approach for Video Objects Segmentation
A Feature Point Matching Based Approach for Video Objects Segmentation Yan Zhang, Zhong Zhou, Wei Wu State Key Laboratory of Virtual Reality Technology and Systems, Beijing, P.R. China School of Computer
More informationFace detection in a video sequence - a temporal approach
Face detection in a video sequence - a temporal approach K. Mikolajczyk R. Choudhury C. Schmid INRIA Rhône-Alpes GRAVIR-CNRS, 655 av. de l Europe, 38330 Montbonnot, France {Krystian.Mikolajczyk,Ragini.Choudhury,Cordelia.Schmid}@inrialpes.fr
More informationEE795: Computer Vision and Intelligent Systems
EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 FDH 204 Lecture 17 130402 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Review Background Subtraction Stauffer and Grimson
More informationUnusual Activity Analysis using Video Epitomes and plsa
Unusual Activity Analysis using Video Epitomes and plsa Ayesha Choudhary 1 Manish Pal 1 Subhashis Banerjee 1 Santanu Chaudhury 2 1 Dept. of Computer Science and Engineering, 2 Dept. of Electrical Engineering
More informationMotion Detection Using Adaptive Temporal Averaging Method
652 B. NIKOLOV, N. KOSTOV, MOTION DETECTION USING ADAPTIVE TEMPORAL AVERAGING METHOD Motion Detection Using Adaptive Temporal Averaging Method Boris NIKOLOV, Nikolay KOSTOV Dept. of Communication Technologies,
More informationBackground Subtraction Techniques
Background Subtraction Techniques Alan M. McIvor Reveal Ltd PO Box 128-221, Remuera, Auckland, New Zealand alan.mcivor@reveal.co.nz Abstract Background subtraction is a commonly used class of techniques
More informationObject Tracking using HOG and SVM
Object Tracking using HOG and SVM Siji Joseph #1, Arun Pradeep #2 Electronics and Communication Engineering Axis College of Engineering and Technology, Ambanoly, Thrissur, India Abstract Object detection
More informationReal-time Detection of Illegally Parked Vehicles Using 1-D Transformation
Real-time Detection of Illegally Parked Vehicles Using 1-D Transformation Jong Taek Lee, M. S. Ryoo, Matthew Riley, and J. K. Aggarwal Computer & Vision Research Center Dept. of Electrical & Computer Engineering,
More informationTracking of Human Body using Multiple Predictors
Tracking of Human Body using Multiple Predictors Rui M Jesus 1, Arnaldo J Abrantes 1, and Jorge S Marques 2 1 Instituto Superior de Engenharia de Lisboa, Postfach 351-218317001, Rua Conselheiro Emído Navarro,
More informationWP1: Video Data Analysis
Leading : UNICT Participant: UEDIN Fish4Knowledge Final Review Meeting - November 29, 2013 - Luxembourg Workpackage 1 Objectives Fish Detection: Background/foreground modeling algorithms able to deal with
More informationStochastic Road Shape Estimation, B. Southall & C. Taylor. Review by: Christopher Rasmussen
Stochastic Road Shape Estimation, B. Southall & C. Taylor Review by: Christopher Rasmussen September 26, 2002 Announcements Readings for next Tuesday: Chapter 14-14.4, 22-22.5 in Forsyth & Ponce Main Contributions
More informationReal-Time Tracking of Multiple People through Stereo Vision
Proc. of IEE International Workshop on Intelligent Environments, 2005 Real-Time Tracking of Multiple People through Stereo Vision S. Bahadori, G. Grisetti, L. Iocchi, G.R. Leone, D. Nardi Dipartimento
More informationTracking Multiple Moving Objects with a Mobile Robot
Tracking Multiple Moving Objects with a Mobile Robot Dirk Schulz 1 Wolfram Burgard 2 Dieter Fox 3 Armin B. Cremers 1 1 University of Bonn, Computer Science Department, Germany 2 University of Freiburg,
More informationInternational Journal of Advanced Research in Electronics and Communication Engineering (IJARECE) Volume 4, Issue 4, April 2015
A Survey on Human Motion Detection and Surveillance Miss. Sapana K.Mishra Dept. of Electronics and Telecommunication Engg. J.T.M.C.O.E., Faizpur, Maharashtra, India Prof. K.S.Bhagat Dept. of Electronics
More informationCs : Computer Vision Final Project Report
Cs 600.461: Computer Vision Final Project Report Giancarlo Troni gtroni@jhu.edu Raphael Sznitman sznitman@jhu.edu Abstract Given a Youtube video of a busy street intersection, our task is to detect, track,
More informationEE795: Computer Vision and Intelligent Systems
EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 FDH 204 Lecture 11 140311 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Motion Analysis Motivation Differential Motion Optical
More informationTracking Algorithms. Lecture16: Visual Tracking I. Probabilistic Tracking. Joint Probability and Graphical Model. Deterministic methods
Tracking Algorithms CSED441:Introduction to Computer Vision (2017F) Lecture16: Visual Tracking I Bohyung Han CSE, POSTECH bhhan@postech.ac.kr Deterministic methods Given input video and current state,
More informationHierarchical Part-Based Human Body Pose Estimation
Hierarchical Part-Based Human Body Pose Estimation R. Navaratnam A. Thayananthan P. H. S. Torr R. Cipolla University of Cambridge Oxford Brookes Univeristy Department of Engineering Cambridge, CB2 1PZ,
More informationContinuous Multi-View Tracking using Tensor Voting
Continuous Multi-View Tracking using Tensor Voting Jinman Kang, Isaac Cohen and Gerard Medioni Institute for Robotics and Intelligent Systems University of Southern California {jinmanka, icohen, medioni}@iris.usc.edu
More information