Multi-Person Tracking-by-Detection based on Calibrated Multi-Camera Systems
Xiaoyan Jiang, Erik Rodner, and Joachim Denzler
Computer Vision Group Jena, Friedrich Schiller University of Jena

Abstract. In this paper, we present an approach for automatically detecting and tracking a varying number of people in complex scenes. We follow a robust and fast framework that copes with unreliable detections from each camera by making extensive use of the multi-camera setup to handle occlusions and ambiguities. Instead of using the detections of each frame directly for tracking, we associate and combine the detections to form so-called tracklets. From the triangulation relationship between two views, the 3D trajectory is estimated and back-projected to provide valuable cues for particle filter tracking. Most importantly, a novel motion model considering different velocity cues is proposed for particle filter tracking. Experiments on the challenging PETS 09 dataset show the benefits of our approach and the integrated multi-camera extensions.

1 Introduction

Multi-object tracking is important for various applications in computer vision, such as visual surveillance, traffic control, sports analysis, or activity recognition. Since cameras are getting cheaper and tracking clearly benefits from multiple cameras with different views, it is sensible to track multiple objects based on calibrated multi-camera systems. In general, tracking multiple objects accurately and in real time is very challenging due to background clutter, occlusion between objects and background, and the appearance similarity between the objects to be tracked. Additional difficulties arise from the questions of how to reliably fuse the information from the individual cameras and how to perform robust global tracking in an efficient manner.
With the improvement of detection algorithms [1,2] both in accuracy and computational feasibility, tracking-by-detection has become one of the most popular tracking paradigms [3,4]. Targets to be tracked can be initialized by continuously applying detectors to single image frames. Typically, the output of a detector is a set of image regions with confidence scores. Incorporating temporal context is necessary here due to the large number of false positives and missed detections, as shown in the left part of Fig. 1. Recent tracking approaches [3,4,5,6] associate the detections and track objects using single, uncalibrated cameras.
Fig. 1. Overview of our framework: per-camera object detection, data association into tracklets, 3D trajectory estimation, and particle filter tracking. The left images show outputs of a person detector [1] on two views of the PETS 09 dataset, highlighting the necessity of tracking and 3D reasoning.

Additionally, multi-camera systems are also used for efficient tracking. Correspondences between observations of walking humans across multiple cameras can be established by geometric constraints such as a planar homography [7]. The work of [8] reconstructs the top view of the ground plane and maps the vertical axis of a person in each view to the top view, where the axes intersect at a single point that is assumed to be the location of the person on the ground. Many other approaches also rely on the ground plane assumption [9,10]. We instead utilize two or more calibrated cameras and fuse their information in a joint tracking-by-detection framework without homography restrictions or top-view images. As we will see later, this yields more precise results. In detail, detections in single camera images with lower confidence scores that are considered to belong to the same object are rejected first. Afterwards, detections are associated to form more reliable tracklets. Once tracklets are found, they are used to update the motion model as well as the target model for particle filter tracking. Secondly, global tracking based on estimating the 3D position is realized, where we focus primarily on occlusion reasoning from a geometrical point of view. Thirdly, particle filters are initialized with tracklets and the motion is estimated in subsequent frames.

The paper is structured as follows. The details of our algorithm are presented in Section 2. Experiments on PETS 09 are evaluated qualitatively and compared to other state-of-the-art algorithms in Section 3.
Finally, Section 4 concludes the paper with a summary and an outlook.

2 Tracking-by-Detection with Multi-Camera Systems

A block diagram of our tracking approach is shown in Fig. 1. The algorithm consists of three main parts: data association, particle filters for tracking, and 3D trajectory estimation. First of all, we use data association techniques to
combine detections from k consecutive frames into tracklets. Each tracklet is then associated with a particle filter. The key idea is that the particle filter additionally uses the estimated 3D trajectory to fuse the information from multiple cameras. Furthermore, back-projecting this 3D trajectory into every view allows for recovering from missed detections. To eliminate duplicate tracking results, we use several similarity measurements based on appearance features as well as geometric reasoning.

2.1 Data Association

Initializing a particle filter tracker directly for each detection in each frame may lead to unreliable tracking results. Therefore, we use an intuitive matching criterion to find corresponding detections in k subsequent frames. If such a correspondence is found, the detections in all k frames define a tracklet and are taken into account for particle filter tracking. This reduces the number of false positive detections to a large extent. The correspondences are found by greedy association, which showed results comparable to solving the assignment problem with the Hungarian algorithm [3]. We basically follow the work of [3], but modify it in two important aspects: first, we associate the detections in single cameras to obtain more reliable tracklets; second, a calibrated multi-camera system is used to estimate the 3D trajectory, whose projections into each camera provide valuable cues, especially in the case of occlusions and ambiguities.

To find the best associated detections at time t in camera i, we consider the Manhattan distance and the overlap ratio between each current detection d and the previous detection of a tracklet T using a gating function:

g(d, T) = \begin{cases} 1 & \text{if } o(d, T) \geq \sigma \text{ and } M(d, T) \leq \xi(T) \\ 0 & \text{otherwise} \end{cases} \quad (1)

The value g(d, T) = 1 indicates that the detection belongs to the tracklet considered.
In equation (1), the overlap ratio o(d, T) between the two regions is defined as

o(d, T) = \frac{2\,|d \cap T|}{|d \cup T|}, \quad (2)

where we interpret d and T as sets of image pixels. The parameter σ is set experimentally. The Manhattan distance M(d, T) is thresholded depending on the size of the tracklet:

\xi(T) = \alpha \, [\mathrm{height}(T) + \mathrm{width}(T)], \quad (3)

where the parameter α < 1 is chosen manually. If a detection passes the gating function, it is associated with the corresponding tracklet. Furthermore, the input to the thresholding operation is a set of detections ranked by score, of which the detections with higher scores that satisfy the above conditions are selected first.
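As a concrete illustration, the greedy gating-based association of Eqs. (1)-(3) might be sketched as follows. The box format (x, y, width, height) with an appended score, and the parameter defaults, are our own assumptions rather than the authors' implementation:

```python
def overlap(d, t):
    """Overlap ratio o(d, T) of Eq. (2), with boxes as (x, y, w, h)."""
    ix = max(0, min(d[0] + d[2], t[0] + t[2]) - max(d[0], t[0]))
    iy = max(0, min(d[1] + d[3], t[1] + t[3]) - max(d[1], t[1]))
    inter = ix * iy
    union = d[2] * d[3] + t[2] * t[3] - inter
    return 2.0 * inter / union if union > 0 else 0.0

def manhattan(d, t):
    """Manhattan distance M(d, T) between the box centers."""
    return (abs((d[0] + d[2] / 2) - (t[0] + t[2] / 2))
            + abs((d[1] + d[3] / 2) - (t[1] + t[3] / 2)))

def gate(d, t, sigma=0.4, alpha=0.5):
    """Gating function g(d, T) of Eq. (1), with xi(T) from Eq. (3)."""
    xi = alpha * (t[3] + t[2])  # alpha * (height + width) of tracklet box
    return 1 if overlap(d, t) >= sigma and manhattan(d, t) <= xi else 0

def associate(detections, tracklets):
    """Greedily extend tracklets with score-ranked detections.

    `detections` are (x, y, w, h, score); `tracklets` hold their last box.
    Returns a dict mapping detection index -> tracklet index.
    """
    assignments, used = {}, set()
    for i, d in sorted(enumerate(detections), key=lambda p: -p[1][4]):
        for j, t in enumerate(tracklets):
            if j not in used and gate(d[:4], t):
                assignments[i] = j
                used.add(j)
                break
    return assignments
```

Processing detections in descending score order implements the preference for high-confidence detections stated above; each tracklet accepts at most one detection per frame.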
2.2 Data Fusion from Multiple Views

Fusing information from multiple views is done by reconstructing the 3D position of the centroid of the object. From epipolar geometry it is known that corresponding points in two cameras are constrained to lie on epipolar lines [11]. The correspondence of detections is obtained by combining the Euclidean distances between the detection centers and the respective epipolar lines with the appearance similarity between the detections. Afterwards, we estimate the 3D position of the object center from two views using triangulation [11].

3D Trajectory. After obtaining all possible 3D points from all cameras, 3D points that are near to each other are considered to belong to the same object and are merged by simple averaging of their 3D vectors. The 3D trajectory of an object is formed by associating the 3D points frame by frame.

Occlusion Reasoning. The most challenging part of multi-object tracking is handling occlusion. When there is inter-object occlusion, or an object is occluded by background objects, the target will be partially (or totally) invisible in some views. This can cause both detection and tracking to fail. Therefore, many works incorporate occlusion reasoning to improve the results [3]. In [3], inter-object occlusion reasoning is done by considering an intermediate detection confidence as part of the observation model of the particle filters. In [12], occlusion is taken into account by calculating the visible parts of the object, and trajectory estimation is achieved by minimizing an energy function. In contrast, we want to take advantage of multiple views and perform inter-object occlusion reasoning from a geometrical point of view.
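The two-view reconstruction and the merging of nearby 3D points described above can be sketched as follows. This is the standard linear (DLT) triangulation of [11], not the authors' code; the matrix names and the merge threshold are illustrative:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover the 3D point projecting to x1 in camera 1 and x2 in
    camera 2, given 3x4 projection matrices P1, P2 (linear DLT)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                    # null vector of A (homogeneous point)
    return X[:3] / X[3]           # dehomogenise

def merge_nearby(points, threshold=1000.0):
    """Average 3D points closer than `threshold` world units; such
    points are assumed to belong to the same object."""
    merged, used = [], [False] * len(points)
    for i, p in enumerate(points):
        if used[i]:
            continue
        group = [p]
        for j in range(i + 1, len(points)):
            if not used[j] and np.linalg.norm(points[j] - p) < threshold:
                group.append(points[j])
                used[j] = True
        merged.append(np.mean(group, axis=0))
    return merged
```

The threshold of 1000 world units matches the value used in the experiments below; a real implementation would also weight the merge by reprojection error.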
We distinguish two kinds of occlusion. First, if an object was previously trackable and one or more other trackable objects are nearby, we regard this object as occluded by another object, regardless of how much of it is invisible. Second, if one detection in a view corresponds to more than one detection in other views over several frames, we assume that there is occlusion between these detections. After obtaining the tracklets and the 3D trajectories of the objects, both are used to update the target model and the motion model of the particle filter, as explained in the next section.

2.3 Kernel-based Particle Filter

In cluttered environments, the assumption of Gaussian distributed object states does not hold. Particle filters, in contrast, are able to model flexible, multi-modal distributions and are therefore better suited for tracking than Kalman filters [13]. Particle filters use a set of weighted samples, referred to as particles, to model the posterior distribution [13]:

\chi_t = \{ x_t^{[1]}, x_t^{[2]}, \ldots, x_t^{[M]} \}, \quad (4)

where M is the number of particles, which is constant in our case, and x_t^{[m]} (with 1 ≤ m ≤ M) is a hypothesis of the 2D state (x, y, h_x, h_y) (object center and half of the width and height of the object) at time t.
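The weighted particle set of Eq. (4) can be sketched minimally as below; the helper names are our own, the motion and observation models are filled in by the following paragraphs, and M = 100 follows the experiments:

```python
import random

def init_particles(region, M=100):
    """Spread M particles uniformly over the initial region (x, y, hx, hy),
    each with equal weight 1/M."""
    x, y, hx, hy = region
    return [((random.uniform(x - hx, x + hx),
              random.uniform(y - hy, y + hy), hx, hy), 1.0 / M)
            for _ in range(M)]

def normalize(particles):
    """Renormalize weights so they sum to one after an observation update."""
    total = sum(w for _, w in particles)
    return [(s, w / total) for s, w in particles]

def weighted_mean(particles):
    """Weighted mean of the particle states, i.e. the first term used in
    the final state estimate of Eq. (6)."""
    return tuple(sum(w * s[k] for s, w in particles) for k in range(4))
```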
Initialization and Termination. Trackers are initialized from two different sources: newly assigned tracklets, and projections of the 3D trajectories of an object that lie within detections. We also check that these cues have not already been associated with an existing tracker. Initially, particles are distributed uniformly over the initial region with equal weight 1/M. The appearance model of a target or a candidate is a kernel-based RGB histogram [14]. The kernel assigns higher weights to samples close to the centroid of the target region, reducing the effect of peripheral samples, which are more likely to be corrupted by occluding background. After initialization, particles are propagated to new hypothesis states according to the motion model.

Propagation. In some papers, such as [3], people are assumed to walk with constant velocity and the motion model is configured empirically in advance. This assumption is violated when people stop walking for a period of time or speed up, and it can cause particles to converge to entirely wrong positions or scales because of ambiguities in the scene. Therefore, we utilize a more robust motion model to guide the particles. The motion model combines three different velocities:

v = \beta v_T + \eta v_O + \gamma v_S \quad \text{with} \quad \beta + \eta + \gamma = 1, \quad (5)

where v_T is the velocity of the associated tracklet at the current time step t, v_O is the velocity of the tracker at the previous time step t-1, and v_S is the velocity of the back-projection of the corresponding 3D trajectory at the current time step t. In normal situations β, η, and γ are set equally. During occlusion, the detections associated with a tracker in a single view are considered unreliable; thus, β is lowered while γ is given a higher weight to incorporate useful information from the other cameras. Besides that, v should not exceed a maximum velocity.
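The blended motion model of Eq. (5), including the occlusion-dependent reweighting and the velocity cap described next, might look as follows. The concrete weight values and the cap are our assumptions, not values from the paper:

```python
def blended_velocity(v_tracklet, v_tracker, v_3d, occluded=False,
                     v_max=40.0):
    """Blend the tracklet velocity v_T, previous tracker velocity v_O,
    and back-projected 3D trajectory velocity v_S as in Eq. (5)."""
    if occluded:
        beta, eta, gamma = 0.1, 0.3, 0.6   # distrust single-view detections
    else:
        beta, eta, gamma = 1/3, 1/3, 1/3   # equal weights, beta+eta+gamma=1
    v = tuple(beta * t + eta * o + gamma * s
              for t, o, s in zip(v_tracklet, v_tracker, v_3d))
    speed = sum(c * c for c in v) ** 0.5
    # Cap at v_max (e.g. twice a normal walking speed, in pixels/frame);
    # otherwise fall back to the previous velocity to avoid abrupt jumps.
    return v if speed <= v_max else v_tracker
```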
For walking people, this maximum velocity can be defined as twice the normal speed of a person. If the limit is exceeded, v is set to the previous velocity to avoid abrupt movements. One advantage of this motion model is that the particles can still propagate correctly during occlusion by fusing useful cues from the other cameras.

Observation. Given the propagated particles, the weight of each particle is obtained from the Bhattacharyya coefficient [14] between the candidate and the target. Most importantly, the target model is only updated from an associated detection that is fully visible and satisfies appearance consistency. Appearance consistency is based on the reasonable assumption that the appearance of a person does not change significantly between two consecutive frames. The final state is estimated by combining the mean and the maximum of the modeled posterior distribution:

x_t = \frac{1}{2} \left( \sum_{m=1}^{M} w_t^{[m]} x_t^{[m]} + x_t^{[m^*]} \right) \quad \text{with} \quad m^* = \arg\max_{m=1,\ldots,M} w_t^{[m]}, \quad (6)
where x_t is the final state at time t and w_t^{[m]} is the corresponding weight of particle x_t^{[m]}.

3 Experiments

Datasets. Few publicly available datasets are suitable for multi-person tracking with multi-camera systems. Since PETS09/S2.L1 contains different types of human movement, different types of occlusion, and cameras at different angles with different illumination and resolutions, we chose this challenging dataset for testing our algorithm. In all experiments, we neither train the detector specifically for this dataset nor perform background subtraction. We use the detector of [1] with the corresponding source code and learned models. Furthermore, no dataset-specific information is used, which allows our algorithm to be applied to other scenarios and datasets. The ground truth of the first view was provided by Anton Andriyenko.

Experimental Setup. We use the CLEAR MOT metrics [15] and the evaluation program of [6] to analyze the tracking performance. We compare our algorithm with state-of-the-art results on PETS 09 S2.L1 in Table 1, where the performance of the other methods is taken from [3]. MOTP measures the precision of the estimated positions, and MOTA accounts for misses, false positives, and mismatches [15]. The parameter k is set to 3 for this dataset, and we use 1000 units in the world coordinate system [6] as the threshold for recognizing 3D points that belong to the same object. Furthermore, 8 bins per color channel are used to compute the RGB histograms, and σ is set to 0.4. M = 100 particles are used per tracker.

Evaluation. Fig. 2 shows six sample images from the dataset. Different colors identify the different objects with their corresponding 2D trajectories. The images show that most people can be tracked correctly even during occlusion.
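For reference, the CLEAR MOT scores reported in Table 1 are computed roughly as follows. This is a sketch after [15] under our own simplifications: MOTP is taken here as the mean overlap of matched pairs (so that higher is better, as in Table 1), and the per-frame matching step itself (e.g. Hungarian assignment) is omitted:

```python
def clear_mot(frames):
    """MOTA/MOTP from per-frame counts. Each frame dict holds the numbers
    of misses, false positives, identity mismatches, ground-truth objects,
    matched pairs, and the summed overlap of those matches."""
    misses = sum(f["misses"] for f in frames)
    fps = sum(f["false_positives"] for f in frames)
    mme = sum(f["mismatches"] for f in frames)
    gt = sum(f["gt"] for f in frames)
    overlap = sum(f["overlap_sum"] for f in frames)
    matches = sum(f["matches"] for f in frames)
    mota = 1.0 - (misses + fps + mme) / gt   # higher is better, can be < 0
    motp = overlap / matches                 # mean overlap of matched pairs
    return mota, motp
```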
Sometimes two or more trackers are assigned to the same object, arising partly from the initialization of trackers by back-projected 3D positions that lie within detections, and partly from missed detections that split long tracklets into several shorter ones. The precision of the multi-camera calibration has a heavy effect on the final results, since the 3D positions of the objects have a large impact, especially during occlusion. Evaluation results are shown in Table 1. Our approach outperforms the other methods with respect to the MOTP value. This is mainly due to the cues from multiple cameras, which are especially useful because of missing detections in the first view. Compared to other state-of-the-art methods, the MOTA value of our approach is lower. This is mainly caused by reassignment of the same objects, or by the model update of the particle filters when groups of people split and merge again. We plan to overcome these issues by not
Fig. 2. Tracking results: the top and bottom rows show results in views 1 and 5, respectively.

Table 1. Results of our approach on PETS 09/S2.L1 compared to the state of the art.

Algorithm                 MOTP    MOTA
Our approach              78.8%   60.8%
Breitenstein et al. [3]   56.3%   79.7%
Yang et al. [16]          53.8%   75.9%
Berclaz et al. [9]        60.0%   66.0%
Andriyenko et al. [6]     76.1%   81.4%

only considering the centroid of tracked objects but also their complete 3D shape, to allow for more exact data association.

Runtime Performance. The entire system is implemented in C++, except that the detection uses the MATLAB source code of [1]; no GPU processing is used. For tracking, excluding the time used for detection, the average runtime is 2.6 seconds per time step for processing the images of 7 cameras. Time measurements were done on a standard PC with an Intel Core i5 2.8 GHz processor.

4 Conclusion

In this paper, we proposed a tracking-by-detection framework for multi-camera systems. We showed that estimating the 3D positions of the tracked objects helps to resolve ambiguities and provides more robustness to occlusions. Our framework is based on an efficient detection algorithm, several intuitive rejection rules, and the combination of several appearance and geometry cues. In particular, the performance during occlusion is improved by integrating cues from multiple cameras. This is reflected in the novel motion model incorporated in the particle filters, where the importance of the 3D trajectory is raised during occlusion. For future research, we plan to integrate more complex motion models and the uncertainties derived from 3D position estimation. A more accurate target model is also worth considering. Furthermore, we will record our own large-scale dataset in a challenging outdoor environment to allow for a more realistic evaluation.

Acknowledgements: We would like to thank our colleagues for their comments and suggestions.

References
1. Felzenszwalb, P., Girshick, R., McAllester, D.: Cascade object detection with deformable part models. In: CVPR. (2010)
2. Dalal, N., Triggs, B.: Histograms of oriented gradients for human detection. In: CVPR. (2005)
3. Breitenstein, M.D., Reichlin, F., Leibe, B., Koller-Meier, E., Van Gool, L.: Online multi-person tracking-by-detection from a single, uncalibrated camera. PAMI (2010)
4. Huang, C., Wu, B., Nevatia, R.: Robust object tracking by hierarchical association of detection responses. In: ECCV. (2008)
5. Wu, B., Nevatia, R.: Detection and tracking of multiple, partially occluded humans by bayesian combination of edgelet based part detectors. International Journal of Computer Vision 75(2) (2007)
6. Andriyenko, A., Schindler, K.: Multi-target tracking by continuous energy minimization. In: CVPR. (2011)
7. Khan, S.M., Shah, M.: A multiview approach to tracking people in crowded scenes using a planar homography constraint. In: ECCV. (2006)
8. Kim, K., Davis, L.S.: Multi-camera tracking and segmentation of occluded people on ground plane using search-guided particle filtering. In: ECCV. (2006)
9. Berclaz, J., Fleuret, F., Fua, P.: Robust people tracking with global trajectory optimization. In: CVPR. (2006)
10. Fleuret, F., Berclaz, J., Lengagne, R., Fua, P.: Multi-camera people tracking with a probabilistic occupancy map.
IEEE Transactions on Pattern Analysis and Machine Intelligence 30 (2008)
11. Hartley, R., Zisserman, A.: Multiple View Geometry in Computer Vision. Cambridge University Press, Cambridge (2006)
12. Andriyenko, A., Roth, S., Schindler, K.: An analytical formulation of global occlusion reasoning for multi-target tracking. In: ICCV Workshops. (2011)
13. Nummiaro, K., Koller-Meier, E., Van Gool, L.: Color features for tracking non-rigid objects. ACTA Automatica Sinica (2003)
14. Comaniciu, D., Ramesh, V., Meer, P.: Kernel-based object tracking. PAMI 25 (2003)
15. Bernardin, K., Stiefelhagen, R.: Evaluating multiple object tracking performance: The CLEAR MOT metrics. EURASIP Journal on Image and Video Processing (2008)
16. Yang, J., Shi, Z., Vela, P., Teizer, J.: Probabilistic multiple people tracking through complex situations. In: IEEE Workshop on Performance Evaluation of Tracking and Surveillance. (2009)
More informationMulti-Targets Tracking Based on Bipartite Graph Matching
BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 14, Special Issue Sofia 014 Print ISSN: 1311-970; Online ISSN: 1314-4081 DOI: 10.478/cait-014-0045 Multi-Targets Tracking Based
More informationEfficient Acquisition of Human Existence Priors from Motion Trajectories
Efficient Acquisition of Human Existence Priors from Motion Trajectories Hitoshi Habe Hidehito Nakagawa Masatsugu Kidode Graduate School of Information Science, Nara Institute of Science and Technology
More informationCascaded Confidence Filtering for Improved Tracking-by-Detection
Cascaded Confidence Filtering for Improved Tracking-by-Detection Severin Stalder 1, Helmut Grabner 1, and Luc Van Gool 1,2 1 Computer Vision Laboratory, ETH Zurich, Switzerland {sstalder,grabner,vangool}@vision.ee.ethz.ch
More informationHomography Based Multiple Camera Detection and Tracking of People in a Dense Crowd
Homography Based Multiple Camera Detection and Tracking of People in a Dense Crowd Ran Eshel and Yael Moses Efi Arazi School of Computer Science, The Interdisciplinary Center, Herzliya 46150, Israel Abstract
More informationOcclusion Robust Symbol Level Fusion for Multiple People Tracking
Nyan Bo Bo, Peter Veelaert and Wilfried Philips imec-ipi-ugent, Sint-Pietersnieuwstraat 41, Ghent 9000, Belgium {Nyan.BoBo, Peter.Veelaert, Wilfried.Philips}@ugent.be Keywords: Abstract: Multi-camera Tracking,
More informationA novel template matching method for human detection
University of Wollongong Research Online Faculty of Informatics - Papers (Archive) Faculty of Engineering and Information Sciences 2009 A novel template matching method for human detection Duc Thanh Nguyen
More informationMulti-Camera Calibration, Object Tracking and Query Generation
MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.merl.com Multi-Camera Calibration, Object Tracking and Query Generation Porikli, F.; Divakaran, A. TR2003-100 August 2003 Abstract An automatic object
More informationROBUST MULTIPLE OBJECT TRACKING BY DETECTION WITH INTERACTING MARKOV CHAIN MONTE CARLO. S.Santhoshkumar, S. Karthikeyan, B.S.
ROBUST MULTIPLE OBJECT TRACKING BY DETECTION WITH INTERACTING MARKOV CHAIN MONTE CARLO S.Santhoshkumar, S. Karthikeyan, B.S. Manjunath Department of Electrical and Computer Engineering University of California
More informationContinuous Multi-View Tracking using Tensor Voting
Continuous Multi-View Tracking using Tensor Voting Jinman Kang, Isaac Cohen and Gerard Medioni Institute for Robotics and Intelligent Systems University of Southern California {jinmanka, icohen, medioni}@iris.usc.edu
More informationTarget Tracking Using Mean-Shift And Affine Structure
Target Tracking Using Mean-Shift And Affine Structure Chuan Zhao, Andrew Knight and Ian Reid Department of Engineering Science, University of Oxford, Oxford, UK {zhao, ian}@robots.ox.ac.uk Abstract Inthispaper,wepresentanewapproachfortracking
More informationDiscrete-Continuous Optimization for Multi-Target Tracking
Discrete-Continuous Optimization for Multi-Target Tracking Anton Andriyenko 1 Konrad Schindler 2 Stefan Roth 1 1 Department of Computer Science, TU Darmstadt 2 Photogrammetry and Remote Sensing Group,
More informationDeformable Part Models
CS 1674: Intro to Computer Vision Deformable Part Models Prof. Adriana Kovashka University of Pittsburgh November 9, 2016 Today: Object category detection Window-based approaches: Last time: Viola-Jones
More informationAutomatic Tracking of Moving Objects in Video for Surveillance Applications
Automatic Tracking of Moving Objects in Video for Surveillance Applications Manjunath Narayana Committee: Dr. Donna Haverkamp (Chair) Dr. Arvin Agah Dr. James Miller Department of Electrical Engineering
More informationCar tracking in tunnels
Czech Pattern Recognition Workshop 2000, Tomáš Svoboda (Ed.) Peršlák, Czech Republic, February 2 4, 2000 Czech Pattern Recognition Society Car tracking in tunnels Roman Pflugfelder and Horst Bischof Pattern
More informationHypergraphs for Joint Multi-View Reconstruction and Multi-Object Tracking
2013 IEEE Conference on Computer Vision and Pattern Recognition Hypergraphs for Joint Multi-View Reconstruction and Multi-Object racking Martin Hofmann 1, Daniel Wolf 1,2, Gerhard Rigoll 1 1 Institute
More informationMulti-Object Tracking Based on Tracking-Learning-Detection Framework
Multi-Object Tracking Based on Tracking-Learning-Detection Framework Songlin Piao, Karsten Berns Robotics Research Lab University of Kaiserslautern Abstract. This paper shows the framework of robust long-term
More informationLearning to Associate: HybridBoosted Multi-Target Tracker for Crowded Scene
Learning to Associate: HybridBoosted Multi-Target Tracker for Crowded Scene Yuan Li, Chang Huang and Ram Nevatia University of Southern California, Institute for Robotics and Intelligent Systems Los Angeles,
More informationMotion Tracking and Event Understanding in Video Sequences
Motion Tracking and Event Understanding in Video Sequences Isaac Cohen Elaine Kang, Jinman Kang Institute for Robotics and Intelligent Systems University of Southern California Los Angeles, CA Objectives!
More informationarxiv: v1 [cs.cv] 10 Jul 2014
ARTOS Adaptive Real-Time Object Detection System Björn Barz Erik Rodner bjoern.barz@uni-jena.de erik.rodner@uni-jena.de arxiv:1407.2721v1 [cs.cv] 10 Jul 2014 Joachim Denzler Computer Vision Group Friedrich
More informationarxiv: v2 [cs.cv] 25 Aug 2014
ARTOS Adaptive Real-Time Object Detection System Björn Barz Erik Rodner bjoern.barz@uni-jena.de erik.rodner@uni-jena.de arxiv:1407.2721v2 [cs.cv] 25 Aug 2014 Joachim Denzler Computer Vision Group Friedrich
More informationFace detection in a video sequence - a temporal approach
Face detection in a video sequence - a temporal approach K. Mikolajczyk R. Choudhury C. Schmid INRIA Rhône-Alpes GRAVIR-CNRS, 655 av. de l Europe, 38330 Montbonnot, France {Krystian.Mikolajczyk,Ragini.Choudhury,Cordelia.Schmid}@inrialpes.fr
More informationSelective Search for Object Recognition
Selective Search for Object Recognition Uijlings et al. Schuyler Smith Overview Introduction Object Recognition Selective Search Similarity Metrics Results Object Recognition Kitten Goal: Problem: Where
More informationColour Segmentation-based Computation of Dense Optical Flow with Application to Video Object Segmentation
ÖGAI Journal 24/1 11 Colour Segmentation-based Computation of Dense Optical Flow with Application to Video Object Segmentation Michael Bleyer, Margrit Gelautz, Christoph Rhemann Vienna University of Technology
More informationECSE-626 Project: An Adaptive Color-Based Particle Filter
ECSE-626 Project: An Adaptive Color-Based Particle Filter Fabian Kaelin McGill University Montreal, Canada fabian.kaelin@mail.mcgill.ca Abstract The goal of this project was to discuss and implement a
More informationOnline multi-object tracking by detection based on generative appearance models
Online multi-object tracking by detection based on generative appearance models Dorra Riahi a,, Guillaume-Alexandre Bilodeau a a LITIV lab., Department of Computer and Software Engineering, Polytechnique
More informationMulti-view stereo. Many slides adapted from S. Seitz
Multi-view stereo Many slides adapted from S. Seitz Beyond two-view stereo The third eye can be used for verification Multiple-baseline stereo Pick a reference image, and slide the corresponding window
More informationDetection of Multiple, Partially Occluded Humans in a Single Image by Bayesian Combination of Edgelet Part Detectors
Detection of Multiple, Partially Occluded Humans in a Single Image by Bayesian Combination of Edgelet Part Detectors Bo Wu Ram Nevatia University of Southern California Institute for Robotics and Intelligent
More informationPeople Detection and Tracking with World-Z Map from a Single Stereo Camera
Author manuscript, published in "Dans The Eighth International Workshop on Visual Surveillance - VS2008 (2008)" People Detection and Tracking with World-Z Map from a Single Stereo Camera Sofiane Yous Trinity
More informationMultiple camera people detection and tracking using support integration
Multiple camera people detection and tracking using support integration Thiago T. Santos, Carlos H. Morimoto Institute of Mathematics and Statistics University of São Paulo Rua do Matão, 1010-05508-090
More informationDetecting and Identifying Moving Objects in Real-Time
Chapter 9 Detecting and Identifying Moving Objects in Real-Time For surveillance applications or for human-computer interaction, the automated real-time tracking of moving objects in images from a stationary
More informationStructure from Motion. Prof. Marco Marcon
Structure from Motion Prof. Marco Marcon Summing-up 2 Stereo is the most powerful clue for determining the structure of a scene Another important clue is the relative motion between the scene and (mono)
More informationZheng Wu 1, Nickolay I. Hristov 2, Thomas H. Kunz 3, Margrit Betke 1 1 Department of Computer Science, Boston University. Abstract. 1.
Tracking-Reconstruction or Reconstruction-Tracking? Comparison of Two Multiple Hypothesis Tracking Approaches to Interpret 3D Object Motion from Several Camera Views Zheng Wu 1, Nickolay I. Hristov 2,
More informationScale and Rotation Invariant Approach to Tracking Human Body Part Regions in Videos
Scale and Rotation Invariant Approach to Tracking Human Body Part Regions in Videos Yihang Bo Institute of Automation, CAS & Boston College yihang.bo@gmail.com Hao Jiang Computer Science Department, Boston
More informationThree-Dimensional Object Detection and Layout Prediction using Clouds of Oriented Gradients
ThreeDimensional Object Detection and Layout Prediction using Clouds of Oriented Gradients Authors: Zhile Ren, Erik B. Sudderth Presented by: Shannon Kao, Max Wang October 19, 2016 Introduction Given an
More informationFast and robust multi-people tracking from RGB-D data for a mobile robot
Fast and robust multi-people tracking from RGB-D data for a mobile robot Filippo Basso, Matteo Munaro, Stefano Michieletto, and Emanuele Menegatti Department of Information Engineering University of Padova
More informationLatent Data Association: Bayesian Model Selection for Multi-target Tracking
2013 IEEE International Conference on Computer Vision Latent Data Association: Bayesian Model Selection for Multi-target Tracking Aleksandr V. Segal Department of Engineering Science, University of Oxford
More informationFeature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies
Feature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies M. Lourakis, S. Tzurbakis, A. Argyros, S. Orphanoudakis Computer Vision and Robotics Lab (CVRL) Institute of
More informationReal-time Detection of Illegally Parked Vehicles Using 1-D Transformation
Real-time Detection of Illegally Parked Vehicles Using 1-D Transformation Jong Taek Lee, M. S. Ryoo, Matthew Riley, and J. K. Aggarwal Computer & Vision Research Center Dept. of Electrical & Computer Engineering,
More informationObject Detection with Partial Occlusion Based on a Deformable Parts-Based Model
Object Detection with Partial Occlusion Based on a Deformable Parts-Based Model Johnson Hsieh (johnsonhsieh@gmail.com), Alexander Chia (alexchia@stanford.edu) Abstract -- Object occlusion presents a major
More informationTowards the evaluation of reproducible robustness in tracking-by-detection
Towards the evaluation of reproducible robustness in tracking-by-detection Francesco Solera Simone Calderara Rita Cucchiara Department of Engineering Enzo Ferrari University of Modena and Reggio Emilia
More informationObject Tracking using HOG and SVM
Object Tracking using HOG and SVM Siji Joseph #1, Arun Pradeep #2 Electronics and Communication Engineering Axis College of Engineering and Technology, Ambanoly, Thrissur, India Abstract Object detection
More informationRobust Online Multi-Object Tracking based on Tracklet Confidence and Online Discriminative Appearance Learning
Robust Online Multi-Object Tracking based on Tracklet Confidence and Online Discriminative Appearance Learning Seung-Hwan Bae and Kuk-Jin Yoon Computer Vision Laboratory, GIST, Korea {bshwan, kjyoon}@gist.ac.kr
More informationMultiple camera fusion based on DSmT for tracking objects on ground plane
Multiple camera fusion based on DSmT for tracking objects on ground plane Esteban Garcia and Leopoldo Altamirano National Institute for Astrophysics, Optics and Electronics Puebla, Mexico eomargr@inaoep.mx,
More informationA GEOMETRIC SEGMENTATION APPROACH FOR THE 3D RECONSTRUCTION OF DYNAMIC SCENES IN 2D VIDEO SEQUENCES
A GEOMETRIC SEGMENTATION APPROACH FOR THE 3D RECONSTRUCTION OF DYNAMIC SCENES IN 2D VIDEO SEQUENCES Sebastian Knorr, Evren mre, A. Aydın Alatan, and Thomas Sikora Communication Systems Group Technische
More informationGMCP-Tracker: Global Multi-object Tracking Using Generalized Minimum Clique Graphs
GMCP-Tracker: Global Multi-object Tracking Using Generalized Minimum Clique Graphs Amir Roshan Zamir, Afshin Dehghan, and Mubarak Shah UCF Computer Vision Lab, Orlando, FL 32816, USA Abstract. Data association
More informationContinuous Multi-Views Tracking using Tensor Voting
Continuous Multi-Views racking using ensor Voting Jinman Kang, Isaac Cohen and Gerard Medioni Institute for Robotics and Intelligent Systems University of Southern California Los Angeles, CA 90089-073.
More informationA multiple hypothesis tracking method with fragmentation handling
A multiple hypothesis tracking method with fragmentation handling Atousa Torabi École Polytechnique de Montréal P.O. Box 6079, Station Centre-ville Montréal (Québec), Canada, H3C 3A7 atousa.torabi@polymtl.ca
More informationOcclusion Robust Multi-Camera Face Tracking
Occlusion Robust Multi-Camera Face Tracking Josh Harguess, Changbo Hu, J. K. Aggarwal Computer & Vision Research Center / Department of ECE The University of Texas at Austin harguess@utexas.edu, changbo.hu@gmail.com,
More informationGeometry-Based Optic Disk Tracking in Retinal Fundus Videos
Geometry-Based Optic Disk Tracking in Retinal Fundus Videos Anja Kürten, Thomas Köhler,2, Attila Budai,2, Ralf-Peter Tornow 3, Georg Michelson 2,3, Joachim Hornegger,2 Pattern Recognition Lab, FAU Erlangen-Nürnberg
More informationFast and robust multi-people tracking from RGB-D data for a mobile robot
Fast and robust multi-people tracking from RGB-D data for a mobile robot Filippo Basso, Matteo Munaro, Stefano Michieletto, Enrico Pagello, and Emanuele Menegatti Abstract This paper proposes a fast and
More informationEE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm
EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm Group 1: Mina A. Makar Stanford University mamakar@stanford.edu Abstract In this report, we investigate the application of the Scale-Invariant
More informationObserving people with multiple cameras
First Short Spring School on Surveillance (S 4 ) May 17-19, 2011 Modena,Italy Course Material Observing people with multiple cameras Andrea Cavallaro Queen Mary University, London (UK) Observing people
More informationTri-modal Human Body Segmentation
Tri-modal Human Body Segmentation Master of Science Thesis Cristina Palmero Cantariño Advisor: Sergio Escalera Guerrero February 6, 2014 Outline 1 Introduction 2 Tri-modal dataset 3 Proposed baseline 4
More informationEECS 442: Final Project
EECS 442: Final Project Structure From Motion Kevin Choi Robotics Ismail El Houcheimi Robotics Yih-Jye Jeffrey Hsu Robotics Abstract In this paper, we summarize the method, and results of our projective
More information