Robust Data Fusion in a Visual Sensor Multi-Agent Architecture


F. Castanedo, M.A. Patricio, J. García and J.M. Molina
University Carlos III of Madrid, Computer Science Department, Applied Artificial Intelligence Group
Avda. Universidad Carlos III 22, Colmenarejo (Madrid)
{fcastane, mpatrici, jgherrer}@inf.uc3m.es, molina@ia.uc3m.es

Abstract. A surveillance system that fuses data from several data sources is more robust than one that depends on a single source of input. Fusing the information acquired by a vision system is a difficult task, since the system needs reliable error models and must take into account poor performance when taking measurements. In this research, we use a bidimensional object correspondence and tracking method based on the ground-plane projection of the blob centroid. We propose a robust method that employs a two-phase algorithm, using a heuristic value and context information to automatically combine each source of information. The fusion process is carried out by a fusion agent in a multi-agent surveillance system. Experimental results on real video sequences have shown the effectiveness and robustness of the system.

Keywords: Multi Camera Image Fusion, Distributed Surveillance Systems, Multi-Agent Systems

I. INTRODUCTION

Data fusion from multiple cameras observing the same object is one of the main challenges in multi-camera surveillance systems [1], and it concerns combining data from different sources in an optimal way [2]. The data fusion process in multi-sensor networks is the main step in building a coherent time-space description of the interesting objects in the area (the level-one fusion task [3] [5]). To do so, it is necessary to estimate the reliability of the available sensors and processes, so that complementary information can be combined (by removing redundant information) in areas with multiple views, solving sensor-specific problems such as occlusions, overlaps, shadows, etc.
Besides extended spatial coverage, traditional advantages include improved accuracy through combination by means of covariance reduction, robustness through the identification of malfunctioning sensors, and improved continuity through complementary detections [5] [6] [7]. To achieve these goals, some of the basic aspects to be taken into account when fusing data from different cameras are:
- transformation into a common coordinate space (global coordinates) and synchronization on a common time basis.
- dynamic time-space alignment (or recalibration) to guarantee unbiased information to fuse.
- removal of corrupted or wrong objects with analysis tests.
- data association at the right level.
- combination of estimates obtained with different sensors and local processors.

One of the key steps in data fusion is determining how to represent information and uncertainty. For this purpose, much of the data fusion literature follows Bayesian approaches, which use probabilities to represent degrees of belief. However, Hall [4] describes a list of problems associated with such techniques:
- difficulty in defining prior likelihoods.
- complexity when there are many potential hypotheses and many condition-dependent events.
- hypotheses must be mutually exclusive.
- the inability to describe uncertainty in decisions.

To deal with the last problem, Dempster [11] and later Shafer [12] generalized the traditional Bayesian belief model to allow an explicit representation of uncertainty with no model for the sources to fuse. On the other hand, the effect of one sensor on another, also known as data correlation, has been studied in depth. Some authors use measurement reconstruction [8], a technique which can be used in a global fusion node that compares the remote estimates received with its own version of the global estimate. Julier and Uhlmann proposed the Covariance Intersection (CI) algorithm [13]. CI solves the problem of correlated inputs, but it is undefined for inconsistent inputs.
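The CI rule just mentioned can be sketched in a few lines of Python. This is a minimal illustration of the standard CI equations, not the fusion rule used later in this paper; the fixed omega weight is an assumption (in practice omega is often chosen to minimize the trace or determinant of the fused covariance).

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, omega=0.5):
    """Fuse two estimates with unknown cross-correlation via Covariance
    Intersection: P^-1 = w*P1^-1 + (1-w)*P2^-1."""
    P1_inv = np.linalg.inv(P1)
    P2_inv = np.linalg.inv(P2)
    P_inv = omega * P1_inv + (1.0 - omega) * P2_inv
    P = np.linalg.inv(P_inv)
    # Fused state is the covariance-weighted combination of both estimates.
    x = P @ (omega * P1_inv @ x1 + (1.0 - omega) * P2_inv @ x2)
    return x, P

# Two consistent position estimates of the same target on the ground plane.
x_a = np.array([2.0, 3.0]); P_a = np.diag([0.5, 0.5])
x_b = np.array([2.4, 2.8]); P_b = np.diag([1.0, 1.0])
x_f, P_f = covariance_intersection(x_a, P_a, x_b, P_b)
```

Note that the fused covariance never understates uncertainty regardless of the (unknown) correlation between the inputs, which is precisely why CI is attractive for distributed fusion nodes.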
To solve this, Uhlmann developed Covariance Union [14] to handle inconsistent sources and achieve robust fusion. In vision applications, Snidaro et al. [15] proposed a track-to-track scheme without feedback to combine data from different sensors. They also proposed a confidence measure, called the appearance ratio (AR), to automatically control the fusion process according to the performance of the sensors [16]. In [17] the authors applied fuzzy and neuro-fuzzy techniques in a multi-target tracking video system in order to weigh update decisions for both the trajectories and the shapes estimated for the targets. The main objective of this contribution is an accurate and robust fusion process to track people with a multi-camera surveillance system embedded in a multi-agent architecture. The overlapping regions of the cameras, and their calibration with respect to a common reference system, allow

the fusion node to correct measurements in order to improve the tracking accuracy of the surveillance system. Indeed, an operator could easily select the most suitable visual sensor for any situation, but having a system that automatically picks the right camera or set of cameras is less trivial. In the next section we present the multi-agent architecture; section III explains the two-phase fusion algorithm; section IV covers the experiments and results; finally, section V discusses the conclusions of the research.

II. MULTI-AGENT ARCHITECTURE

Using a multi-agent architecture for video surveillance has several advantages [19] [18] [20]. First of all, the loosely coupled nature of a multi-agent architecture allows more flexibility in the communication processes, and the ability to assign responsibilities to each agent is ideal for solving complex tasks in a surveillance system. These complex tasks involve mechanisms such as coordination, dynamic configuration and cooperation, which are widely studied in the multi-agent community. Other ideas from multi-agent systems and distributed artificial intelligence, for example dynamic role distribution, could also be applied [9]. In our system, the data fusion process is carried out by the fusion agent of the multi-agent architecture depicted in Figure 1. The intersection region between cameras is used to track targets as they transit between different fields of view, to fuse the output and compute the corrections.

Figure 2. Multi-camera geometry for global coverage

A. Surveillance-sensor agent: Synchronization

Each surveillance-sensor agent S_i acquires images I(x, y) at a certain frame rate V_i. The tracking process provides, for each target T_j, an associated track vector of features X̂_Si,Tj[n], containing the numeric description of its features and state (location, velocity, dimensions, etc.) and an associated error covariance matrix P̂_Si,Tj[n].
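The track report exchanged between a surveillance-sensor agent and its fusion agent can be pictured as a small record holding the feature vector and its covariance. The field names and layout below are illustrative assumptions, not the paper's actual message format.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class TrackReport:
    """Track report sent by surveillance-sensor agent S_i for target T_j.

    Hypothetical structure: the paper only states that a feature vector
    and its error covariance matrix are transmitted.
    """
    sensor_id: int
    target_id: int
    frame: int          # frame index n
    x: np.ndarray       # feature vector X̂[n]: location, velocity, dimensions...
    P: np.ndarray       # associated error covariance matrix P̂[n]

# Example report: 2D position plus 2D velocity and a diagonal covariance.
report = TrackReport(sensor_id=1, target_id=3, frame=42,
                     x=np.array([2.0, 3.0, 0.1, -0.2]),
                     P=np.eye(4) * 0.25)
```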
Usually, video frame grabbers (with A/D conversion in the case of analog cameras, or directly with digital cameras) provide a sequence of frames f_n, which can be assigned a time stamp by knowing the initial time of capture, t_0, and the grabbing rate, V_i (frames per second):

f_n = V_i (t - t_0)    (1)

Although external time stamps fix the problem of numbering each frame, the clocks of the different machines which generate the time stamps must be synchronized. Using the Network Time Protocol (NTP) [22] as an external clock source to discipline the local clock of each machine is one way to solve the local clock differences.

Figure 1. MAS architecture

In the multi-agent architecture there are several surveillance-sensor agents which track all the targets and send vectors of features and their associated covariance matrices to their respective fusion agent. Each surveillance-sensor agent coordinates with the other surveillance-sensor agents in order to improve surveillance quality. The fusion agent integrates all the surveillance-sensor agents' data (vectors of features and the associated covariance matrices) for the targets. We consider that the cameras in the surveillance system are deployed so that their fields of view partially overlap. Figure 2 shows a possible geometry. This deployment provides the advantages of redundancy and smooth transitions in the overlapped areas, which may be affordable given the current low cost of equipment and processors.

B. Surveillance-sensor agent: Calibration and Correspondence

Camera calibration is the projection from the local space to the common representation in central coordinates. Correspondence between multiple cameras involves finding, at the same time instant, the correspondences between objects in the different image sequences. So, each camera in the surveillance system is assumed to measure the location of mobile targets within its field of view with respect to a common reference system.
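The frame-numbering relation of equation (1) amounts to one line of code; the sketch below simply applies it, assuming synchronized clocks (e.g. via NTP) so that t and t_0 are comparable across machines.

```python
def frame_index(t, t0, rate):
    """Frame number for wall-clock time t (seconds), following equation (1):
    f_n = V_i * (t - t0), with capture start t0 and grabbing rate V_i (fps)."""
    return int(rate * (t - t0))

# A camera grabbing at 25 fps whose capture started at t0 = 100.0 s:
# 8 seconds later it is delivering frame 200.
f = frame_index(108.0, 100.0, 25)
```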
This is a mandatory step, since all cameras must use the same metrics during the cooperative process. To calibrate the multiple cameras we use Tsai's calibration method [23]. As Khan et al. [25] [27], we use the points located on the feet to match people across multiple views, based on the homography constraint defined by the ground plane (see Figure 3). We project the centroid of each blob onto the local ground plane and then apply Tsai's calibration method in order to transform the local coordinates

and their associated error covariance matrices into the common representation.

Figure 3. Planar projection on the ground plane

Correspondence results are used to improve the tracking through the fusion of consistent trajectories between cameras. Once the correspondence problem is solved, the aim is to improve the tracking results of each camera by using the tracking results of the other cameras. This means that the data fusion process makes robust tracking possible even when the position of the person is affected by shadows or occlusions.

III. FUSION AGENT: FUSION ALGORITHM

If several surveillance-sensor agents with different points of view track the same object, it may occur that one of these surveillance-sensor agents provides wrong information about the position of that object. Wrong information can arise for many reasons (the tracked object could be affected by shadow, communication errors, hardware failures, occlusions, changes in the illumination conditions, dew, etc.). Therefore, it is necessary to detect inconsistencies and problems before the fusion process.

Let N be the number of surveillance-sensor agents in the surveillance system and S be a subset of surveillance-sensor agents S = {S_1, S_2, ..., S_i} which are monitoring the same area, with 3 <= i <= N. Thus, we suppose that we have at least 3 surveillance-sensor agents in {S} monitoring the same area. Let j be the number of common targets in {S} and T be the set of targets T = {T_1, T_2, ..., T_j}. For each target, the surveillance-sensor agent observes a vector of features. Each feature is an observable characteristic that could be taken into account in order to fuse the information; examples of features are position, velocity, color, size, shape, etc. Therefore, we have a vector of features X̂_Si,Tj[n] for each target j acquired by each surveillance-sensor agent i. F is the set of feature vectors F = {{X̂_S1,T1[n], X̂_S1,T2[n], ..., X̂_S1,Tj[n]}, {X̂_S2,T1[n], X̂_S2,T2[n], ..., X̂_S2,Tj[n]}, ..., {X̂_Si,T1[n], X̂_Si,T2[n], ..., X̂_Si,Tj[n]}}, and P is the set of associated error covariance matrices P = {{P̂_S1,T1[n], P̂_S1,T2[n], ..., P̂_S1,Tj[n]}, {P̂_S2,T1[n], P̂_S2,T2[n], ..., P̂_S2,Tj[n]}, ..., {P̂_Si,T1[n], P̂_Si,T2[n], ..., P̂_Si,Tj[n]}}. In the fusion process we suppose that the correspondence problem between the same target in different surveillance-sensor agents is solved. The fusion algorithm carried out by the fusion agent is a two-phase algorithm: in the first phase we select the consistent tracks acquired from each surveillance-sensor agent S_i, and subsequently the selected tracks are fused.

A. Phase 1. Consistent Tracks

In order to detect inconsistent tracks we use these two methods:
- Calculating the Mahalanobis distance (MD) [26] between each surveillance-sensor agent's (S_i) track features and the mean (M̂) of all candidate features:

MD = (X̂_Si,Tj[n] - M̂_Tj)^T (P̂_Si,Tj[n])^(-1) (X̂_Si,Tj[n] - M̂_Tj)    (2)

If the MD exceeds the λ threshold, the track is not taken into account in the second phase.
- Taking into account context information. The idea is to establish a priori spatial context information with which tracking measurements that make no sense (spatial tracking restrictions) are ruled out. For example, if a surveillance system is tracking a person inside an office, spatial context information could be the position of the desks in the office.

Algorithm Phase 1: Select Consistent Tracks of each camera

SelectConsistentTracks({S}, {T}, {F})
  for each S_i in {S}
    for each common target T_j in {T}
      M̂_Tj[n] <- CalculateMean(S_i, T_j, {F})
  initialize fusion set: {S_F} <- {S}
  for each S_i in {S}
    for each common target T_j in {T}
      if MD(X̂_Si,Tj[n], M̂_Tj[n]) > λ or IsOutOfContext(X̂_Si,Tj[n]) then
        {S_F} <- {S_F} \ {S_i}
  for each S_i in {S_F}
    for each common target T_j in {T}
      M̂_Tj[n] <- CalculateMean(S_i, T_j, {F})
  if {S_F} = ∅ then
    {S_F} <- {S_i with Min(MD)}
  return {S_F}

B. Phase 2. Fusion between consistent tracks

Once the consistent tracks are selected, the data fusion is performed according to the reliability of each track.
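The Phase 1 consistency gate described above can be sketched as follows. This is a simplified sketch: the dict-based interface is an assumption, the context check (IsOutOfContext) and the recomputation of the mean over the surviving set are omitted, and a shared identity covariance is used purely for illustration.

```python
import numpy as np

def select_consistent_tracks(tracks, covariances, lam):
    """Keep the tracks whose Mahalanobis distance to the mean of all
    candidate feature vectors stays within the threshold lambda.

    tracks:      sensor id -> feature vector for one common target
    covariances: sensor id -> that sensor's error covariance matrix
    """
    X = np.array(list(tracks.values()))
    mean = X.mean(axis=0)                    # M̂ over all candidates
    consistent = {}
    for sid, x in tracks.items():
        P_inv = np.linalg.inv(covariances[sid])
        d = x - mean
        md = float(d.T @ P_inv @ d)          # equation (2)
        if md <= lam:
            consistent[sid] = x
    return consistent

# Three sensors report the same target; sensor 3 is dragged away by a shadow.
tracks = {1: np.array([2.0, 3.0]),
          2: np.array([2.2, 2.9]),
          3: np.array([9.0, 9.0])}
covs = {sid: np.eye(2) for sid in tracks}
kept = select_consistent_tracks(tracks, covs, lam=12.0)
```

Note that the outlier still pulls the mean toward itself, so in practice the threshold (or a recomputed mean over the survivors, as in the pseudocode above) matters for borderline tracks.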
We take a simple fusion approach based on weighting each source of information according to its level of confidence (α). So we need to calculate a level of confidence α_Si,Tj[n] per target for every surveillance-sensor agent S_i in the consistent set {S_F}.
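The confidence weighting and fusion of Phase 2 can be sketched as follows. Reading the "inverse covariance value" as the scalar 1/trace(P) is our assumption (the paper does not fix the scalarization), as are the dict-based interface and the per-sensor heuristic dictionary h.

```python
import numpy as np

def fuse_consistent_tracks(tracks, covariances, h):
    """Weight each consistent sensor's feature vector by a normalized
    confidence alpha_i = 1/trace(P_i) + h_i, then fuse.

    tracks:      sensor id -> feature vector for one common target
    covariances: sensor id -> that sensor's error covariance matrix
    h:           sensor id -> operator-set heuristic value (may be empty)
    """
    alphas = {sid: 1.0 / np.trace(covariances[sid]) + h.get(sid, 0.0)
              for sid in tracks}
    total = sum(alphas.values())
    alphas = {sid: a / total for sid, a in alphas.items()}   # equation (3)
    fused = sum(alphas[sid] * x for sid, x in tracks.items())
    return fused, alphas

# Sensor 1 is more precise (smaller covariance), so it gets a larger weight.
tracks = {1: np.array([2.0, 3.0]), 3: np.array([2.2, 2.9])}
covs = {1: np.eye(2) * 0.5, 3: np.eye(2) * 1.0}
fused, alphas = fuse_consistent_tracks(tracks, covs, h={})
```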

Algorithm: Calculate Weighted Values

CalculateWeightValues({S_F}, {T}, {F})
  for each consistent camera S_i in {S_F}
    for each common target T_j in {T}
      α_Si,Tj[n] <- (P̂_Si,Tj[n])^(-1) + h_Si
  α_Norm[n] <- Normalize(α)
  return α_Norm

In the previous step we calculate the level of confidence for each consistent camera and each common target. This value is based on the inverse covariance value of each sensor and target, plus a heuristic value h per sensor that is set by a human operator of the surveillance system. The Normalize() function then rescales the values in order to satisfy equation 3:

1 = Σ_{i=1}^{|S_F|} α_Si,Tj[n],  for each T_j in {T}    (3)

Figure 4. Geometric patterns of the floor used for calibration purposes

The vector obtained is used in the second phase of the algorithm.

Algorithm Phase 2: Fusion between consistent tracks

FusionConsistentTracks({S_F}, {T}, {F}, α)
  for each consistent camera S_i in {S_F}
    for each common target T_j in {T}
      X̂_F,Tj[n] <- X̂_F,Tj[n] + α_Si,Tj[n] * X̂_Si,Tj[n]
  return X̂_F

With the previous algorithm we obtain the fused values of each target's features from all the consistent sensors.

IV. EXPERIMENTS AND PRELIMINARY RESULTS

We evaluate the proposed fusion algorithm using the open computer vision data set PETS 2006 [21] and our surveillance system implementation, based on the well-known Open Computer Vision (OpenCV) library [24]. Many algorithms [27] [28] [29] [30] have been evaluated on the PETS databases, and we think using well-known data sets is a good approach for evaluating data fusion algorithms. The resolution of all PETS 2006 sequences is PAL standard (768 x 576 pixels, 25 frames per second), compressed as JPEG image sequences. The input images for the experiments are frames 0 to 199 of cameras C_1 (Canon MV-1 1xCCD w/progressive scan), C_3 (Canon MV-1 1xCCD w/progressive scan) and C_4 (Sony DCR-PC1000E 3xCMOS). We do not consider the images of camera C_2 (Sony DCR-PC1000E 3xCMOS) because they have poor image quality due to its location.
These videos were taken in a real-world public setting, a railway station. The calibration data for each individual camera was given, computed from specific point locations on the geometric patterns of the station floor (see figure 4). In figure 5 we show the tracking trajectories for each local camera. Camera 1 (on the left) shows unstable tracking; camera 3 (in the center) presents the best local tracking results due to its location; camera 4 (on the right) presents stable but imprecise tracking due to the shadow of the tracked person. In figure 6 we show the foreground and the blob detection for each local tracking trajectory. Figures 7, 8 and 9 show the global trajectory positions of camera 1, camera 3 and camera 4, that is, the projection of the ground-plane coordinates (x, y) of the tracked object after applying the Tsai transformation [23]. As we can see in figure 7 (trajectories of camera 1), the trajectory positions are scattered due to the tracking problems. On the other hand, figure 8 (trajectories of camera 3) presents continuous and stable trajectory positions over the same frames. We can see another tracking problem in figure 9, where the tracking is affected by shadow. Therefore the fusion process deals with three different types of sources.

Figure 7. Global trajectories position of camera 1 (Frames: ).

In figure 10, we show the difference between the mean position values of the three cameras (((x_C1, y_C1) + (x_C3, y_C3) + (x_C4, y_C4))/3) and the tracking results using the previous fusion algorithm. This figure shows the improvement obtained by using the proposed fusion algorithm. In these experiments the algorithm detects an inconsistency between camera C_4 and the rest of the cameras. Therefore the fusion

Figure 5. Tracking trajectories in the ground plane of the same object from three different points of view (camera 1, camera 3 and camera 4 of PETS 2006 data set 1). From top to bottom, the frame numbers are, respectively, 91, 140 and 199.

Figure 8. Global trajectories position of camera 3 (Frames: ).

Figure 9. Global trajectories position of camera 4 (Frames: ).

algorithm only takes into account the trajectory positions from camera C_1 and camera C_3. In these preliminary results we only use position features and we do not take context information into account.

V. CONCLUSIONS AND FUTURE WORK

In this research we tackled the fusion of data from multiple cameras in a visual sensor multi-agent architecture. There are several approaches in the literature which deal with data fusion, although they have specific problems when applied to distributed vision systems. Our approach uses a visual sensor multi-agent architecture and a robust data fusion process carried out by a fusion agent. Our data fusion algorithm is a two-phase algorithm. In the first phase we detect inconsistent tracks, using normalized residuals between them together with context information. This makes it possible to eliminate inconsistent tracks; we then fuse each source according to its reliability. Our method has been tested on the PETS 2006 data set [21]. We have shown preliminary results of the fusion algorithm tested with three different sequences of images acquired from three different cameras of the PETS 2006 data set. The preliminary experimental results have shown the robustness of the algorithm. In future work, we will explore using context information in the experiments and apply the algorithm in real-time surveillance systems.

Figure 6. Foreground of tracking trajectories in the ground plane of the same object from three different points of view (camera 1, camera 3 and camera 4 of PETS 2006 data set 1). From top to bottom, the frame numbers are, respectively, 91, 140 and 199.

Figure 10. Comparison between mean values and fusion algorithm values (Frames: ).

ACKNOWLEDGMENT

The authors are supported by projects CICYT TSI , CICYT TEC and CAM MADRINET S-0505/TIC/0255.

REFERENCES
[1] J. Manyika and H. Durrant-Whyte. Data Fusion and Sensor Management: a Decentralized Information-Theoretic Approach. Ellis Horwood, 1994.
[2] E. Waltz and J. Llinas. Multisensor Data Fusion. Artech House Inc, Norwood, Massachusetts, US.
[3] D.L. Hall and J. Llinas. Handbook of Multisensor Data Fusion. CRC Press, Boca Raton.
[4] D. Hall. Mathematical Techniques in Multisensor Data Fusion. Artech House.
[5] G.W. Ng. Intelligent Systems: Fusion, Tracking and Control. Research Studies Press.
[6] L. Marchessoti, G. Vernazza and C. Regazzoni. A multicamera fusion framework for multiple occluding objects tracking in intelligent monitoring and sport viewing applications. IEEE International Conference on Image Processing, 2004.
[7] S. Mavandadi and P. Aarabi. Multi-sensor Information Fusion with Applications to Multi-Camera Systems. Proceedings of the 2004 IEEE International Conference on Systems, Man and Cybernetics, October 2004, The Hague, Netherlands.
[8] L.Y. Pao. Distributed Multisensor Fusion. Am. Inst. of Aeronautics and Astronautics.
[9] Gerhard Weiss. Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence. The MIT Press.
[10] L.Y. Pao and M. Kalandros. Algorithms for Distributed Architecture Tracking. Proc. Am. Control Conference, June.
[11] A.P. Dempster. A Generalisation of Bayesian Inference. J. Royal Statistical Soc., vol 30.
[12] G. Shafer. A Mathematical Theory of Evidence. Princeton Univ. Press.
[13] S.J. Julier and J.K. Uhlmann.
A Non-Divergent Algorithm in the Presence of Unknown Correlation. Proc. Am. Control Conference, June.
[14] J.K. Uhlmann. Covariance Consistency Methods for Fault-Tolerant Distributed Data Fusion. Information Fusion, vol 4.
[15] L. Snidaro, R. Niu, P.K. Varshney and G.L. Foresti. Sensor fusion for video surveillance. Proc. of the Seventh International Conference on Information Fusion, Stockholm, Sweden.
[16] L. Snidaro, R. Niu, P.K. Varshney and G.L. Foresti. Automatic Camera Selection and Fusion for Outdoor Surveillance under Changing Weather Conditions. Proc. of the IEEE Conference on Advanced Video and Signal Based Surveillance.
[17] J. Garcia, J.M. Molina, J.A. Besada and J.I. Portillo. A multitarget tracking video system based on fuzzy and neuro-fuzzy techniques. EURASIP Journal on Applied Signal Processing, volume 14.
[18] F. Castanedo, M.A. Patricio, J. García and J.M. Molina. Extending Surveillance Systems Capabilities Using BDI Cooperative Sensor Agents.

Proc. of the 4th ACM International Workshop on Video Surveillance and Sensor Networks, Santa Barbara, California, USA.
[19] F. Castanedo, M.A. Patricio, J. García and J.M. Molina. Coalition of Surveillance Agents: Cooperative Fusion Improvement in Surveillance Systems. Proc. of the 1st International Workshop on Agent-Based Ubiquitous Computing, Honolulu, Hawaii.
[20] M.A. Patricio, J. Carbó, O. Pérez and J. García. Multi-Agent Framework in Visual Sensor Networks. EURASIP Journal on Advances in Signal Processing.
[21] PETS 2006 Benchmark Data.
[22] Network Time Protocol.
[23] R.Y. Tsai. An Efficient and Accurate Camera Calibration Technique for 3D Machine Vision. Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, Miami Beach, FL.
[24] Open Computer Vision Library (OpenCV).
[25] S. Khan and M. Shah. Consistent Labeling of Tracked Objects in Multiple Cameras with Overlapping Fields of View. IEEE Trans. Pattern Analysis and Machine Intelligence, vol 25, no. 10, Oct.
[26] P.C. Mahalanobis. On the generalized distance in statistics. Proc. Natl. Inst. Sci., 12.
[27] S. Khan, O. Javed, and M. Shah. Tracking in Uncalibrated Cameras with Overlapping Fields of View. Proc. IEEE Int'l Workshop on Performance Evaluation of Tracking and Surveillance, Dec.
[28] J. Black and T. Ellis. Multi-Camera Image Tracking. Proc. IEEE Int'l Workshop on Performance Evaluation of Tracking and Surveillance, Dec.
[29] Q. Zhou and J.K. Aggarwal. Tracking and Classifying Moving Objects from Video. Proc. IEEE Int'l Workshop on Performance Evaluation of Tracking and Surveillance, Dec.
[30] L.M. Fuentes and S.A. Velastin. People Tracking in Surveillance Applications. Proc. IEEE Int'l Workshop on Performance Evaluation of Tracking and Surveillance, Dec.


More information

Subpixel Corner Detection for Tracking Applications using CMOS Camera Technology

Subpixel Corner Detection for Tracking Applications using CMOS Camera Technology Subpixel Corner Detection for Tracking Applications using CMOS Camera Technology Christoph Stock, Ulrich Mühlmann, Manmohan Krishna Chandraker, Axel Pinz Institute of Electrical Measurement and Measurement

More information

Observing people with multiple cameras

Observing people with multiple cameras First Short Spring School on Surveillance (S 4 ) May 17-19, 2011 Modena,Italy Course Material Observing people with multiple cameras Andrea Cavallaro Queen Mary University, London (UK) Observing people

More information

2003 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes

2003 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes 2003 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or

More information

Color Space Projection, Feature Fusion and Concurrent Neural Modules for Biometric Image Recognition

Color Space Projection, Feature Fusion and Concurrent Neural Modules for Biometric Image Recognition Proceedings of the 5th WSEAS Int. Conf. on COMPUTATIONAL INTELLIGENCE, MAN-MACHINE SYSTEMS AND CYBERNETICS, Venice, Italy, November 20-22, 2006 286 Color Space Projection, Fusion and Concurrent Neural

More information

Fundamental Matrices from Moving Objects Using Line Motion Barcodes

Fundamental Matrices from Moving Objects Using Line Motion Barcodes Fundamental Matrices from Moving Objects Using Line Motion Barcodes Yoni Kasten (B), Gil Ben-Artzi, Shmuel Peleg, and Michael Werman School of Computer Science and Engineering, The Hebrew University of

More information

Estimation of common groundplane based on co-motion statistics

Estimation of common groundplane based on co-motion statistics Estimation of common groundplane based on co-motion statistics Zoltan Szlavik, Laszlo Havasi 2, Tamas Sziranyi Analogical and Neural Computing Laboratory, Computer and Automation Research Institute of

More information

Light source estimation using feature points from specular highlights and cast shadows

Light source estimation using feature points from specular highlights and cast shadows Vol. 11(13), pp. 168-177, 16 July, 2016 DOI: 10.5897/IJPS2015.4274 Article Number: F492B6D59616 ISSN 1992-1950 Copyright 2016 Author(s) retain the copyright of this article http://www.academicjournals.org/ijps

More information

WATERMARKING FOR LIGHT FIELD RENDERING 1

WATERMARKING FOR LIGHT FIELD RENDERING 1 ATERMARKING FOR LIGHT FIELD RENDERING 1 Alper Koz, Cevahir Çığla and A. Aydın Alatan Department of Electrical and Electronics Engineering, METU Balgat, 06531, Ankara, TURKEY. e-mail: koz@metu.edu.tr, cevahir@eee.metu.edu.tr,

More information

Segmentation and Tracking of Partial Planar Templates

Segmentation and Tracking of Partial Planar Templates Segmentation and Tracking of Partial Planar Templates Abdelsalam Masoud William Hoff Colorado School of Mines Colorado School of Mines Golden, CO 800 Golden, CO 800 amasoud@mines.edu whoff@mines.edu Abstract

More information

Learning the Three Factors of a Non-overlapping Multi-camera Network Topology

Learning the Three Factors of a Non-overlapping Multi-camera Network Topology Learning the Three Factors of a Non-overlapping Multi-camera Network Topology Xiaotang Chen, Kaiqi Huang, and Tieniu Tan National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy

More information

Towards the completion of assignment 1

Towards the completion of assignment 1 Towards the completion of assignment 1 What to do for calibration What to do for point matching What to do for tracking What to do for GUI COMPSCI 773 Feature Point Detection Why study feature point detection?

More information

calibrated coordinates Linear transformation pixel coordinates

calibrated coordinates Linear transformation pixel coordinates 1 calibrated coordinates Linear transformation pixel coordinates 2 Calibration with a rig Uncalibrated epipolar geometry Ambiguities in image formation Stratified reconstruction Autocalibration with partial

More information

Research on an Adaptive Terrain Reconstruction of Sequence Images in Deep Space Exploration

Research on an Adaptive Terrain Reconstruction of Sequence Images in Deep Space Exploration , pp.33-41 http://dx.doi.org/10.14257/astl.2014.52.07 Research on an Adaptive Terrain Reconstruction of Sequence Images in Deep Space Exploration Wang Wei, Zhao Wenbin, Zhao Zhengxu School of Information

More information

A New Parameterless Credal Method to Track-to-Track Assignment Problem

A New Parameterless Credal Method to Track-to-Track Assignment Problem A New Parameterless Credal Method to Track-to-Track Assignment Problem Samir Hachour, François Delmotte, and David Mercier Univ. Lille Nord de France, UArtois, EA 3926 LGI2A, Béthune, France Abstract.

More information

Rigid ICP registration with Kinect

Rigid ICP registration with Kinect Rigid ICP registration with Kinect Students: Yoni Choukroun, Elie Semmel Advisor: Yonathan Aflalo 1 Overview.p.3 Development of the project..p.3 Papers p.4 Project algorithm..p.6 Result of the whole body.p.7

More information

Detecting Burnscar from Hyperspectral Imagery via Sparse Representation with Low-Rank Interference

Detecting Burnscar from Hyperspectral Imagery via Sparse Representation with Low-Rank Interference Detecting Burnscar from Hyperspectral Imagery via Sparse Representation with Low-Rank Interference Minh Dao 1, Xiang Xiang 1, Bulent Ayhan 2, Chiman Kwan 2, Trac D. Tran 1 Johns Hopkins Univeristy, 3400

More information

Robotics Programming Laboratory

Robotics Programming Laboratory Chair of Software Engineering Robotics Programming Laboratory Bertrand Meyer Jiwon Shin Lecture 8: Robot Perception Perception http://pascallin.ecs.soton.ac.uk/challenges/voc/databases.html#caltech car

More information

METRIC PLANE RECTIFICATION USING SYMMETRIC VANISHING POINTS

METRIC PLANE RECTIFICATION USING SYMMETRIC VANISHING POINTS METRIC PLANE RECTIFICATION USING SYMMETRIC VANISHING POINTS M. Lefler, H. Hel-Or Dept. of CS, University of Haifa, Israel Y. Hel-Or School of CS, IDC, Herzliya, Israel ABSTRACT Video analysis often requires

More information

Face Hallucination Based on Eigentransformation Learning

Face Hallucination Based on Eigentransformation Learning Advanced Science and Technology etters, pp.32-37 http://dx.doi.org/10.14257/astl.2016. Face allucination Based on Eigentransformation earning Guohua Zou School of software, East China University of Technology,

More information

Classification with Diffuse or Incomplete Information

Classification with Diffuse or Incomplete Information Classification with Diffuse or Incomplete Information AMAURY CABALLERO, KANG YEN Florida International University Abstract. In many different fields like finance, business, pattern recognition, communication

More information

Texture Image Segmentation using FCM

Texture Image Segmentation using FCM Proceedings of 2012 4th International Conference on Machine Learning and Computing IPCSIT vol. 25 (2012) (2012) IACSIT Press, Singapore Texture Image Segmentation using FCM Kanchan S. Deshmukh + M.G.M

More information

EE795: Computer Vision and Intelligent Systems

EE795: Computer Vision and Intelligent Systems EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 FDH 204 Lecture 14 130307 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Review Stereo Dense Motion Estimation Translational

More information

Stereo and Epipolar geometry

Stereo and Epipolar geometry Previously Image Primitives (feature points, lines, contours) Today: Stereo and Epipolar geometry How to match primitives between two (multiple) views) Goals: 3D reconstruction, recognition Jana Kosecka

More information

2564 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 19, NO. 10, OCTOBER 2010

2564 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 19, NO. 10, OCTOBER 2010 2564 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL 19, NO 10, OCTOBER 2010 Tracking and Activity Recognition Through Consensus in Distributed Camera Networks Bi Song, Member, IEEE, Ahmed T Kamal, Student

More information

Lecture 6 Stereo Systems Multi-view geometry

Lecture 6 Stereo Systems Multi-view geometry Lecture 6 Stereo Systems Multi-view geometry Professor Silvio Savarese Computational Vision and Geometry Lab Silvio Savarese Lecture 6-5-Feb-4 Lecture 6 Stereo Systems Multi-view geometry Stereo systems

More information

Hand-Eye Calibration from Image Derivatives

Hand-Eye Calibration from Image Derivatives Hand-Eye Calibration from Image Derivatives Abstract In this paper it is shown how to perform hand-eye calibration using only the normal flow field and knowledge about the motion of the hand. The proposed

More information

Task analysis based on observing hands and objects by vision

Task analysis based on observing hands and objects by vision Task analysis based on observing hands and objects by vision Yoshihiro SATO Keni Bernardin Hiroshi KIMURA Katsushi IKEUCHI Univ. of Electro-Communications Univ. of Karlsruhe Univ. of Tokyo Abstract In

More information

A Statistical Consistency Check for the Space Carving Algorithm.

A Statistical Consistency Check for the Space Carving Algorithm. A Statistical Consistency Check for the Space Carving Algorithm. A. Broadhurst and R. Cipolla Dept. of Engineering, Univ. of Cambridge, Cambridge, CB2 1PZ aeb29 cipolla @eng.cam.ac.uk Abstract This paper

More information

Structured Light II. Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov

Structured Light II. Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov Structured Light II Johannes Köhler Johannes.koehler@dfki.de Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov Introduction Previous lecture: Structured Light I Active Scanning Camera/emitter

More information

Car tracking in tunnels

Car tracking in tunnels Czech Pattern Recognition Workshop 2000, Tomáš Svoboda (Ed.) Peršlák, Czech Republic, February 2 4, 2000 Czech Pattern Recognition Society Car tracking in tunnels Roman Pflugfelder and Horst Bischof Pattern

More information

Fusion of Radar and EO-sensors for Surveillance

Fusion of Radar and EO-sensors for Surveillance of Radar and EO-sensors for Surveillance L.J.H.M. Kester, A. Theil TNO Physics and Electronics Laboratory P.O. Box 96864, 2509 JG The Hague, The Netherlands kester@fel.tno.nl, theil@fel.tno.nl Abstract

More information

Understanding Tracking and StroMotion of Soccer Ball

Understanding Tracking and StroMotion of Soccer Ball Understanding Tracking and StroMotion of Soccer Ball Nhat H. Nguyen Master Student 205 Witherspoon Hall Charlotte, NC 28223 704 656 2021 rich.uncc@gmail.com ABSTRACT Soccer requires rapid ball movements.

More information

Moving Object Detection and Tracking for Video Survelliance

Moving Object Detection and Tracking for Video Survelliance Moving Object Detection and Tracking for Video Survelliance Ms Jyoti J. Jadhav 1 E&TC Department, Dr.D.Y.Patil College of Engineering, Pune University, Ambi-Pune E-mail- Jyotijadhav48@gmail.com, Contact

More information

Consistent Labeling of Tracked Objects in Multiple Cameras with Overlapping Fields of View

Consistent Labeling of Tracked Objects in Multiple Cameras with Overlapping Fields of View Consistent Labeling of Tracked Objects in Multiple Cameras with Overlapping Fields of View Sohaib Khan Department of Computer Science Lahore Univ. of Management Sciences Lahore, Pakistan sohaib@lums.edu.pk

More information

A virtual tour of free viewpoint rendering

A virtual tour of free viewpoint rendering A virtual tour of free viewpoint rendering Cédric Verleysen ICTEAM institute, Université catholique de Louvain, Belgium cedric.verleysen@uclouvain.be Organization of the presentation Context Acquisition

More information

Footprint Recognition using Modified Sequential Haar Energy Transform (MSHET)

Footprint Recognition using Modified Sequential Haar Energy Transform (MSHET) 47 Footprint Recognition using Modified Sequential Haar Energy Transform (MSHET) V. D. Ambeth Kumar 1 M. Ramakrishnan 2 1 Research scholar in sathyabamauniversity, Chennai, Tamil Nadu- 600 119, India.

More information

Multiple View Geometry

Multiple View Geometry Multiple View Geometry CS 6320, Spring 2013 Guest Lecture Marcel Prastawa adapted from Pollefeys, Shah, and Zisserman Single view computer vision Projective actions of cameras Camera callibration Photometric

More information

Incremental Observable-Area Modeling for Cooperative Tracking

Incremental Observable-Area Modeling for Cooperative Tracking Incremental Observable-Area Modeling for Cooperative Tracking Norimichi Ukita Takashi Matsuyama Department of Intelligence Science and Technology Graduate School of Informatics, Kyoto University Yoshidahonmachi,

More information

Non-rigid body Object Tracking using Fuzzy Neural System based on Multiple ROIs and Adaptive Motion Frame Method

Non-rigid body Object Tracking using Fuzzy Neural System based on Multiple ROIs and Adaptive Motion Frame Method Proceedings of the 2009 IEEE International Conference on Systems, Man, and Cybernetics San Antonio, TX, USA - October 2009 Non-rigid body Object Tracking using Fuzzy Neural System based on Multiple ROIs

More information

Face Recognition At-a-Distance Based on Sparse-Stereo Reconstruction

Face Recognition At-a-Distance Based on Sparse-Stereo Reconstruction Face Recognition At-a-Distance Based on Sparse-Stereo Reconstruction Ham Rara, Shireen Elhabian, Asem Ali University of Louisville Louisville, KY {hmrara01,syelha01,amali003}@louisville.edu Mike Miller,

More information

Chapter 7: Computation of the Camera Matrix P

Chapter 7: Computation of the Camera Matrix P Chapter 7: Computation of the Camera Matrix P Arco Nederveen Eagle Vision March 18, 2008 Arco Nederveen (Eagle Vision) The Camera Matrix P March 18, 2008 1 / 25 1 Chapter 7: Computation of the camera Matrix

More information

Decision Fusion using Dempster-Schaffer Theory

Decision Fusion using Dempster-Schaffer Theory Decision Fusion using Dempster-Schaffer Theory Prof. D. J. Parish High Speed networks Group Department of Electronic and Electrical Engineering D.J.Parish@lboro.ac.uk Loughborough University Overview Introduction

More information

Illumination-Robust Face Recognition based on Gabor Feature Face Intrinsic Identity PCA Model

Illumination-Robust Face Recognition based on Gabor Feature Face Intrinsic Identity PCA Model Illumination-Robust Face Recognition based on Gabor Feature Face Intrinsic Identity PCA Model TAE IN SEOL*, SUN-TAE CHUNG*, SUNHO KI**, SEONGWON CHO**, YUN-KWANG HONG*** *School of Electronic Engineering

More information

Centre for Digital Image Measurement and Analysis, School of Engineering, City University, Northampton Square, London, ECIV OHB

Centre for Digital Image Measurement and Analysis, School of Engineering, City University, Northampton Square, London, ECIV OHB HIGH ACCURACY 3-D MEASUREMENT USING MULTIPLE CAMERA VIEWS T.A. Clarke, T.J. Ellis, & S. Robson. High accuracy measurement of industrially produced objects is becoming increasingly important. The techniques

More information

Estimating the wavelength composition of scene illumination from image data is an

Estimating the wavelength composition of scene illumination from image data is an Chapter 3 The Principle and Improvement for AWB in DSC 3.1 Introduction Estimating the wavelength composition of scene illumination from image data is an important topics in color engineering. Solutions

More information

Dynamic Time Warping for Binocular Hand Tracking and Reconstruction

Dynamic Time Warping for Binocular Hand Tracking and Reconstruction Dynamic Time Warping for Binocular Hand Tracking and Reconstruction Javier Romero, Danica Kragic Ville Kyrki Antonis Argyros CAS-CVAP-CSC Dept. of Information Technology Institute of Computer Science KTH,

More information

Project Updates Short lecture Volumetric Modeling +2 papers

Project Updates Short lecture Volumetric Modeling +2 papers Volumetric Modeling Schedule (tentative) Feb 20 Feb 27 Mar 5 Introduction Lecture: Geometry, Camera Model, Calibration Lecture: Features, Tracking/Matching Mar 12 Mar 19 Mar 26 Apr 2 Apr 9 Apr 16 Apr 23

More information

LATEST TRENDS on APPLIED MATHEMATICS, SIMULATION, MODELLING

LATEST TRENDS on APPLIED MATHEMATICS, SIMULATION, MODELLING 3D surface reconstruction of objects by using stereoscopic viewing Baki Koyuncu, Kurtuluş Küllü bkoyuncu@ankara.edu.tr kkullu@eng.ankara.edu.tr Computer Engineering Department, Ankara University, Ankara,

More information

Model-based segmentation and recognition from range data

Model-based segmentation and recognition from range data Model-based segmentation and recognition from range data Jan Boehm Institute for Photogrammetry Universität Stuttgart Germany Keywords: range image, segmentation, object recognition, CAD ABSTRACT This

More information

LEARNING NAVIGATION MAPS BY LOOKING AT PEOPLE

LEARNING NAVIGATION MAPS BY LOOKING AT PEOPLE LEARNING NAVIGATION MAPS BY LOOKING AT PEOPLE Roger Freitas,1 José Santos-Victor Mário Sarcinelli-Filho Teodiano Bastos-Filho Departamento de Engenharia Elétrica, Universidade Federal do Espírito Santo,

More information

URBAN STRUCTURE ESTIMATION USING PARALLEL AND ORTHOGONAL LINES

URBAN STRUCTURE ESTIMATION USING PARALLEL AND ORTHOGONAL LINES URBAN STRUCTURE ESTIMATION USING PARALLEL AND ORTHOGONAL LINES An Undergraduate Research Scholars Thesis by RUI LIU Submitted to Honors and Undergraduate Research Texas A&M University in partial fulfillment

More information

Neuro-adaptive Formation Maintenance and Control of Nonholonomic Mobile Robots

Neuro-adaptive Formation Maintenance and Control of Nonholonomic Mobile Robots Proceedings of the International Conference of Control, Dynamic Systems, and Robotics Ottawa, Ontario, Canada, May 15-16 2014 Paper No. 50 Neuro-adaptive Formation Maintenance and Control of Nonholonomic

More information

Video Georegistration: Key Challenges. Steve Blask Harris Corporation GCSD Melbourne, FL 32934

Video Georegistration: Key Challenges. Steve Blask Harris Corporation GCSD Melbourne, FL 32934 Video Georegistration: Key Challenges Steve Blask sblask@harris.com Harris Corporation GCSD Melbourne, FL 32934 Definitions Registration: image to image alignment Find pixel-to-pixel correspondences between

More information

Human Tracking based on Multiple View Homography

Human Tracking based on Multiple View Homography Journal of Universal Computer Science, vol. 15, no. 13 (2009), 2463-2484 submitted: 31/10/08, accepted: 13/6/09, appeared: 1/7/09 J.UCS Human Tracking based on Multiple View Homography Dong-Wook Seo (University

More information

A Fast Linear Registration Framework for Multi-Camera GIS Coordination

A Fast Linear Registration Framework for Multi-Camera GIS Coordination A Fast Linear Registration Framework for Multi-Camera GIS Coordination Karthik Sankaranarayanan James W. Davis Dept. of Computer Science and Engineering Ohio State University Columbus, OH 4320 USA {sankaran,jwdavis}@cse.ohio-state.edu

More information

Fixed Point Probability Field for Complex Occlusion Handling

Fixed Point Probability Field for Complex Occlusion Handling Fixed Point Probability Field for Complex Occlusion Handling François Fleuret Richard Lengagne Pascal Fua École Polytechnique Fédérale de Lausanne CVLAB CH 1015 Lausanne, Switzerland {francois.fleuret,richard.lengagne,pascal.fua}@epfl.ch

More information

3D Computer Vision. Structured Light II. Prof. Didier Stricker. Kaiserlautern University.

3D Computer Vision. Structured Light II. Prof. Didier Stricker. Kaiserlautern University. 3D Computer Vision Structured Light II Prof. Didier Stricker Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de 1 Introduction

More information

Particle Tracking. For Bulk Material Handling Systems Using DEM Models. By: Jordan Pease

Particle Tracking. For Bulk Material Handling Systems Using DEM Models. By: Jordan Pease Particle Tracking For Bulk Material Handling Systems Using DEM Models By: Jordan Pease Introduction Motivation for project Particle Tracking Application to DEM models Experimental Results Future Work References

More information

Geometrical Feature Extraction Using 2D Range Scanner

Geometrical Feature Extraction Using 2D Range Scanner Geometrical Feature Extraction Using 2D Range Scanner Sen Zhang Lihua Xie Martin Adams Fan Tang BLK S2, School of Electrical and Electronic Engineering Nanyang Technological University, Singapore 639798

More information

Switching Hypothesized Measurements: A Dynamic Model with Applications to Occlusion Adaptive Joint Tracking

Switching Hypothesized Measurements: A Dynamic Model with Applications to Occlusion Adaptive Joint Tracking Switching Hypothesized Measurements: A Dynamic Model with Applications to Occlusion Adaptive Joint Tracking Yang Wang Tele Tan Institute for Infocomm Research, Singapore {ywang, telctan}@i2r.a-star.edu.sg

More information

Tightly-Integrated Visual and Inertial Navigation for Pinpoint Landing on Rugged Terrains

Tightly-Integrated Visual and Inertial Navigation for Pinpoint Landing on Rugged Terrains Tightly-Integrated Visual and Inertial Navigation for Pinpoint Landing on Rugged Terrains PhD student: Jeff DELAUNE ONERA Director: Guy LE BESNERAIS ONERA Advisors: Jean-Loup FARGES Clément BOURDARIAS

More information

Globally Stabilized 3L Curve Fitting

Globally Stabilized 3L Curve Fitting Globally Stabilized 3L Curve Fitting Turker Sahin and Mustafa Unel Department of Computer Engineering, Gebze Institute of Technology Cayirova Campus 44 Gebze/Kocaeli Turkey {htsahin,munel}@bilmuh.gyte.edu.tr

More information

Real-Time Fusion of Multi-Focus Images for Visual Sensor Networks

Real-Time Fusion of Multi-Focus Images for Visual Sensor Networks Real-Time Fusion of Multi-Focus Images for Visual Sensor Networks Mohammad Bagher Akbari Haghighat, Ali Aghagolzadeh, and Hadi Seyedarabi Faculty of Electrical and Computer Engineering, University of Tabriz,

More information

POME A mobile camera system for accurate indoor pose

POME A mobile camera system for accurate indoor pose POME A mobile camera system for accurate indoor pose Paul Montgomery & Andreas Winter November 2 2016 2010. All rights reserved. 1 ICT Intelligent Construction Tools A 50-50 joint venture between Trimble

More information

An ICA based Approach for Complex Color Scene Text Binarization

An ICA based Approach for Complex Color Scene Text Binarization An ICA based Approach for Complex Color Scene Text Binarization Siddharth Kherada IIIT-Hyderabad, India siddharth.kherada@research.iiit.ac.in Anoop M. Namboodiri IIIT-Hyderabad, India anoop@iiit.ac.in

More information

Evaluation of Moving Object Tracking Techniques for Video Surveillance Applications

Evaluation of Moving Object Tracking Techniques for Video Surveillance Applications International Journal of Current Engineering and Technology E-ISSN 2277 4106, P-ISSN 2347 5161 2015INPRESSCO, All Rights Reserved Available at http://inpressco.com/category/ijcet Research Article Evaluation

More information

Definition, Detection, and Evaluation of Meeting Events in Airport Surveillance Videos

Definition, Detection, and Evaluation of Meeting Events in Airport Surveillance Videos Definition, Detection, and Evaluation of Meeting Events in Airport Surveillance Videos Sung Chun Lee, Chang Huang, and Ram Nevatia University of Southern California, Los Angeles, CA 90089, USA sungchun@usc.edu,

More information

DEPTH AND GEOMETRY FROM A SINGLE 2D IMAGE USING TRIANGULATION

DEPTH AND GEOMETRY FROM A SINGLE 2D IMAGE USING TRIANGULATION 2012 IEEE International Conference on Multimedia and Expo Workshops DEPTH AND GEOMETRY FROM A SINGLE 2D IMAGE USING TRIANGULATION Yasir Salih and Aamir S. Malik, Senior Member IEEE Centre for Intelligent

More information

ФУНДАМЕНТАЛЬНЫЕ НАУКИ. Информатика 9 ИНФОРМАТИКА MOTION DETECTION IN VIDEO STREAM BASED ON BACKGROUND SUBTRACTION AND TARGET TRACKING

ФУНДАМЕНТАЛЬНЫЕ НАУКИ. Информатика 9 ИНФОРМАТИКА MOTION DETECTION IN VIDEO STREAM BASED ON BACKGROUND SUBTRACTION AND TARGET TRACKING ФУНДАМЕНТАЛЬНЫЕ НАУКИ Информатика 9 ИНФОРМАТИКА UDC 6813 OTION DETECTION IN VIDEO STREA BASED ON BACKGROUND SUBTRACTION AND TARGET TRACKING R BOGUSH, S ALTSEV, N BROVKO, E IHAILOV (Polotsk State University

More information

A Survey of Light Source Detection Methods

A Survey of Light Source Detection Methods A Survey of Light Source Detection Methods Nathan Funk University of Alberta Mini-Project for CMPUT 603 November 30, 2003 Abstract This paper provides an overview of the most prominent techniques for light

More information

arxiv: v1 [cs.cv] 28 Sep 2018

arxiv: v1 [cs.cv] 28 Sep 2018 Camera Pose Estimation from Sequence of Calibrated Images arxiv:1809.11066v1 [cs.cv] 28 Sep 2018 Jacek Komorowski 1 and Przemyslaw Rokita 2 1 Maria Curie-Sklodowska University, Institute of Computer Science,

More information