A Multisensor Surveillance System for Automated Border Control (egate)

Workshop on Activity Monitoring by Multiple Distributed Sensing (AMMDS), in conjunction with the 10th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS 2013)

David Schreiber, Andreas Kriechbaum, Michael Rauter
AIT Austrian Institute of Technology, Video- and Security Technology, Donau-City-Straße 1, 1220 Vienna, Austria
david.schreiber@ait.ac.at

Abstract

This paper presents a multisensor surveillance system used inside an Automated Border Control (ABC) system, more specifically an egate. The system consists of two parts: person separation, which counts the number of persons inside the egate and ensures that no more than one passenger is present; and left luggage detection, which ensures that the passenger did not leave any item inside the egate. These tasks are performed using a top-view sensor mounted inside the egate, comprising a trinocular camera configuration: a monochrome stereo setup which delivers depth information, and a color camera mounted in between which captures color information. In contrast to existing ABC solutions, which mostly use electronic sensors (e.g. simple beam technology) for person separation and left item detection, we introduce vision-based technologies that elevate the security of such systems to a higher level while also increasing usability for the border guard. The system runs in real time and has been demonstrated and evaluated at the Vienna International Airport.

1. Introduction

Increasing worldwide travel capacities at airports pose new challenges in the area of border and security control. The proposed Automated Border Control system shall accelerate the border control process by increasing passenger throughput while maintaining the highest level of security.
The developed user-friendly, throughput-optimized, automated border crossing prototype, operating at the highest security level, is currently being demonstrated and evaluated at the Vienna International Airport (Terminal 2, non-Schengen arrivals). This work was co-funded by the Austrian Security Research Programme KIRAS, an initiative of the Federal Ministry for Transport, Innovation and Technology, Austria (bmvit). The developed egate is a specific type of ABC system, characterized by a one-step process and two doors. The developed system architecture is an interconnected application chain with the following components (see Fig. 1):
- a transit area with two doors, representing the physical barrier of the border;
- a passport reader for document authentication;
- video verification (1:1 face comparison in a live stream) for identity authentication;
- video surveillance for reliably counting the number of persons present inside the egate; in addition, left luggage detection ensures that the person exiting the egate did not leave behind any personal belongings;
- person counting in front of the egate, for optimizing the number of egates operating at any given moment;
- a connection to the national security database and the certificate server, both managed by the authority;
- a monitoring interface for trained border guards to visualize the process and traveler-specific data.

The process of using the egate in cooperation with the necessary technologies is depicted in Fig. 2. The traveler places his epassport on the passport reader, which then authenticates the epassport, including electronic and optical security checks. If reading has succeeded, the first door of the egate opens automatically, and the passenger walks through the egate. During his walk, live images of his face are captured and compared against the picture stored in the chip of his epassport. In addition, a security check is performed against the Schengen Information System (SIS).
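To illustrate the process flow just described, here is a toy state-machine sketch of the one-step, two-door egate process. All state names and the interface are ours, invented for illustration; they are not part of the deployed system.

```cpp
// Hypothetical sketch of the egate process: passport check opens door one,
// identity verification plus person separation gates door two, and the
// left-luggage check decides whether door two must re-open.
enum class EgateState { Idle, DoorOneOpen, InTransit, DoorTwoOpen, Done, Alarm };

struct Egate {
    EgateState state = EgateState::Idle;

    // ePassport authenticated (electronic and optical checks succeeded).
    void passportOk() {
        if (state == EgateState::Idle) state = EgateState::DoorOneOpen;
    }

    void passengerEntered() {
        if (state == EgateState::DoorOneOpen) state = EgateState::InTransit;
    }

    // Face verified against the chip AND person separation reports
    // exactly one person inside; otherwise alert the border guard.
    void identityOk(int personCount) {
        if (state != EgateState::InTransit) return;
        state = (personCount == 1) ? EgateState::DoorTwoOpen : EgateState::Alarm;
    }

    // Door-two opening triggers the left-luggage check; if an item remains,
    // the door re-opens so the passenger can retrieve it.
    void passengerExited(bool itemLeftBehind) {
        if (state != EgateState::DoorTwoOpen) return;
        state = itemLeftBehind ? EgateState::DoorTwoOpen : EgateState::Done;
    }
};
```

The piggy-backing case (two persons, one passport) surfaces here as the `Alarm` transition, which in the real system forwards an alert to the border guard.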
At the same time, a surveillance system (a top-view sensor) ensures that only a single person is present inside the egate (person separation). Once the identity of the (single) passenger has been authenticated, the second door opens automatically, and the passenger steps out of the egate. The opening of the second door activates the left luggage detection module. In case any item was left behind in the egate, the second door opens again, enabling the passenger to return and pick up his luggage.

Existing ABC solutions are still highly diverse: ABC systems currently on the market differ in a multitude of aspects, such as type of biometrics, topology (e.g. with/without kiosk), number of doors, overall system and process layout, guidance of users, etc. Considerable efforts have already been made by FRONTEX and a limited number of European member states to establish best practice guidelines and recommendations [4]. Most current systems use electronic sensors, e.g. simple beam technology, for person separation and left item detection. In this paper, we introduce vision-based technologies to elevate the security of such systems to a higher level, while also increasing usability for the border guard.

In this paper, we focus on the surveillance system inside the egate, which performs person separation and left luggage detection. We use an in-house developed sensor to extract color as well as depth information, employing a canonical stereo setup (monochrome cameras mounted in parallel) with a baseline of 0.2 m. A third, color camera is mounted in between and captures color information. The board-level industrial cameras have a USB 2 interface; the sensor resolution is 752x480 pixels, resampled to 608x328 (WVGA), with 8 bits per pixel. This trinocular camera setup is calibrated offline. The stereo matching process outputs depth data alongside a rectified color image, congruent to the depth image. Depth information is computed via the Census-based stereo matching algorithm described in [7], which is an explicit adaptation and optimization of the well-known Census transform with respect to embedded real-time systems in software. A detailed performance analysis of the algorithm is given in [7] for optimized reference implementations on various commercial off-the-shelf (COTS) platforms, e.g. a PC, a DSP and a GPU. The sensor is mounted in the egate in a top-view position, 279 cm above the ground, yielding a visual resolution of 0.97 cm per pixel on the ground floor plane and a depth resolution of 2 cm. At this resolution, the sensor delivers 15 fps (the bottleneck being the USB connection of the sensor to the PC via a single host controller).

The rest of this paper is organized as follows. Section 2 presents the person separation method, while Section 3 presents the left luggage detection algorithm. Each section includes a literature survey, an outline of the method, and experimental results. Section 4 contains our conclusions.

Fig. 1. The developed system architecture, with the following components: transit area, passport reader, video verification, video surveillance, connection to the national security database and the certificate server, monitoring interface.

Fig. 2. The process of using the egate in cooperation with the necessary technologies.

2. Person separation (CPU)

Human counting is a crucial part of the egate system. When multiple passengers try to pass through the egate simultaneously using a single passport, an alert must be forwarded to a border guard. To achieve usability in the real world, the false alarm rate must be kept low. In particular, the system should not confuse luggage or other objects for a person. By mounting the visual sensor in a top-view configuration, the problem of humans partially occluding each other is drastically reduced. Systems using traditional color cameras suffer from further difficulties, such as sensitivity to illumination changes and to shadows or reflections in the scene. Using depth information reduces, or even entirely removes, these types of problems.

Previous literature on human detection, tracking and counting using depth data employs varying setups, with both oblique- and top-view sensors. We first briefly outline methods employing depth information from oblique-view setups. In [11], foreground blobs are detected by background subtraction. The location of humans in the scene is estimated from the depth information of each foreground blob. Head detection is done without using stereo information, by applying a partial-ellipse fitting algorithm. In [9], the image is segmented into different disparity layers which are used in a contour fitting step; finally, a tracking step is performed. In [6], stereo-based human head detection is performed using scale-adaptive filters, and the mean-shift algorithm is used to localize heads in the likelihood map.
In [16], a method for human detection is presented which performs a scale-space search with a combination of histogram-of-oriented-depths features and color data. In [20], the Histogram of Depth Difference is proposed as a new feature descriptor, and a Support Vector Machine (SVM) classifier is trained with this descriptor. In [22], the Simplified Local Ternary Pattern feature descriptor is presented: the detection window is partitioned into non-overlapping blocks, and a histogram is built from these blocks.

Other approaches employ a top-view sensor mounting. In [1], a 3D volume of interest, at head height, is investigated to locate heads. The depth image is resampled from a vertical projection into an occupancy map to remove perspective distortions, and a Gaussian mixture model is applied to the occupancy map to locate the humans. In [21], the depth map is segmented into several height intervals, generating a binary image for each depth interval. Morphological operations, such as openings with circular structuring elements, are used to locate the human heads. Finally, the heads are tracked, with a Kalman filter employed for predicting the head positions. In [10], an adaptive background model is used to extract foreground regions, on which a spherical crust template is matched. The authors determine the number of heads and their position and height in a foreground blob. A blob separation step splits foreground blobs which contain more than one head. A tracker based on a set of Kalman filters is then fed with the detections.

2.1. Our method

Our system receives depth information provided by the top-view mounted sensor. Knowledge of the sensor's height relative to the floor enables us to determine the height of an object in 3D world coordinates. The depth image contains invalid pixels (no valid depth information available), with invalid depth values set to zero. The depth image contains depth values measured relative to the sensor. These are inverted so as to be relative to the egate's floor, and depth values which exceed the sensor's height relative to the floor are clipped. Thus, instead of searching for minima in the depth image, we search for maxima, followed by a non-maxima suppression algorithm.
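To make this preprocessing concrete, the following is a minimal sketch of our own (function names are illustrative): depth values are inverted into heights above the floor with clipping, and head candidates are then found as local maxima. The real system uses the optimized spiral-scan suppression of [12]; here a plain 3x3 maximum suffices to show the idea.

```cpp
#include <vector>
#include <cstddef>

constexpr float kSensorHeightCm = 279.0f;  // sensor mounting height above the floor

// depth[i] = distance from sensor in cm; 0 marks an invalid pixel.
// Returns height above the egate floor, clipped to [0, kSensorHeightCm].
std::vector<float> depthToHeight(const std::vector<float>& depth) {
    std::vector<float> height(depth.size(), 0.0f);
    for (size_t i = 0; i < depth.size(); ++i) {
        if (depth[i] <= 0.0f) continue;            // invalid pixel stays 0
        float h = kSensorHeightCm - depth[i];      // invert: floor-relative height
        if (h < 0.0f) h = 0.0f;                    // clip values beyond sensor height
        if (h > kSensorHeightCm) h = kSensorHeightCm;
        height[i] = h;
    }
    return height;
}

// 3x3 local maxima in a w x h height image (border excluded); returns flat
// indices. Plateau handling and the variable-scale rectangular window of our
// extension to [12] are omitted for brevity.
std::vector<int> localMaxima(const std::vector<float>& img, int w, int h,
                             float minHeightCm) {
    std::vector<int> peaks;
    for (int y = 1; y + 1 < h; ++y)
        for (int x = 1; x + 1 < w; ++x) {
            float v = img[y * w + x];
            if (v < minHeightCm) continue;         // too low to be a head
            bool isMax = true;
            for (int dy = -1; dy <= 1 && isMax; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    if ((dy || dx) && img[(y + dy) * w + (x + dx)] >= v) {
                        isMax = false;
                        break;
                    }
            if (isMax) peaks.push_back(y * w + x);
        }
    return peaks;
}
```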
Our optimized non-maxima suppression algorithm is an extension of a recent work reported in [12], where the algorithm locates 1-D peaks along the image's scan-lines and compares each of these peaks against its 2-D neighborhood in a spiral scan order. We have extended the first solution offered by [12] in three ways: first, we use a rectangular object window rather than a square one; second, we allow a variable-scale object window (due to perspective effects); finally, we detect plateaus in addition to regular peaks.

We use the following method for human detection, tracking and counting (for more details, see [14]). In the first step, we search the depth map for local maxima. The maxima are used as seed points for a head localization algorithm: the centers of the head candidates are found using a gradient climbing algorithm and handed over to the head detector. The head detector is based on an SVM classifier, trained on histogram-of-depth-difference features. In a final step, a tracking algorithm aggregates the detections over time and computes trajectories for the detected heads. Association costs for three different tracking strategies are compared, and the best association configuration is picked, with the Euclidean distance used as the distance metric. The strategies are: (1) nearest-neighbor association; (2) association of detections with the minimum number of candidate tracks first; (3) association of tracks with the minimum number of candidate detections first.

Note that our algorithm does not depend on background modeling for foreground/background segmentation. Therefore, it does not suffer from the typical shortcomings of background-based methods, e.g. a person who stands still for a while eventually being integrated into the background, or incorrect foreground segmentation caused by an abrupt short-term illumination change.

In Fig. 3, the resulting human trajectories of our algorithm are shown. As can be seen, even persons walking close to each other (piggy-backing) are detected and tracked. Each track is assigned a unique identifier. Furthermore, the calculated height of the persons in centimeters is shown, and in the upper left corner, the current person count is presented.

2.2. Implementation details and results

We used C++ to implement our algorithm. The Intel Integrated Performance Primitives library was used for fast image operations. For classification, LIBSVM [2] was used, employing a radial basis function kernel; the optimized kernel parameters were found via a grid search. The test system consisted of an Intel Xeon CPU with 4 physical and 4 virtual cores at 2.93 GHz and 12 GB RAM, running Windows 7 64-bit.

The runtime performance of our algorithm, averaged over 1000 frames, is shown in Table 1. The most time-consuming algorithmic part is the head hypothesis generation (1.77 msec). This is hardly surprising, since it involves both the maxima search and the gradient climbing algorithm; especially the maxima search, which scans the whole image, is computationally intensive. The computation time of the head classification is moderate, because the head hypothesis generation drastically reduces the number of head candidates; the remaining processing time for this step is about 1 msec. The depth value conversion takes 0.52 msec. As one can see from Table 1, the time consumption of the tracking step is negligible.

We have evaluated our detector on a test data set of 5184 positive and 7344 negative samples. A true positive detection rate of 0.93 and a true negative detection rate of 0.99 were achieved on the test data set (see [14]).
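As an illustration of the association step, strategy (1) above, greedy nearest-neighbor association under a Euclidean distance gate, can be sketched as follows. This is a simplified sketch of our own; the full system computes association costs for all three strategies and picks the best configuration.

```cpp
#include <vector>
#include <cmath>
#include <utility>

struct Pt { float x, y; };

// Euclidean distance, the metric used for association costs.
inline float dist(const Pt& a, const Pt& b) {
    return std::sqrt((a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y));
}

// Greedy nearest-neighbor association of head detections to predicted track
// positions. Returns (trackIdx, detIdx) pairs; unmatched detections would
// start new tracks in the full tracker. maxDist gates implausible matches.
std::vector<std::pair<int, int>> associate(const std::vector<Pt>& tracks,
                                           const std::vector<Pt>& dets,
                                           float maxDist) {
    std::vector<std::pair<int, int>> matches;
    std::vector<bool> used(dets.size(), false);
    for (int t = 0; t < (int)tracks.size(); ++t) {
        int best = -1;
        float bestD = maxDist;
        for (int d = 0; d < (int)dets.size(); ++d) {
            if (used[d]) continue;                 // each detection used once
            float dd = dist(tracks[t], dets[d]);
            if (dd <= bestD) { bestD = dd; best = d; }
        }
        if (best >= 0) {
            used[best] = true;
            matches.push_back({t, best});
        }
    }
    return matches;
}
```

Strategies (2) and (3) would differ only in the order in which detections and tracks are visited before the greedy matching.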

Fig. 3: Human trajectories and count computed by our system.

Table 1: Average runtime of the person separation algorithm.

Algorithmic step              | Computation time
Conversion of depth values    | 0.52 milliseconds
Generation of head hypothesis | 1.77 milliseconds
Head classification           | 1.13 milliseconds
Association/tracking          | 0.01 milliseconds
Total                         | 3.43 milliseconds

3. Left luggage detection (CPU-GPU)

Most of the proposed techniques for abandoned object detection rely on tracking information to detect drop-off events, while fusing information from multiple cameras [18]. Only a few techniques are concerned with abandoned object detection based on a single visual camera. [13] proposed a single-camera, non-tracking-based system which makes use of two backgrounds for the detection of stationary objects. The two backgrounds are constructed by sampling the input video at different frame rates (one for short-term and another for long-term events). The ObjectVideo surveillance system [19] keeps track of background regions which are stored right before they are covered by an abandoned object. In case the same object is removed, the stored region can be matched with the current frame to determine that the object was removed. In addition, the system relies on analyzing the edge energy associated with the boundaries of the foreground region, for both the current frame and the background model. In [18], a framework to detect abandoned and removed objects is presented, using a mixture of three Gaussians for each pixel in the image. It is assumed that the 1st Gaussian distribution captures the persistent pixels and represents the background image; repetitive variations and relatively stationary regions are updated to the 2nd Gaussian distribution; and the 3rd Gaussian represents the pixels with quick changes. Accordingly, if the weight of the 2nd Gaussian for a pixel is larger than a threshold, the pixel belongs to a static region.

We are not aware of any previous work which employs depth information in the context of abandoned object detection, nor any which combines intensity and depth information for this purpose. In fact, the restricted field of view of the egate's ground floor (180x82 cm) and the height of the top-view mounted sensor relative to the ground (279 cm) make it an ideal scenario for using accurate depth information. Moreover, we combine color and depth cues in order to detect left objects based on either color (e.g. a passport), or texture (e.g. a gray laptop having a color similar to the floor), or both cues. Our left luggage detection module is based on the fusion of two separate background subtraction modalities, namely color and depth. Both modalities employ the same GPU-based non-parametric background subtraction algorithm reported in [15], which is extended in the present paper to distinguish between moving and static objects.

3.1. GPU-based background subtraction

Most background modeling methods are pixel-based. However, the intensity values of a pixel cannot be modeled properly by a single unimodal distribution. To handle lighting changes, repetitive background motion, and the introduction or removal of objects from the scene, the Mixture of Gaussians (MoG) approach was introduced in [17], where each background pixel is modeled by a distribution composed of a fixed number of Gaussians. MoG has been widely incorporated into various algorithms, including color, gradient and dense depth data [5]. However, as the distribution of pixel values over time cannot be modeled accurately by a mixture of Gaussians, a non-parametric kernel density estimation method for pixel-wise background modeling was proposed [3]. This method, however, is very memory- and time-consuming. To overcome the computational limitation of this technique, [8] proposed a compressed non-parametric representation using a codebook model.
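A drastically simplified per-pixel codebook, in the spirit of [8] and [15], might look as follows. This is a toy sketch of our own, not the actual algorithm: the learning phase, code aging, and the controlled integration time of [15] are all omitted, and a scalar value stands in for the color distortion metric with brightness bounds.

```cpp
#include <vector>
#include <cmath>

// One code-word: a representative value and how often it has been matched.
struct Code { float value; int hits; };

// Per-pixel codebook: a sample matching an existing code within
// `sensitivity` is background; otherwise it is foreground and, if there is
// room, becomes a new code (a new mode of the pixel's distribution).
struct PixelModel {
    static constexpr int kMaxCodes = 6;   // 6 codes suffice even outdoors [15]
    std::vector<Code> codes;

    // Returns true if the sample is classified as background.
    bool update(float sample, float sensitivity) {
        for (Code& c : codes) {
            if (std::fabs(c.value - sample) <= sensitivity) {
                ++c.hits;                  // matched an existing code
                return true;               // background
            }
        }
        if ((int)codes.size() < kMaxCodes)
            codes.push_back({sample, 1});  // remember the new mode
        return false;                      // foreground
    }
};
```

In the full algorithm, codes lying in the tail of the distribution (low hit counts) would be classified as foreground rather than every unmatched sample.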
Samples at each pixel are clustered into a set of code-words, based on a color distortion metric together with brightness bounds. [15] presented a background modeling algorithm for a practical surveillance system that utilizes a compressed non-parametric representation, significantly simplifying the work of [8]. The algorithm is simple, efficient, yet robust, and was successfully ported to the GPU using CUDA, achieving well beyond real-time performance. Moreover, the time it takes until a stopped object is absorbed into the background (the integration time) is controllable by the user, and is stable regardless of the complexity of the scene. The distribution of a pixel's history is captured by a set of codes, where a maximal number of 6 codes is sufficient to model outdoor videos. Codes lying in the tail of the distribution are classified as foreground pixels.

3.2. Our method

Our left luggage detection module is based on the fusion of color- and depth-based background subtraction algorithms, both employing the same compressed non-parametric background subtraction method [15], extended in the present work to distinguish between moving and static objects. The two backgrounds are parameterized as described in [15], except for the sensitivity d of the depth-based background, which is set to 1 pixel (2 cm of depth). The depth image contains depth values measured relative to the sensor. These are inverted so as to be relative to the egate's ground floor, and depth values which exceed the sensor's height (279 cm) relative to the floor are clipped. The intensity and depth images are cropped to 62x192 sub-images, which contain only the floor area, with an image resolution of 0.97 cm per pixel.

The depth image contains invalid pixels with undefined depth values, set to zero. The temporal consistency of the invalid pixels depends on the scene. The gray carpet covering the egate's floor contains little texture and is susceptible to shadows and light reflections caused by the motion of passengers; hence, invalid pixels associated with the floor are occasionally set to non-zero values. To handle these fluctuations, we use a fuzzy definition of invalidness, maintaining an adaptive count of each pixel's history. A pixel is considered invalid if it has zero value in the current frame, or if it has reappeared as invalid often enough recently (above a threshold).

Each of the two backgrounds, color- and depth-based, produces its own foreground-background segmentation. Next, the algorithm distinguishes between moving and static objects for each of the two segmentations. Again, we maintain an adaptive count of each pixel's history: a foreground pixel is considered static provided that it has persistently reappeared as foreground often enough (above a threshold). The static foreground image is then filtered against small noisy areas using a median filter. Finally, the two static foreground images obtained for the color and depth cues are fused via a union operation. Next, blob extraction (connected component labeling) is performed on the fused static foreground image, and too-small foreground regions are rejected. To increase robustness even further, we synchronize the left luggage detection module with the egate's door signals: background updating is blocked during the period when the passenger is inside the egate, and detection of left items is activated for a short period of time after the person has left the egate.

3.3. Implementation details and results

Due to the constrained nature of the application (the restricted field of view and sensor height, the controlled internal illumination, the fusion of color and depth information, and the synchronization of the left item detection module with the egate process), we achieve very robust performance. In fact, we consider any region with an area as small as 9 pixels a valid left luggage detection, and are able to detect virtually all possible left items without being troubled by false alarms. We have tested left luggage detection by deliberately dropping off items in the egate, using 57 items of different size, shape, texture and color: laptop bags and trolleys, passports, small personal belongings, clothing, drinking and reading items. The only item which our system was not able to detect was a small transparent drinking bottle, without a cap, in a standing position. The same bottle was detected when lying on the floor (due to its label), or when full. Some detected items can be seen in Fig. 4.

Fig. 4: Examples of left luggage detection. Color image with detected bounding box (left), depth image (middle) and resulting fused static object foreground (right). From top to bottom: glove, empty bottle, passport, magazine, standing bottle, hat, laptop bag, trolley.

The false alarm rate was tested further, with persons passing through the egate and not leaving any items. A sequence of 231 egate processes yielded zero false alarms. We note that, due to data protection issues, we were not allowed to record video sequences while the egate was being continuously tested with real passengers; however, we hope to obtain more testing data shortly.
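The moving/static split and the color-depth fusion described in Section 3.2 can be sketched as follows. This is our own minimal version: the threshold is illustrative, and the median filtering and connected-component steps are omitted.

```cpp
#include <vector>
#include <cstdint>
#include <cstddef>

// Per-pixel persistence counter: a pixel that keeps reappearing as
// foreground (count >= staticThresh) is classified as static; any
// background observation resets its counter.
struct StaticDetector {
    std::vector<int> count;
    int staticThresh;

    StaticDetector(size_t numPixels, int thresh)
        : count(numPixels, 0), staticThresh(thresh) {}

    // fg: current binary foreground mask (1 = foreground).
    // Returns the mask of static foreground pixels.
    std::vector<uint8_t> update(const std::vector<uint8_t>& fg) {
        std::vector<uint8_t> statics(fg.size(), 0);
        for (size_t i = 0; i < fg.size(); ++i) {
            count[i] = fg[i] ? count[i] + 1 : 0;   // reset when background
            if (count[i] >= staticThresh) statics[i] = 1;
        }
        return statics;
    }
};

// Fusion of the color- and depth-based static masks via a union, so an item
// visible in either cue (e.g. a passport by color, a gray laptop by depth)
// is kept.
std::vector<uint8_t> fuseMasks(const std::vector<uint8_t>& colorStatic,
                               const std::vector<uint8_t>& depthStatic) {
    std::vector<uint8_t> out(colorStatic.size());
    for (size_t i = 0; i < colorStatic.size(); ++i)
        out[i] = colorStatic[i] | depthStatic[i];
    return out;
}
```

One `StaticDetector` instance per modality (color, depth) mirrors the two segmentations; the same counter pattern also serves the fuzzy invalid-pixel bookkeeping.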

The test system is specified in Section 2.2. The average runtime of the left luggage module is 4.5 msec for the optimized C++ implementation and 2.3 msec for the hybrid CPU-GPU implementation. For the hybrid version, the two background subtraction algorithms consume 0.3 msec, using the same GPU implementation as in [15]. The post-processing step (invalid pixel estimation, distinguishing between moving and static foreground objects, connected component labeling) is implemented in (non-optimized) C++ and consumes 2 msec.

4. Conclusions

This paper presented a hybrid CPU-GPU multisensor surveillance system used inside an egate, performing person separation and left luggage detection. The system is based on a top-view sensor mounted inside the egate, fusing color and depth information. In contrast to previously published work, our depth-based person separation method does not make use of a background subtraction algorithm, thus avoiding persons blending into the background and other segmentation errors. To the best of our knowledge, our left luggage detection method is the first to use depth information, and the first to fuse color and depth information for this task. Due to the constrained nature of the application, we achieve very robust performance. For the person separation task, a true positive detection rate of 0.93 and a true negative detection rate of 0.99 were achieved, while for the left luggage task, we consider any region with an area as small as 9 pixels a valid left luggage detection, and are able to detect virtually all possible left items without being troubled by false alarms. The person separation module runs on average at 286 fps (C++), while the left luggage detection runs on average at 435 fps (CPU-GPU).

References

[1] D. Beymer. Person counting using stereo. In Proceedings of the Workshop on Human Motion.
[2] C.-C. Chang and C.-J. Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2:27:1-27:27, 2011.
[3] A. Elgammal, D. Harwood and L.S. Davis. Non-parametric model for background subtraction. ECCV, 2000.
[4] FRONTEX: Best Practice Guidelines on the Design, Deployment and Operation of Automated Border Crossing Systems. Warsaw, March 2011, Release 1.1.
[5] M. Harville. A framework for high-level feedback to adaptive, per-pixel, mixture-of-Gaussians background models. ECCV, vol. 3, 2002.
[6] X. Huang, L. Li, and T. Sim. Stereo-based human head detection from crowd scenes. ICIP, vol. 2.
[7] M. Humenberger, C. Zinner, M. Weber, W. Kubinger and M. Vincze. A fast stereo matching algorithm suitable for embedded real-time systems. CVIU, Vol. 114, Issue 11, November 2010.
[8] K. Kim, T.H. Chalidabhongse, D. Harwood and L. Davis. Real-time foreground-background segmentation using codebook model. Journal of Real-Time Imaging, Vol. 11(3), 2005.
[9] R. Luo and Y. Guo. Real-time stereo tracking of multiple moving heads. ICCV Workshop on Recognition, Analysis, and Tracking of Faces and Gestures in Real-Time Systems, pages 55-59.
[10] T. van Oosterhout, S. Bakkes, and B.J.A. Kröse. Head detection in stereo data for people counting and segmentation. 6th International Conference on Computer Vision Theory and Applications, 2011.
[11] S. Park and J. Aggarwal. Head segmentation and head orientation in 3D space for pose estimation of multiple people. 4th IEEE Southwest Symposium on Image Analysis and Interpretation.
[12] T. Pham. Non-maximum suppression using fewer than two comparisons per pixel. In Advanced Concepts for Intelligent Vision Systems, volume 6474, 2010.
[13] F. Porikli. Detection of temporarily static regions by processing video at different frame rates. AVSS, 2007.
[14] M. Rauter. Reliable human detection and tracking in top-view depth images. 3rd International Workshop on Human Activity Understanding from 3D Data (CVPR Workshops), 2013.
[15] D. Schreiber and M. Rauter. GPU-based non-parametric background subtraction for a practical surveillance system. In ECV Workshop, CVPR.
[16] L. Spinello and K. Arras. People detection in RGB-D data. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2011.
[17] C. Stauffer and W.E.L. Grimson. Adaptive background mixture models for real-time tracking. CVPR, vol. 2, 1999.
[18] Y.L. Tian, R.S. Feris, H. Liu, A. Hampapur and M.T. Sun. Robust detection of abandoned and removed objects in complex surveillance videos. IEEE Transactions on Systems, Man and Cybernetics, Part C: Applications and Reviews, Vol. 41, No. 5, 2011.
[19] P.L. Venetianer, Z. Zhang, W. Yin and A.J. Lipton. Stationary target detection using the ObjectVideo surveillance system. AVSS, 2007.
[20] S. Wu, S. Yu, and W. Chen. An attempt to pedestrian detection in depth images. IVS.
[21] T. Yahiaoui, C. Meurie, L. Khoudour, and F. Cabestaing. A people counting system based on dense and close stereovision. 3rd International Conference on Image and Signal Processing, pages 59-66.
[22] S. Yu, S. Wu, and L. Wang. SLTP: A fast descriptor for people detection in depth images. In AVSS, pages 43-47, 2011.


More information

Occlusion Detection of Real Objects using Contour Based Stereo Matching

Occlusion Detection of Real Objects using Contour Based Stereo Matching Occlusion Detection of Real Objects using Contour Based Stereo Matching Kenichi Hayashi, Hirokazu Kato, Shogo Nishida Graduate School of Engineering Science, Osaka University,1-3 Machikaneyama-cho, Toyonaka,

More information

Automatic Shadow Removal by Illuminance in HSV Color Space

Automatic Shadow Removal by Illuminance in HSV Color Space Computer Science and Information Technology 3(3): 70-75, 2015 DOI: 10.13189/csit.2015.030303 http://www.hrpub.org Automatic Shadow Removal by Illuminance in HSV Color Space Wenbo Huang 1, KyoungYeon Kim

More information

Real-Time Human Detection using Relational Depth Similarity Features

Real-Time Human Detection using Relational Depth Similarity Features Real-Time Human Detection using Relational Depth Similarity Features Sho Ikemura, Hironobu Fujiyoshi Dept. of Computer Science, Chubu University. Matsumoto 1200, Kasugai, Aichi, 487-8501 Japan. si@vision.cs.chubu.ac.jp,

More information

Scene Text Detection Using Machine Learning Classifiers

Scene Text Detection Using Machine Learning Classifiers 601 Scene Text Detection Using Machine Learning Classifiers Nafla C.N. 1, Sneha K. 2, Divya K.P. 3 1 (Department of CSE, RCET, Akkikkvu, Thrissur) 2 (Department of CSE, RCET, Akkikkvu, Thrissur) 3 (Department

More information

Connected Component Analysis and Change Detection for Images

Connected Component Analysis and Change Detection for Images Connected Component Analysis and Change Detection for Images Prasad S.Halgaonkar Department of Computer Engg, MITCOE Pune University, India Abstract Detection of the region of change in images of a particular

More information

Definition, Detection, and Evaluation of Meeting Events in Airport Surveillance Videos

Definition, Detection, and Evaluation of Meeting Events in Airport Surveillance Videos Definition, Detection, and Evaluation of Meeting Events in Airport Surveillance Videos Sung Chun Lee, Chang Huang, and Ram Nevatia University of Southern California, Los Angeles, CA 90089, USA sungchun@usc.edu,

More information

Background subtraction in people detection framework for RGB-D cameras

Background subtraction in people detection framework for RGB-D cameras Background subtraction in people detection framework for RGB-D cameras Anh-Tuan Nghiem, Francois Bremond INRIA-Sophia Antipolis 2004 Route des Lucioles, 06902 Valbonne, France nghiemtuan@gmail.com, Francois.Bremond@inria.fr

More information

Class 3: Advanced Moving Object Detection and Alert Detection Feb. 18, 2008

Class 3: Advanced Moving Object Detection and Alert Detection Feb. 18, 2008 Class 3: Advanced Moving Object Detection and Alert Detection Feb. 18, 2008 Instructor: YingLi Tian Video Surveillance E6998-007 Senior/Feris/Tian 1 Outlines Moving Object Detection with Distraction Motions

More information

A Background Modeling Approach Based on Visual Background Extractor Taotao Liu1, a, Lin Qi2, b and Guichi Liu2, c

A Background Modeling Approach Based on Visual Background Extractor Taotao Liu1, a, Lin Qi2, b and Guichi Liu2, c 4th International Conference on Mechatronics, Materials, Chemistry and Computer Engineering (ICMMCCE 2015) A Background Modeling Approach Based on Visual Background Extractor Taotao Liu1, a, Lin Qi2, b

More information

International Journal of Modern Engineering and Research Technology

International Journal of Modern Engineering and Research Technology Volume 4, Issue 3, July 2017 ISSN: 2348-8565 (Online) International Journal of Modern Engineering and Research Technology Website: http://www.ijmert.org Email: editor.ijmert@gmail.com A Novel Approach

More information

Real-time Detection of Illegally Parked Vehicles Using 1-D Transformation

Real-time Detection of Illegally Parked Vehicles Using 1-D Transformation Real-time Detection of Illegally Parked Vehicles Using 1-D Transformation Jong Taek Lee, M. S. Ryoo, Matthew Riley, and J. K. Aggarwal Computer & Vision Research Center Dept. of Electrical & Computer Engineering,

More information

Multi-Camera Calibration, Object Tracking and Query Generation

Multi-Camera Calibration, Object Tracking and Query Generation MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.merl.com Multi-Camera Calibration, Object Tracking and Query Generation Porikli, F.; Divakaran, A. TR2003-100 August 2003 Abstract An automatic object

More information

Hybrid Cone-Cylinder Codebook Model for Foreground Detection with Shadow and Highlight Suppression

Hybrid Cone-Cylinder Codebook Model for Foreground Detection with Shadow and Highlight Suppression Hybrid Cone-Cylinder Codebook Model for Foreground Detection with Shadow and Highlight Suppression Anup Doshi and Mohan Trivedi University of California, San Diego Presented by: Shaurya Agarwal Motivation

More information

A Low Power, High Throughput, Fully Event-Based Stereo System: Supplementary Documentation

A Low Power, High Throughput, Fully Event-Based Stereo System: Supplementary Documentation A Low Power, High Throughput, Fully Event-Based Stereo System: Supplementary Documentation Alexander Andreopoulos, Hirak J. Kashyap, Tapan K. Nayak, Arnon Amir, Myron D. Flickner IBM Research March 25,

More information

A Texture-Based Method for Modeling the Background and Detecting Moving Objects

A Texture-Based Method for Modeling the Background and Detecting Moving Objects A Texture-Based Method for Modeling the Background and Detecting Moving Objects Marko Heikkilä and Matti Pietikäinen, Senior Member, IEEE 2 Abstract This paper presents a novel and efficient texture-based

More information

Background Subtraction Techniques

Background Subtraction Techniques Background Subtraction Techniques Alan M. McIvor Reveal Ltd PO Box 128-221, Remuera, Auckland, New Zealand alan.mcivor@reveal.co.nz Abstract Background subtraction is a commonly used class of techniques

More information

Background Initialization with A New Robust Statistical Approach

Background Initialization with A New Robust Statistical Approach Background Initialization with A New Robust Statistical Approach Hanzi Wang and David Suter Institute for Vision System Engineering Department of. Electrical. and Computer Systems Engineering Monash University,

More information

Idle Object Detection in Video for Banking ATM Applications

Idle Object Detection in Video for Banking ATM Applications Research Journal of Applied Sciences, Engineering and Technology 4(24): 5350-5356, 2012 ISSN: 2040-7467 Maxwell Scientific Organization, 2012 Submitted: March 18, 2012 Accepted: April 06, 2012 Published:

More information

Algorithmic development for 2D and 3D vision systems using Matlab

Algorithmic development for 2D and 3D vision systems using Matlab Algorithmic development for 2D and 3D vision systems using Matlab Csaba Beleznai Csaba Beleznai Senior Scientist Video- and Safety Technology Safety & Security Department AIT Austrian Institute of Technology

More information

SURVEY PAPER ON REAL TIME MOTION DETECTION TECHNIQUES

SURVEY PAPER ON REAL TIME MOTION DETECTION TECHNIQUES SURVEY PAPER ON REAL TIME MOTION DETECTION TECHNIQUES 1 R. AROKIA PRIYA, 2 POONAM GUJRATHI Assistant Professor, Department of Electronics and Telecommunication, D.Y.Patil College of Engineering, Akrudi,

More information

Background Image Generation Using Boolean Operations

Background Image Generation Using Boolean Operations Background Image Generation Using Boolean Operations Kardi Teknomo Ateneo de Manila University Quezon City, 1108 Philippines +632-4266001 ext 5660 teknomo@gmail.com Philippine Computing Journal Proceso

More information

A Feature Point Matching Based Approach for Video Objects Segmentation

A Feature Point Matching Based Approach for Video Objects Segmentation A Feature Point Matching Based Approach for Video Objects Segmentation Yan Zhang, Zhong Zhou, Wei Wu State Key Laboratory of Virtual Reality Technology and Systems, Beijing, P.R. China School of Computer

More information

Automatic Parameter Adaptation for Multi-Object Tracking

Automatic Parameter Adaptation for Multi-Object Tracking Automatic Parameter Adaptation for Multi-Object Tracking Duc Phu CHAU, Monique THONNAT, and François BREMOND {Duc-Phu.Chau, Monique.Thonnat, Francois.Bremond}@inria.fr STARS team, INRIA Sophia Antipolis,

More information

Human Detection and Tracking for Video Surveillance: A Cognitive Science Approach

Human Detection and Tracking for Video Surveillance: A Cognitive Science Approach Human Detection and Tracking for Video Surveillance: A Cognitive Science Approach Vandit Gajjar gajjar.vandit.381@ldce.ac.in Ayesha Gurnani gurnani.ayesha.52@ldce.ac.in Yash Khandhediya khandhediya.yash.364@ldce.ac.in

More information

Motion Detection Algorithm

Motion Detection Algorithm Volume 1, No. 12, February 2013 ISSN 2278-1080 The International Journal of Computer Science & Applications (TIJCSA) RESEARCH PAPER Available Online at http://www.journalofcomputerscience.com/ Motion Detection

More information

Presented at the FIG Congress 2018, May 6-11, 2018 in Istanbul, Turkey

Presented at the FIG Congress 2018, May 6-11, 2018 in Istanbul, Turkey Presented at the FIG Congress 2018, May 6-11, 2018 in Istanbul, Turkey Evangelos MALTEZOS, Charalabos IOANNIDIS, Anastasios DOULAMIS and Nikolaos DOULAMIS Laboratory of Photogrammetry, School of Rural

More information

Mobile Human Detection Systems based on Sliding Windows Approach-A Review

Mobile Human Detection Systems based on Sliding Windows Approach-A Review Mobile Human Detection Systems based on Sliding Windows Approach-A Review Seminar: Mobile Human detection systems Njieutcheu Tassi cedrique Rovile Department of Computer Engineering University of Heidelberg

More information

Model-based Visual Tracking:

Model-based Visual Tracking: Technische Universität München Model-based Visual Tracking: the OpenTL framework Giorgio Panin Technische Universität München Institut für Informatik Lehrstuhl für Echtzeitsysteme und Robotik (Prof. Alois

More information

Car tracking in tunnels

Car tracking in tunnels Czech Pattern Recognition Workshop 2000, Tomáš Svoboda (Ed.) Peršlák, Czech Republic, February 2 4, 2000 Czech Pattern Recognition Society Car tracking in tunnels Roman Pflugfelder and Horst Bischof Pattern

More information

Efficient Acquisition of Human Existence Priors from Motion Trajectories

Efficient Acquisition of Human Existence Priors from Motion Trajectories Efficient Acquisition of Human Existence Priors from Motion Trajectories Hitoshi Habe Hidehito Nakagawa Masatsugu Kidode Graduate School of Information Science, Nara Institute of Science and Technology

More information

Supervised texture detection in images

Supervised texture detection in images Supervised texture detection in images Branislav Mičušík and Allan Hanbury Pattern Recognition and Image Processing Group, Institute of Computer Aided Automation, Vienna University of Technology Favoritenstraße

More information

A Street Scene Surveillance System for Moving Object Detection, Tracking and Classification

A Street Scene Surveillance System for Moving Object Detection, Tracking and Classification A Street Scene Surveillance System for Moving Object Detection, Tracking and Classification Huei-Yung Lin * and Juang-Yu Wei Department of Electrical Engineering National Chung Cheng University Chia-Yi

More information

Accurate 3D Face and Body Modeling from a Single Fixed Kinect

Accurate 3D Face and Body Modeling from a Single Fixed Kinect Accurate 3D Face and Body Modeling from a Single Fixed Kinect Ruizhe Wang*, Matthias Hernandez*, Jongmoo Choi, Gérard Medioni Computer Vision Lab, IRIS University of Southern California Abstract In this

More information

Clustering Based Non-parametric Model for Shadow Detection in Video Sequences

Clustering Based Non-parametric Model for Shadow Detection in Video Sequences Clustering Based Non-parametric Model for Shadow Detection in Video Sequences Ehsan Adeli Mosabbeb 1, Houman Abbasian 2, Mahmood Fathy 1 1 Iran University of Science and Technology, Tehran, Iran 2 University

More information

Face Quality Assessment System in Video Sequences

Face Quality Assessment System in Video Sequences Face Quality Assessment System in Video Sequences Kamal Nasrollahi, Thomas B. Moeslund Laboratory of Computer Vision and Media Technology, Aalborg University Niels Jernes Vej 14, 9220 Aalborg Øst, Denmark

More information

Simultaneous Appearance Modeling and Segmentation for Matching People under Occlusion

Simultaneous Appearance Modeling and Segmentation for Matching People under Occlusion Simultaneous Appearance Modeling and Segmentation for Matching People under Occlusion Zhe Lin, Larry S. Davis, David Doermann, and Daniel DeMenthon Institute for Advanced Computer Studies University of

More information

Selection of Scale-Invariant Parts for Object Class Recognition

Selection of Scale-Invariant Parts for Object Class Recognition Selection of Scale-Invariant Parts for Object Class Recognition Gy. Dorkó and C. Schmid INRIA Rhône-Alpes, GRAVIR-CNRS 655, av. de l Europe, 3833 Montbonnot, France fdorko,schmidg@inrialpes.fr Abstract

More information

Classification of objects from Video Data (Group 30)

Classification of objects from Video Data (Group 30) Classification of objects from Video Data (Group 30) Sheallika Singh 12665 Vibhuti Mahajan 12792 Aahitagni Mukherjee 12001 M Arvind 12385 1 Motivation Video surveillance has been employed for a long time

More information

Detection and Classification of a Moving Object in a Video Stream

Detection and Classification of a Moving Object in a Video Stream Detection and Classification of a Moving Object in a Video Stream Asim R. Aldhaheri and Eran A. Edirisinghe Abstract In this paper we present a new method for detecting and classifying moving objects into

More information

A Texture-based Method for Detecting Moving Objects

A Texture-based Method for Detecting Moving Objects A Texture-based Method for Detecting Moving Objects M. Heikkilä, M. Pietikäinen and J. Heikkilä Machine Vision Group Infotech Oulu and Department of Electrical and Information Engineering P.O. Box 4500

More information

HOG-based Pedestriant Detector Training

HOG-based Pedestriant Detector Training HOG-based Pedestriant Detector Training evs embedded Vision Systems Srl c/o Computer Science Park, Strada Le Grazie, 15 Verona- Italy http: // www. embeddedvisionsystems. it Abstract This paper describes

More information

INTELLIGENT AUTONOMOUS SYSTEMS LAB

INTELLIGENT AUTONOMOUS SYSTEMS LAB Matteo Munaro 1,3, Alex Horn 2, Randy Illum 2, Jeff Burke 2, and Radu Bogdan Rusu 3 1 IAS-Lab at Department of Information Engineering, University of Padova 2 Center for Research in Engineering, Media

More information

Bus Detection and recognition for visually impaired people

Bus Detection and recognition for visually impaired people Bus Detection and recognition for visually impaired people Hangrong Pan, Chucai Yi, and Yingli Tian The City College of New York The Graduate Center The City University of New York MAP4VIP Outline Motivation

More information

Object Detection in Video Streams

Object Detection in Video Streams Object Detection in Video Streams Sandhya S Deore* *Assistant Professor Dept. of Computer Engg., SRES COE Kopargaon *sandhya.deore@gmail.com ABSTRACT Object Detection is the most challenging area in video

More information

A MIXTURE OF DISTRIBUTIONS BACKGROUND MODEL FOR TRAFFIC VIDEO SURVEILLANCE

A MIXTURE OF DISTRIBUTIONS BACKGROUND MODEL FOR TRAFFIC VIDEO SURVEILLANCE PERIODICA POLYTECHNICA SER. TRANSP. ENG. VOL. 34, NO. 1 2, PP. 109 117 (2006) A MIXTURE OF DISTRIBUTIONS BACKGROUND MODEL FOR TRAFFIC VIDEO SURVEILLANCE Tamás BÉCSI and Tamás PÉTER Department of Control

More information

Video Surveillance System for Object Detection and Tracking Methods R.Aarthi, K.Kiruthikadevi

Video Surveillance System for Object Detection and Tracking Methods R.Aarthi, K.Kiruthikadevi IJISET - International Journal of Innovative Science, Engineering & Technology, Vol. 2 Issue 11, November 2015. Video Surveillance System for Object Detection and Tracking Methods R.Aarthi, K.Kiruthikadevi

More information

Online Tracking Parameter Adaptation based on Evaluation

Online Tracking Parameter Adaptation based on Evaluation 2013 10th IEEE International Conference on Advanced Video and Signal Based Surveillance Online Tracking Parameter Adaptation based on Evaluation Duc Phu Chau Julien Badie François Brémond Monique Thonnat

More information

A Robust Wipe Detection Algorithm

A Robust Wipe Detection Algorithm A Robust Wipe Detection Algorithm C. W. Ngo, T. C. Pong & R. T. Chin Department of Computer Science The Hong Kong University of Science & Technology Clear Water Bay, Kowloon, Hong Kong Email: fcwngo, tcpong,

More information

Tri-modal Human Body Segmentation

Tri-modal Human Body Segmentation Tri-modal Human Body Segmentation Master of Science Thesis Cristina Palmero Cantariño Advisor: Sergio Escalera Guerrero February 6, 2014 Outline 1 Introduction 2 Tri-modal dataset 3 Proposed baseline 4

More information

A Framework for Multiple Radar and Multiple 2D/3D Camera Fusion

A Framework for Multiple Radar and Multiple 2D/3D Camera Fusion A Framework for Multiple Radar and Multiple 2D/3D Camera Fusion Marek Schikora 1 and Benedikt Romba 2 1 FGAN-FKIE, Germany 2 Bonn University, Germany schikora@fgan.de, romba@uni-bonn.de Abstract: In this

More information

Pairwise Threshold for Gaussian Mixture Classification and its Application on Human Tracking Enhancement

Pairwise Threshold for Gaussian Mixture Classification and its Application on Human Tracking Enhancement Pairwise Threshold for Gaussian Mixture Classification and its Application on Human Tracking Enhancement Daegeon Kim Sung Chun Lee Institute for Robotics and Intelligent Systems University of Southern

More information

Small-scale objects extraction in digital images

Small-scale objects extraction in digital images 102 Int'l Conf. IP, Comp. Vision, and Pattern Recognition IPCV'15 Small-scale objects extraction in digital images V. Volkov 1,2 S. Bobylev 1 1 Radioengineering Dept., The Bonch-Bruevich State Telecommunications

More information

Moving Object Detection for Video Surveillance

Moving Object Detection for Video Surveillance International OPEN ACCESS Journal Of Modern Engineering Research (IJMER) Moving Object Detection for Video Surveillance Abhilash K.Sonara 1, Pinky J. Brahmbhatt 2 1 Student (ME-CSE), Electronics and Communication,

More information

Chapter 9 Object Tracking an Overview

Chapter 9 Object Tracking an Overview Chapter 9 Object Tracking an Overview The output of the background subtraction algorithm, described in the previous chapter, is a classification (segmentation) of pixels into foreground pixels (those belonging

More information

Automatic Gait Recognition. - Karthik Sridharan

Automatic Gait Recognition. - Karthik Sridharan Automatic Gait Recognition - Karthik Sridharan Gait as a Biometric Gait A person s manner of walking Webster Definition It is a non-contact, unobtrusive, perceivable at a distance and hard to disguise

More information

Vehicle Detection Method using Haar-like Feature on Real Time System

Vehicle Detection Method using Haar-like Feature on Real Time System Vehicle Detection Method using Haar-like Feature on Real Time System Sungji Han, Youngjoon Han and Hernsoo Hahn Abstract This paper presents a robust vehicle detection approach using Haar-like feature.

More information

Automated Video Analysis of Crowd Behavior

Automated Video Analysis of Crowd Behavior Automated Video Analysis of Crowd Behavior Robert Collins CSE Department Mar 30, 2009 Computational Science Seminar Series, Spring 2009. We Are... Lab for Perception, Action and Cognition Research Interest:

More information

Image Segmentation Via Iterative Geodesic Averaging

Image Segmentation Via Iterative Geodesic Averaging Image Segmentation Via Iterative Geodesic Averaging Asmaa Hosni, Michael Bleyer and Margrit Gelautz Institute for Software Technology and Interactive Systems, Vienna University of Technology Favoritenstr.

More information

A Laplacian Based Novel Approach to Efficient Text Localization in Grayscale Images

A Laplacian Based Novel Approach to Efficient Text Localization in Grayscale Images A Laplacian Based Novel Approach to Efficient Text Localization in Grayscale Images Karthik Ram K.V & Mahantesh K Department of Electronics and Communication Engineering, SJB Institute of Technology, Bangalore,

More information

Segmentation Framework for Multi-Oriented Text Detection and Recognition

Segmentation Framework for Multi-Oriented Text Detection and Recognition Segmentation Framework for Multi-Oriented Text Detection and Recognition Shashi Kant, Sini Shibu Department of Computer Science and Engineering, NRI-IIST, Bhopal Abstract - Here in this paper a new and

More information

Postprint.

Postprint. http://www.diva-portal.org Postprint This is the accepted version of a paper presented at 14th International Conference of the Biometrics Special Interest Group, BIOSIG, Darmstadt, Germany, 9-11 September,

More information

Time Stamp Detection and Recognition in Video Frames

Time Stamp Detection and Recognition in Video Frames Time Stamp Detection and Recognition in Video Frames Nongluk Covavisaruch and Chetsada Saengpanit Department of Computer Engineering, Chulalongkorn University, Bangkok 10330, Thailand E-mail: nongluk.c@chula.ac.th

More information

DYNAMIC BACKGROUND SUBTRACTION BASED ON SPATIAL EXTENDED CENTER-SYMMETRIC LOCAL BINARY PATTERN. Gengjian Xue, Jun Sun, Li Song

DYNAMIC BACKGROUND SUBTRACTION BASED ON SPATIAL EXTENDED CENTER-SYMMETRIC LOCAL BINARY PATTERN. Gengjian Xue, Jun Sun, Li Song DYNAMIC BACKGROUND SUBTRACTION BASED ON SPATIAL EXTENDED CENTER-SYMMETRIC LOCAL BINARY PATTERN Gengjian Xue, Jun Sun, Li Song Institute of Image Communication and Information Processing, Shanghai Jiao

More information

COMS W4735: Visual Interfaces To Computers. Final Project (Finger Mouse) Submitted by: Tarandeep Singh Uni: ts2379

COMS W4735: Visual Interfaces To Computers. Final Project (Finger Mouse) Submitted by: Tarandeep Singh Uni: ts2379 COMS W4735: Visual Interfaces To Computers Final Project (Finger Mouse) Submitted by: Tarandeep Singh Uni: ts2379 FINGER MOUSE (Fingertip tracking to control mouse pointer) Abstract. This report discusses

More information

A Real Time Human Detection System Based on Far Infrared Vision

A Real Time Human Detection System Based on Far Infrared Vision A Real Time Human Detection System Based on Far Infrared Vision Yannick Benezeth 1, Bruno Emile 1,Hélène Laurent 1, and Christophe Rosenberger 2 1 Institut Prisme, ENSI de Bourges - Université d Orléans

More information

PRECEDING VEHICLE TRACKING IN STEREO IMAGES VIA 3D FEATURE MATCHING

PRECEDING VEHICLE TRACKING IN STEREO IMAGES VIA 3D FEATURE MATCHING PRECEDING VEHICLE TRACKING IN STEREO IMAGES VIA 3D FEATURE MATCHING Daniel Weingerl, Wilfried Kubinger, Corinna Engelhardt-Nowitzki UAS Technikum Wien: Department for Advanced Engineering Technologies,

More information

Measurement of Pedestrian Groups Using Subtraction Stereo

Measurement of Pedestrian Groups Using Subtraction Stereo Measurement of Pedestrian Groups Using Subtraction Stereo Kenji Terabayashi, Yuki Hashimoto, and Kazunori Umeda Chuo University / CREST, JST, 1-13-27 Kasuga, Bunkyo-ku, Tokyo 112-8551, Japan terabayashi@mech.chuo-u.ac.jp

More information

Human Detection. A state-of-the-art survey. Mohammad Dorgham. University of Hamburg

Human Detection. A state-of-the-art survey. Mohammad Dorgham. University of Hamburg Human Detection A state-of-the-art survey Mohammad Dorgham University of Hamburg Presentation outline Motivation Applications Overview of approaches (categorized) Approaches details References Motivation

More information

Seminar Heidelberg University

Seminar Heidelberg University Seminar Heidelberg University Mobile Human Detection Systems Pedestrian Detection by Stereo Vision on Mobile Robots Philip Mayer Matrikelnummer: 3300646 Motivation Fig.1: Pedestrians Within Bounding Box

More information

Exploiting Depth Camera for 3D Spatial Relationship Interpretation

Exploiting Depth Camera for 3D Spatial Relationship Interpretation Exploiting Depth Camera for 3D Spatial Relationship Interpretation Jun Ye Kien A. Hua Data Systems Group, University of Central Florida Mar 1, 2013 Jun Ye and Kien A. Hua (UCF) 3D directional spatial relationships

More information

Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation

Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation Obviously, this is a very slow process and not suitable for dynamic scenes. To speed things up, we can use a laser that projects a vertical line of light onto the scene. This laser rotates around its vertical

More information

Designing Applications that See Lecture 7: Object Recognition

Designing Applications that See Lecture 7: Object Recognition stanford hci group / cs377s Designing Applications that See Lecture 7: Object Recognition Dan Maynes-Aminzade 29 January 2008 Designing Applications that See http://cs377s.stanford.edu Reminders Pick up

More information

Multi-Channel Adaptive Mixture Background Model for Real-time Tracking

Multi-Channel Adaptive Mixture Background Model for Real-time Tracking Journal of Information Hiding and Multimedia Signal Processing c 2016 ISSN 2073-4212 Ubiquitous International Volume 7, Number 1, January 2016 Multi-Channel Adaptive Mixture Background Model for Real-time

More information

Multi-Camera Occlusion and Sudden-Appearance-Change Detection Using Hidden Markovian Chains

Multi-Camera Occlusion and Sudden-Appearance-Change Detection Using Hidden Markovian Chains 1 Multi-Camera Occlusion and Sudden-Appearance-Change Detection Using Hidden Markovian Chains Xudong Ma Pattern Technology Lab LLC, U.S.A. Email: xma@ieee.org arxiv:1610.09520v1 [cs.cv] 29 Oct 2016 Abstract

More information

Adaptive Gesture Recognition System Integrating Multiple Inputs

Adaptive Gesture Recognition System Integrating Multiple Inputs Adaptive Gesture Recognition System Integrating Multiple Inputs Master Thesis - Colloquium Tobias Staron University of Hamburg Faculty of Mathematics, Informatics and Natural Sciences Technical Aspects

More information

CORRELATION BASED CAR NUMBER PLATE EXTRACTION SYSTEM

CORRELATION BASED CAR NUMBER PLATE EXTRACTION SYSTEM CORRELATION BASED CAR NUMBER PLATE EXTRACTION SYSTEM 1 PHYO THET KHIN, 2 LAI LAI WIN KYI 1,2 Department of Information Technology, Mandalay Technological University The Republic of the Union of Myanmar

More information

A Novel Multi-Planar Homography Constraint Algorithm for Robust Multi-People Location with Severe Occlusion

A Novel Multi-Planar Homography Constraint Algorithm for Robust Multi-People Location with Severe Occlusion A Novel Multi-Planar Homography Constraint Algorithm for Robust Multi-People Location with Severe Occlusion Paper ID:086 Abstract Multi-view approach has been proposed to solve occlusion and lack of visibility

More information

6. Multimodal Biometrics

6. Multimodal Biometrics 6. Multimodal Biometrics Multimodal biometrics is based on combination of more than one type of biometric modalities or traits. The most compelling reason to combine different modalities is to improve

More information

People detection and tracking using stereo vision and color

People detection and tracking using stereo vision and color People detection and tracking using stereo vision and color Rafael Munoz-Salinas, Eugenio Aguirre, Miguel Garcia-Silvente. In Image and Vision Computing Volume 25 Issue 6 (2007) 995-1007. Presented by

More information

Using temporal seeding to constrain the disparity search range in stereo matching

Using temporal seeding to constrain the disparity search range in stereo matching Using temporal seeding to constrain the disparity search range in stereo matching Thulani Ndhlovu Mobile Intelligent Autonomous Systems CSIR South Africa Email: tndhlovu@csir.co.za Fred Nicolls Department

More information

Human Body Recognition and Tracking: How the Kinect Works. Kinect RGB-D Camera. What the Kinect Does. How Kinect Works: Overview

Human Body Recognition and Tracking: How the Kinect Works. Kinect RGB-D Camera. What the Kinect Does. How Kinect Works: Overview Human Body Recognition and Tracking: How the Kinect Works Kinect RGB-D Camera Microsoft Kinect (Nov. 2010) Color video camera + laser-projected IR dot pattern + IR camera $120 (April 2012) Kinect 1.5 due

More information

HYBRID CENTER-SYMMETRIC LOCAL PATTERN FOR DYNAMIC BACKGROUND SUBTRACTION. Gengjian Xue, Li Song, Jun Sun, Meng Wu

HYBRID CENTER-SYMMETRIC LOCAL PATTERN FOR DYNAMIC BACKGROUND SUBTRACTION. Gengjian Xue, Li Song, Jun Sun, Meng Wu HYBRID CENTER-SYMMETRIC LOCAL PATTERN FOR DYNAMIC BACKGROUND SUBTRACTION Gengjian Xue, Li Song, Jun Sun, Meng Wu Institute of Image Communication and Information Processing, Shanghai Jiao Tong University,

More information

Ensemble of Bayesian Filters for Loop Closure Detection

Ensemble of Bayesian Filters for Loop Closure Detection Ensemble of Bayesian Filters for Loop Closure Detection Mohammad Omar Salameh, Azizi Abdullah, Shahnorbanun Sahran Pattern Recognition Research Group Center for Artificial Intelligence Faculty of Information

More information

QUT Digital Repository: This is the author version published as:

QUT Digital Repository:   This is the author version published as: QUT Digital Repository: http://eprints.qut.edu.au/ This is the author version published as: This is the accepted version of this article. To be published as : This is the author version published as: Chen,

More information

Short Survey on Static Hand Gesture Recognition

Short Survey on Static Hand Gesture Recognition Short Survey on Static Hand Gesture Recognition Huu-Hung Huynh University of Science and Technology The University of Danang, Vietnam Duc-Hoang Vo University of Science and Technology The University of

More information