Multi-Object Tracking Using Color, Texture and Motion


Valtteri Takala and Matti Pietikäinen
Machine Vision Group, Infotech Oulu and Dept. of Electrical and Information Engineering
P.O. Box 4500, FIN University of Oulu, Finland

Abstract

In this paper, we introduce a novel real-time tracker based on color, texture and motion information. An RGB color histogram and a color correlogram (autocorrelogram) are exploited as color cues, and texture properties are represented by local binary patterns (LBP). The object's motion is taken into account through its location and trajectory. After extraction, these features are used to build a unifying distance measure. The measure is utilized both in tracking and in the classification event in which an object is leaving a group. The initial object detection is done by a texture-based background subtraction algorithm. Experiments on indoor and outdoor surveillance videos show that the unified system works better than versions based on single features. It also copes well with the low illumination conditions and low frame rates that are common in large-scale surveillance systems.

1. Introduction

One of the most useful and valuable applications of current vision technology is visual surveillance. It is feasible to construct and generates direct benefits for its users. It can be used for many purposes in a number of different environments: people and vehicle tracking in traffic scenes [6], proximity detection on battlefields [17] (original study: [14]), suspicious event detection [12] ([5]), and geriatric care, to name but a few. This has already been noticed, and research is in full flow on both the academic and industrial sides. Tracking is the primary component of active visual surveillance, where human intervention is to be minimized. It is also a field with numerous methods for different tracking cases: mean-shift-based algorithms [1], [3], [4], [7] can be used in single-object problems with both static and moving cameras.
Multi-object tracking, however, is a very different and more challenging problem. In addition to the normal frame-to-frame following of a salient area, the system must be able to handle occlusion, splitting, merging and other complicated events related to multiple moving targets. Existing solutions [6], [8], [9], [14] are meant for static cameras and limited types of scenes. Lately, particle filters [13] have gained a great deal of attention [8], [16], [19], [20], [25]. To survive in diverse environments, one should take advantage of multiple image properties, like color, texture and temporal cues, as none of them alone provides all-around invariance to different imaging conditions. By using a versatile collection of properties, the system performance can be enhanced and made more robust against the large variation of data common in surveillance. Still, one has to be careful when choosing multiple features, as they may also have a negative effect on each other. The color correlogram [11] (also known as the autocorrelogram) and local binary patterns (LBP) [18] are well-developed descriptors for general image classification and retrieval. They are fast to extract and provide features with varying size and discrimination power, depending on the parameters used (kernel radius, color channel quantization, sample count, etc.). They offer good histogram-based descriptions for object detection and matching. At low frame rates (< 10 fps), some form of model-based matching is a worthwhile approach, as the spatial correspondence of objects in adjacent frames is often low. Frame rates like 2 fps are common in large-scale surveillance systems, where the amount of collected data can be tremendous. One server may have to handle a data stream of tens of video sources. In such situations the load on the system's I/O is significant, and frame rates like 25 fps per video source are not easily feasible.
There is a good likelihood that the situation stays the same in the near future, as camera technology advances rapidly and megapixel-class video frames are already a reality. This paper introduces an approach which uses multiple image features for frame-to-frame correspondence matching. An RGB color histogram and correlogram are used to

describe the object's color properties, LBP is chosen for the texture, and geometric location and the smoothness of the trajectory provide motion-based support. The merging and splitting of objects are handled using the same set of features. The tracker's performance on low frame rate video is emphasized, as this area has not been considered often enough.

2. Features for Tracking

Matching-based tracking requires good feature descriptors to be usable in the diverse conditions of real-world video surveillance. The main sources of descriptors are color, texture, shape, and temporal (motion) properties. Each of them has its pros and cons, but color has gained the most attention, as it is well distinguishable to the human eye and seems to contain a good amount of useful information.

2.1. Color

Color provides many cues. The most well-known color descriptor is the RGB color histogram [23], which has been used for tracking on various occasions [1], [3], [26]. There are also other potential features, like color moments [22], the MPEG-7 color descriptors [15], and color correlograms [11], to mention but a few. In this study, the last one was selected together with the RGB color histogram to describe the color properties of objects. Selecting the color correlogram was natural due to its good discrimination power [11]. The main advantage of the correlogram is that it pays attention to the local spatial correlation of color pixels, thus increasing the value of color, as the color histogram is purely a global measure.

2.2. Texture

It would be shortsighted to rely on color properties only. For instance, colors are very sensitive to illumination changes. This problem can be alleviated, to some extent, by using other features that are less responsive to such image transformations. Texture, which has not enjoyed major attention in tracking applications, provides a good option to enhance the power of color descriptors.
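As a minimal sketch of the two color cues discussed above (not the authors' implementation: the function names, the 4-neighbour pair counting, and the use of a single correlogram distance are illustrative assumptions), the descriptors could be computed as follows:

```python
import numpy as np

def rgb_histogram(img, bins=6):
    # Quantize each channel into `bins` levels -> bins**3-bin histogram
    # (6x6x6 = 216 bins, the configuration used for the color histogram cue).
    q = (img.astype(int) * bins) // 256
    idx = (q[..., 0] * bins + q[..., 1]) * bins + q[..., 2]
    h = np.bincount(idx.ravel(), minlength=bins ** 3).astype(float)
    return h / h.sum()

def autocorrelogram(img, d=1, bins=4):
    # For each quantized color, the fraction of horizontal/vertical pixel
    # pairs at offset d that share that color (a simplified autocorrelogram;
    # the paper combines distances 1 and 3 over 4x4x4 = 64 colors).
    q = (img.astype(int) * bins) // 256
    idx = (q[..., 0] * bins + q[..., 1]) * bins + q[..., 2]
    a = np.concatenate([idx[:-d, :].ravel(), idx[:, :-d].ravel()])
    b = np.concatenate([idx[d:, :].ravel(), idx[:, d:].ravel()])
    pairs = np.zeros(bins ** 3)
    same = np.zeros(bins ** 3)
    np.add.at(pairs, a, 1)                # pairs observed per color
    np.add.at(same, a[a == b], 1)         # pairs whose colors match
    return same / np.maximum(pairs, 1)
```

The key difference between the two is visible in the return values: the histogram is purely global, while the correlogram entry for a color is high only when pixels of that color actually sit next to each other.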
The list of available texture features is quite long, but a good survey of the different approaches has been made by Tuceryan and Jain [24]. As one of the most efficient texture descriptors, the LBP texture measure [18] is a logical choice for describing an object's textural properties. We selected a modified version of it [10], which is more stable against noise. LBP's main characteristics are invariance to monotonic changes in gray scale and fast computation, and it has a proven track record in texture classification [18]. Since it operates on gray-scale images, LBP is also robust to the illumination changes common in surveillance videos.

2.3. Motion

In addition to visually descriptive features, video provides temporal properties. One can, of course, add a temporal dimension to any feature described above by combining the information extracted at consecutive instants of time in some meaningful way. We decided to use the object's location and trajectory to describe its motion. In addition to occupying close positions in successive frames, physical objects tend to have smooth trajectories, at least when the frame rate is high enough, and this can be exploited. We can calculate the smoothness of direction and speed [21] for each existing object track i:

$$S_{i,t} = w \, \frac{\mathbf{v}_{i,t-1} \cdot \mathbf{v}_{i,t}}{\|\mathbf{v}_{i,t-1}\| \, \|\mathbf{v}_{i,t}\|} + (1 - w) \, \frac{2 \sqrt{\|\mathbf{v}_{i,t-1}\| \, \|\mathbf{v}_{i,t}\|}}{\|\mathbf{v}_{i,t-1}\| + \|\mathbf{v}_{i,t}\|}, \quad (1)$$

where the first term defines the smoothness of direction and the second one the smoothness of speed. S_{i,t} is the combined smoothness of track i between the time instants t and t-1, v stands for the difference vector of two consecutive points, and w is a weight. If m points are extracted from n frames, the total smoothness is defined by Equation (2), which is the sum of the smoothnesses of all the interior points of all the m paths:

$$T_s = \sum_{i=1}^{m} \sum_{t=2}^{n-1} S_{i,t}. \quad (2)$$

3. Tracker

Our tracker (see Figure 1) consists of two main elements: background subtraction (detection) and tracking.
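The smoothness of Equations (1) and (2) can be sketched as follows (a simplified illustration assuming 2D point tracks and an equal weight w = 0.5; the function names are hypothetical):

```python
import math

def smoothness(p_prev, p_cur, p_next, w=0.5):
    # Difference vectors v_{t-1} and v_t between consecutive track points.
    v1 = (p_cur[0] - p_prev[0], p_cur[1] - p_prev[1])
    v2 = (p_next[0] - p_cur[0], p_next[1] - p_cur[1])
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    if n1 == 0 or n2 == 0:
        return 0.0
    # Direction term: cosine of the turn angle (1 = no turn).
    direction = (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2)
    # Speed term: 1 when the two step lengths are equal.
    speed = 2.0 * math.sqrt(n1 * n2) / (n1 + n2)
    return w * direction + (1.0 - w) * speed

def total_smoothness(track, w=0.5):
    # Equation (2) for a single path: sum S over the interior points.
    return sum(smoothness(track[t - 1], track[t], track[t + 1], w)
               for t in range(1, len(track) - 1))
```

For a straight, constant-speed track every interior point scores S = 1, so the total equals the number of interior points; turns and speed changes pull the score down.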
The subtraction is done on video data, first processed with a Gaussian filter to remove noise, by an adaptive algorithm based on LBP texture distributions [10]. The algorithm was chosen because of its good performance in most environments and the fact that it exploits the same texture properties as the object matching part of the tracker, so the re-use of features is possible. The subtracted foreground is enhanced by filtering out the artifacts caused by noise and moving background using standard morphological operations. All the remaining foreground areas are considered possible object candidates and filtered as needed: the size of an object depends heavily on the surveillance scene. The tracking is done by matching features extracted from the subtracted foreground shapes. As described in Section 2, three of these are based on histogram distributions: the RGB color histogram [23] and correlogram [11], and LBP [18]. The

reason for choosing two color-based descriptors is the difference in their spatial performance. While the histogram is a global measure and thus invariant to many local attributes like scale, the correlogram takes into account spatial color distributions and has better discrimination performance on coherent data. The spatial properties are usually well preserved in tracking, as the objects do not change much between successive frames, depending, of course, on the frame rate. The LBP texture measure was selected due to its qualified performance, as stated in Section 2.2. It supports the color features well in natural scenes, as those often contain a lot of textural information. It also provides better discrimination capabilities in many situations where a simple color descriptor may fail, for example in low lighting conditions. The other two cues for matching, the geometric distance and the combined smoothness of speed and direction, were included to emphasize the importance of motion in tracking.

Figure 1. The tracker. Same features (color, texture and motion) are used in initial matching and group handling. (Pipeline: Frames -> Texture-based background subtraction -> Object detection using contours -> Matching using color, texture and motion features -> Tracked objects, with group handling for merging and splitting.)

The tracker uses a structure similar to that of Yang et al.'s system [26], in which tracking is based on distance and correspondence matrices and object occlusion is managed through the detection of splitting and merging events. First, a distance matrix with new measures as columns and existing tracks as rows is built. This is followed by creating a zero-initialized correspondence matrix in which the best-matching track for each measure is marked by incrementing the corresponding matrix element by one. The same is done for each track, after which the correspondence matrix elements with a value of two are considered definite matches.
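The correspondence-matrix matching described above can be sketched as follows (a minimal illustration of the scheme adopted from Yang et al. [26]; the function name and return convention are assumptions):

```python
import numpy as np

def match_measures(D):
    # D: distance matrix, rows = existing tracks, cols = new measures.
    C = np.zeros_like(D, dtype=int)
    # Mark the best-matching track for each measure (column-wise minimum) ...
    C[np.argmin(D, axis=0), np.arange(D.shape[1])] += 1
    # ... and the best-matching measure for each track (row-wise minimum).
    C[np.arange(D.shape[0]), np.argmin(D, axis=1)] += 1
    # Elements that reach two are mutual best matches -> definite matches.
    matches = list(zip(*np.where(C == 2)))
    matched_t = {m[0] for m in matches}
    matched_m = {m[1] for m in matches}
    unmatched_tracks = [i for i in range(D.shape[0]) if i not in matched_t]
    unmatched_measures = [j for j in range(D.shape[1]) if j not in matched_m]
    return matches, unmatched_tracks, unmatched_measures
```

The leftover tracks and measures returned here are exactly the candidates that the merging/splitting logic of the next paragraph inspects.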
Unmatched measures and tracks are considered possible candidates for merging and splitting. The complete details of the logic are described in Yang et al.'s paper [26]. In our system, a group of diverse features is employed for measure-to-track matching, and the events are detected differently. Instead of using the bounding boxes themselves for occlusion detection, we surround the boxes with circles whose radii are the half-diagonals of the boxes, as in Figure 2, and use them for event detection: if the object circles occlude each other in the previous frame n - 1, a merging event in frame n is possible. If the occlusion holds in frame n + 1 and the closest occluding object is a group object, then a split might have happened.

Figure 2. Diagonal occlusion. The merging event in frame n is detected by analyzing the situation in the previous frame n - 1. The splitting detection is done in frame n + 1.

Figure 3. Weight selection. The radius weight w_r of the event detection circle is selected according to the frame rate f_fps; the weights for the 2 and 25 fps frame rates are displayed with small circles.

The circle's radius can be weighted according to the frame rate to cover a smaller or larger detection area. The radius weight w_r is inversely related to the frame rate f_fps and can be estimated with a hyperbola function:

$$f_{fps} = \frac{\alpha}{w_r^2} + \beta, \quad (3)$$

from which we get

$$w_r = \sqrt{\frac{\alpha}{f_{fps} - \beta}}, \quad (4)$$

where α and β are constants. Figure 3 shows an example curve where α = 15.3 and β = 5.7. The matching itself is carried out using an overall distance obtained from several descriptors. It is done both in the initial correspondence matching process and in the splitting event, in which one has to recognize the object that is leaving its group.
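Assuming the hyperbola form of Equations (3) and (4) as printed, the radius weight can be computed like this (a sketch; under this reading the inverse is only defined for frame rates above β, so a deployment would clamp or refit outside that range):

```python
import math

def radius_weight(f_fps, alpha=15.3, beta=5.7):
    # Equation (4): invert the hyperbola f_fps = alpha / w_r**2 + beta.
    if f_fps <= beta:
        raise ValueError("frame rate must exceed beta for this form")
    return math.sqrt(alpha / (f_fps - beta))
```

At 25 fps this yields a weight below one, i.e. a shrunken detection circle; lower frame rates push the weight up and enlarge the circle, matching the curve of Figure 3.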
After filtering out the least probable candidates with a separate geometric distance threshold, five descriptors are used: the RGB color histogram and correlogram, LBP, geometric distance and smoothness. The first three are applied to the upper and lower halves of the object separately. This is done to add more spatial discrimination power in situations where the interesting objects are people wearing two distinctive pieces of clothing, like a shirt

and trousers. It should also be noticed that the distribution-based features are extracted from the foreground only. The distance matrix D(i, j) of the initial matching phase is constructed as follows. Each matrix column j, corresponding to a measure, is filled with a distance vector d_j of M elements (tracks). The vector elements are sums of five descriptor measures that have been normalized to the range [0, 1] prior to the final summing:

$$d_j(i) = \sum_{f=1}^{5} d_{i,f}, \quad i = 1, 2, \ldots, M, \quad (5)$$

where d_{i,f} is the normalized distance of feature f of the ith element. The feature-specific normalization is done over the original distance values \hat{d}_{i,f} as

$$d_{i,f} = \frac{\hat{d}_{i,f}}{\sum_{i=1}^{M} \hat{d}_{i,f}}. \quad (6)$$

The smoothness is obtained by adding the center point of the measure to the track's trajectory and then calculating a new smoothness value. Note that if the candidate tracks have a long common history (a period of occlusion), there is no difference in smoothness, as the active trajectories from which the feature is extracted have become identical. The actual smoothness value T_s, not being a distance measure that converges to zero, is first shifted by its bias, which depends on the length of the trajectory, and then inverted to make it comparable with the other descriptors:

$$d_s = 1 - (T_s - \bar{T}_s), \quad (7)$$

where d_s is the final measure, which approaches zero, and \bar{T}_s is the bias value, T_s rounded down to the closest smaller integer. The geometric distance is used in the same way as the distribution-based descriptors. After calculating the initial geometric distances between a measure and its candidate tracks, the obtained vector of distance values is normalized to sum to one.
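Equations (5) and (6) amount to a per-feature sum normalization followed by a per-track sum, which can be sketched as follows (the function name is illustrative):

```python
import numpy as np

def combined_distance(raw):
    # raw: (M tracks, F features) matrix of unnormalized feature distances.
    # Normalize each feature column by its sum (Equation (6)) so every cue
    # lands on a comparable [0, 1] scale, then sum across the features for
    # each track (Equation (5)); the paper uses F = 5 descriptors.
    raw = np.asarray(raw, dtype=float)
    norm = raw / raw.sum(axis=0, keepdims=True)
    return norm.sum(axis=1)
```

Because each column sums to one after normalization, every cue contributes equally to the total regardless of its original scale, which is the point of Equation (6).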
Before distance measurement, the track's color and texture descriptors, all consisting of histogram representations, are updated by filtering the last N histograms with Gaussian weights:

$$h_i^{updated} = \sum_{t=1}^{N} h_{i,t} \, w_G(t), \quad (8)$$

where h_i^{updated} is the updated value of bin i and h_{i,t} the corresponding original bin value, in which t is the index in the temporal dimension, 1 referring to the latest bin value and N to the oldest in history. The weights w_G are obtained using a standard Gaussian distribution (μ = 0, σ² = 1):

$$w_{init}(t) = \frac{1}{\sqrt{2\pi}} e^{-t^2/2}, \quad (9)$$

which is then normalized to sum to one to get w_G. The histogram distances are compared with a Euclidean distance measure:

$$D_{eucl}(x_1, x_2) = \sqrt{\sum_{i=1}^{N} (x_{1,i} - x_{2,i})^2}, \quad (10)$$

where i is the corresponding bin of a histogram x of length N.

4. Experiments

Our system consists of a software framework operating on standard Intel Pentium 4 (3 GHz) PC hardware with 1 GB of memory. The details of the test video sequences are collected in Table 1. The first two sequences (Indoor and Outdoor) are from real surveillance scenes (camera type unknown), and their original frame rate is 2 fps. The Merge & split video was created using a Sony DFW-VL500 camera. The Near-IR dataset was made with an Axis 213 PTZ network camera, which has a built-in IR source and is capable of sensing near-infrared light. The last two video sets belong to the well-known CAVIAR [2] and PETS 2001 databases. The CAVIAR sequence had an original frame rate of 25 fps, which was decreased to 1 fps for testing purposes. The background subtraction parameters include the radius and sample count of the LBP operator, the radius of a circular region around a pixel over which the LBP histogram is calculated (R_region), the number of LBP histograms per pixel (K), the thresholds of the histogram proximity measure (T_P) and background histogram selection (T_B), and the learning rates for model histogram updating (α_b, α_w). The selected parameter values are collected in Table 2.
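The Gaussian-weighted histogram update of Equations (8)-(9) and the Euclidean comparison of Equation (10) can be sketched as follows (function names illustrative):

```python
import numpy as np

def update_histogram(history):
    # history: the last N histograms of one track, index 0 = newest.
    # Weight them with a standard Gaussian evaluated at t = 1..N
    # (Equation (9)), normalized to sum to one, then blend (Equation (8)).
    t = np.arange(1, len(history) + 1)
    w = np.exp(-t ** 2 / 2.0) / np.sqrt(2.0 * np.pi)
    w /= w.sum()
    return np.sum([wt * np.asarray(h, dtype=float)
                   for wt, h in zip(w, history)], axis=0)

def hist_distance(x1, x2):
    # Equation (10): Euclidean distance between two histograms.
    x1 = np.asarray(x1, dtype=float)
    x2 = np.asarray(x2, dtype=float)
    return float(np.sqrt(np.sum((x1 - x2) ** 2)))
```

Because the weights decay with t, the newest histogram dominates the blend while older ones damp out sudden appearance changes.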
Guidance for selecting them properly can be found in the original paper [10]. Both our background subtraction and tracking implementations use the modified version of LBP with the thresholding constant a set to 3, as suggested by the original study. To speed up the subtraction, we apply it to every third pixel in the horizontal and vertical directions and use that value as an approximation of the pixel's circular surroundings. Prior to subtraction, the video frame is processed with a 3x3 Gaussian kernel with a fixed standard deviation σ. Afterwards, the small artifacts in the subtracted scene are morphologically filtered by one close and one open operation with a circular kernel 3 pixels in diameter. The RGB color histogram has 216 bins (6x6x6), and the RGB color correlogram is created from two distances (1 and 3) for 64 colors (4x4x4), making up a 128-bin descriptor in total. The LBP feature is calculated from a circular neighborhood of eight samples with a radius of two; thus the length of the LBP histogram is 256 bins. In the histogram filtering process, as described by Equations (8) and (9), we

use 5 for N. In the smoothness extraction, the active trajectory is taken from the last three seconds, and direction and speed are equally weighted.

Table 1. Test videos and their sizes, frame rates and lengths. Examples of tracking in each video are included in the supporting material.

Test sequence    Size    Fps    Length (s)
Indoor           352x
Outdoor          352x
Merge & split    320x
Near-IR          352x
CAVIAR           384x
PETS 2001

Table 2. Background subtraction parameters.

Parameter           Value
LBP radius          2
LBP sample count    6
R_region            9
K                   3
T_P                 0.6
T_B                 0.65
α_b                 0.01
α_w                 0.01

The other tracker parameters are chosen as follows. The interesting contours for object detection are thresholded by a minimum size that depends on the scene: at short distances, when the objects tend to be large, a larger foreground/background size ratio is used, while for long-distance surveillance data the size filtering is done with a smaller ratio. The geometric distance threshold for matching, as introduced in Section 3, is selected according to the frame size; we use values equal to a fraction of the frame diagonal. For smaller frame rates or very large objects, even greater values could be used, as the locations of the targets change more rapidly. The radius of the merging and splitting detection circle is weighted using Equation (4) with α = 15.3 and β = 5.7; α and β were obtained through experiments with different frame rates (1-30 fps). Figure 4 presents our system's performance in indoor and outdoor scenes at a very low frame rate (2 fps). Tracking is successful through the heavy occlusion events of the indoor dataset, and it is not easily disturbed by moving background elements, like moving trees and reflections from the asphalt. To demonstrate the performance at even lower frame rates, we took a sequence from the CAVIAR database and lowered its frame rate from 25 to 1 fps. Figure 5 shows the result, where object identities are maintained through a fighting scene containing partial occlusion.

Figure 4. Indoor and Outdoor sequences.
The tracker survives in situations where the frame rate is very low (2 fps), heavy occlusion exists, and the background contains a lot of movement (moving trees, reflections).

Figure 5. CAVIAR. A low frame rate does not cause major problems for the tracker.

The overall performance of the different features in the critical object splitting event is shown in Table 3. Each number is the average percentage of correct splitting decisions (15 in total) on the six test videos of Table 1.

Table 3. Splitting performance.

Feature(s)            Precision
All features          87%
Color histogram       73%
Color correlogram     73%
LBP                   60%
Geometric distance    60%
Smoothness            40%

The performance of the combined features is clearly better than that of any single feature. The color histogram and correlogram are equal on average, but their performance deviates on different videos. LBP has a clearly lower recognition rate, which may have been affected by the Gaussian filtering of the preprocessing phase: the filtering affects the structural patterns that the LBP operator treats as texture. The poor performance of the smoothness feature may indicate long occlusion times of objects in the test datasets. If the period of occlusion is considerably longer than in normal passing and the objects move in a uniform manner, the reliability of the trajectory measure decreases, together with its impact on the final classification decision. Figure 6 contains a few example frames of multi-feature tracking in the Merge & split video together with the corresponding background subtraction steps.

Figure 6. Merge & split. The upper row contains the results of the background subtraction process and the lower shows the actual tracking.

The dependency on the texture-based background subtraction is especially high in this sequence, as there are substantial color similarities between one of the foreground objects and the background. Figure 7 shows tracking results on a PETS 2001 dataset in which three people enter the scene and become occluded by each other several times.

Figure 7. PETS 2001. The system tracks multiple people through several successive occlusion events.

Our system has some problems in the beginning of the sequence, as the persons enter the scene in parallel, but it is able to maintain the number of people and their identities most of the time after they have been initially discovered. The system was also tested in low-light conditions. In the sequence of Figure 8, the persons were tracked correctly even though no color information was available. In the left image, the person on the left has a turquoise shirt while the other one has a white one; both look very similar in noisy gray-scale imagery.

Figure 8. Near-IR. The system also works in noisy near-infrared conditions where color information is limited to gray-scale. The background subtraction results are shown in the upper row.

5. Conclusions

In this study, we have introduced a novel tracker based on the combined use of color, texture and motion features, and on texture-based background subtraction. The system is able to track multiple objects in diverse conditions while achieving real-time speeds on a 3 GHz Intel Pentium 4 computer. It is also less sensitive to color due to the use of a versatile collection of cues. The system shows robust performance while most of the parameters are fixed. Future research will concentrate on a different weighting scheme in which the illumination conditions and other effectors are taken into account adaptively.
The current way of using static feature weighting prevents the cues from dying out, but in some cases some of the features become useless and may even decrease the probability of correct classification. For example, after a very long period of occlusion the smoothness of trajectory has no use for classification. The long-term accuracy of people tracking could be improved by confirming the number of people in a group with an additional detector of human shapes or faces. To include cars and other interesting objects of arbitrary shape, a more versatile classifier would be needed.

6. Acknowledgment

This research was supported by the Infotech Oulu Graduate School and the Finnish Funding Agency for Technology and Innovation (Tekes).

References

[1] G. R. Bradski. Computer video face tracking for use in a perceptual user interface. Intel Technology Journal, 2.
[2] CAVIAR project: benchmark datasets for video surveillance.
[3] R. Collins, Y. Liu, and M. Leordeanu. Online selection of discriminative tracking features. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27.
[4] D. Comaniciu and P. Meer. Mean shift: A robust approach toward feature space analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24.
[5] D. Gibbins, G. N. Newsam, and M. J. Brooks. Detecting suspicious background changes in video surveillance of busy scenes. In Proceedings of the Third IEEE Workshop on Applications of Computer Vision (WACV 96), Sarasota, Florida, USA, 2-4 December 1996, pages 22-26.
[6] A. Hampapur, L. Brown, J. Connell, A. Ekin, N. Haas, M. Lu, H. Merkl, S. Pankanti, A. Senior, C.-F. Shu, and Y. L. Tian. Smart video surveillance: Exploring the concept of

multiscale spatiotemporal tracking. IEEE Signal Processing Magazine, 22:38-51.
[7] B. Han and L. Davis. Object tracking by adaptive feature extraction. In Proceedings of the International Conference on Image Processing (ICIP 2004), Singapore, 27 June - 2 July 2004.
[8] B. Han, Y. Zhu, D. Comaniciu, and L. Davis. Kernel-based Bayesian filtering for object tracking. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), San Diego, California, USA, June 2005.
[9] I. Haritaoglu, D. Harwood, and L. S. Davis. W4: Real-time surveillance of people and their activities. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22.
[10] M. Heikkilä and M. Pietikäinen. A texture-based method for modeling the background and detecting moving objects. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28.
[11] J. Huang, S. R. Kumar, M. Mitra, W.-J. Zhu, and R. Zabih. Image indexing using color correlograms. In Proceedings of the 1997 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 1997), San Juan, Puerto Rico, June 1997.
[12] iomniscient.
[13] M. Isard and A. Blake. CONDENSATION - conditional density propagation for visual tracking. International Journal of Computer Vision, 29:5-28.
[14] A. J. Lipton, H. Fujiyoshi, and R. S. Patil. Moving target classification and tracking from real-time video. In Proceedings of the Fourth IEEE Workshop on Applications of Computer Vision (WACV 98), Princeton, New Jersey, USA, October 1998, pages 8-14.
[15] B. S. Manjunath, J.-R. Ohm, V. Vasudevan, and A. Yamada. Color and texture descriptors. IEEE Transactions on Circuits and Systems for Video Technology, 11.
[16] K. Nummiaro, E. Koller-Meier, and L. V. Gool. An adaptive color-based particle filter. Image and Vision Computing, 21:99-110.
[17] ObjectVideo.
[18] T. Ojala, M. Pietikäinen, and T. Mäenpää. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24.
[19] P. Pérez, C. Hue, J. Vermaak, and M. Gangnet. Color-based probabilistic tracking. In Proceedings of the 7th European Conference on Computer Vision (ECCV 2002), Copenhagen, Denmark, May 2002.
[20] V. Philomin, R. Duraiswami, and L. Davis. Quasi-random sampling for Condensation. In Proceedings of the 6th European Conference on Computer Vision (ECCV 2000), Dublin, Ireland, 26 June - 1 July 2000.
[21] L. G. Shapiro and G. Stockman. Computer Vision, page 267. Prentice Hall, first edition.
[22] M. Stricker and M. Orengo. Similarity of color images. In Proceedings of the SPIE Conference on Storage and Retrieval for Image and Video Databases, San Jose, California, USA, 9 February 1995.
[23] M. Swain and D. Ballard. Color indexing. In Proceedings of the Third IEEE International Conference on Computer Vision (ICCV 1990), Osaka, Japan, December 1990, pages 11-32.
[24] M. Tuceryan and A. Jain. Texture analysis. In C. Chen, L. Pau, and P. Wang, editors, Handbook of Pattern Recognition and Computer Vision. World Scientific, second edition.
[25] C. Yang, R. Duraiswami, and L. Davis. Fast multiple object tracking via a hierarchical particle filter. In Proceedings of the Tenth IEEE International Conference on Computer Vision (ICCV 2005), Beijing, China, October 2005.
[26] T. Yang, S. Z. Li, Q. Pan, and J. Li. Real-time multiple objects tracking with occlusion handling in dynamic scenes. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), San Diego, California, USA, June 2005.


More information

Multiple-Person Tracking by Detection

Multiple-Person Tracking by Detection http://excel.fit.vutbr.cz Multiple-Person Tracking by Detection Jakub Vojvoda* Abstract Detection and tracking of multiple person is challenging problem mainly due to complexity of scene and large intra-class

More information

Pedestrian counting in video sequences using optical flow clustering

Pedestrian counting in video sequences using optical flow clustering Pedestrian counting in video sequences using optical flow clustering SHIZUKA FUJISAWA, GO HASEGAWA, YOSHIAKI TANIGUCHI, HIROTAKA NAKANO Graduate School of Information Science and Technology Osaka University

More information

A Background Modeling Approach Based on Visual Background Extractor Taotao Liu1, a, Lin Qi2, b and Guichi Liu2, c

A Background Modeling Approach Based on Visual Background Extractor Taotao Liu1, a, Lin Qi2, b and Guichi Liu2, c 4th International Conference on Mechatronics, Materials, Chemistry and Computer Engineering (ICMMCCE 2015) A Background Modeling Approach Based on Visual Background Extractor Taotao Liu1, a, Lin Qi2, b

More information

Human Activity Recognition Using a Dynamic Texture Based Method

Human Activity Recognition Using a Dynamic Texture Based Method Human Activity Recognition Using a Dynamic Texture Based Method Vili Kellokumpu, Guoying Zhao and Matti Pietikäinen Machine Vision Group University of Oulu, P.O. Box 4500, Finland {kello,gyzhao,mkp}@ee.oulu.fi

More information

Pairwise Threshold for Gaussian Mixture Classification and its Application on Human Tracking Enhancement

Pairwise Threshold for Gaussian Mixture Classification and its Application on Human Tracking Enhancement Pairwise Threshold for Gaussian Mixture Classification and its Application on Human Tracking Enhancement Daegeon Kim Sung Chun Lee Institute for Robotics and Intelligent Systems University of Southern

More information

Online Tracking Parameter Adaptation based on Evaluation

Online Tracking Parameter Adaptation based on Evaluation 2013 10th IEEE International Conference on Advanced Video and Signal Based Surveillance Online Tracking Parameter Adaptation based on Evaluation Duc Phu Chau Julien Badie François Brémond Monique Thonnat

More information

Chapter 9 Object Tracking an Overview

Chapter 9 Object Tracking an Overview Chapter 9 Object Tracking an Overview The output of the background subtraction algorithm, described in the previous chapter, is a classification (segmentation) of pixels into foreground pixels (those belonging

More information

Color Image Segmentation

Color Image Segmentation Color Image Segmentation Yining Deng, B. S. Manjunath and Hyundoo Shin* Department of Electrical and Computer Engineering University of California, Santa Barbara, CA 93106-9560 *Samsung Electronics Inc.

More information

Texture Features in Facial Image Analysis

Texture Features in Facial Image Analysis Texture Features in Facial Image Analysis Matti Pietikäinen and Abdenour Hadid Machine Vision Group Infotech Oulu and Department of Electrical and Information Engineering P.O. Box 4500, FI-90014 University

More information

MULTI ORIENTATION PERFORMANCE OF FEATURE EXTRACTION FOR HUMAN HEAD RECOGNITION

MULTI ORIENTATION PERFORMANCE OF FEATURE EXTRACTION FOR HUMAN HEAD RECOGNITION MULTI ORIENTATION PERFORMANCE OF FEATURE EXTRACTION FOR HUMAN HEAD RECOGNITION Panca Mudjirahardjo, Rahmadwati, Nanang Sulistiyanto and R. Arief Setyawan Department of Electrical Engineering, Faculty of

More information

Automatic Parameter Adaptation for Multi-Object Tracking

Automatic Parameter Adaptation for Multi-Object Tracking Automatic Parameter Adaptation for Multi-Object Tracking Duc Phu CHAU, Monique THONNAT, and François BREMOND {Duc-Phu.Chau, Monique.Thonnat, Francois.Bremond}@inria.fr STARS team, INRIA Sophia Antipolis,

More information

MULTIVIEW REPRESENTATION OF 3D OBJECTS OF A SCENE USING VIDEO SEQUENCES

MULTIVIEW REPRESENTATION OF 3D OBJECTS OF A SCENE USING VIDEO SEQUENCES MULTIVIEW REPRESENTATION OF 3D OBJECTS OF A SCENE USING VIDEO SEQUENCES Mehran Yazdi and André Zaccarin CVSL, Dept. of Electrical and Computer Engineering, Laval University Ste-Foy, Québec GK 7P4, Canada

More information

Detection and recognition of moving objects using statistical motion detection and Fourier descriptors

Detection and recognition of moving objects using statistical motion detection and Fourier descriptors Detection and recognition of moving objects using statistical motion detection and Fourier descriptors Daniel Toth and Til Aach Institute for Signal Processing, University of Luebeck, Germany toth@isip.uni-luebeck.de

More information

Face Detection and Recognition in an Image Sequence using Eigenedginess

Face Detection and Recognition in an Image Sequence using Eigenedginess Face Detection and Recognition in an Image Sequence using Eigenedginess B S Venkatesh, S Palanivel and B Yegnanarayana Department of Computer Science and Engineering. Indian Institute of Technology, Madras

More information

Short Survey on Static Hand Gesture Recognition

Short Survey on Static Hand Gesture Recognition Short Survey on Static Hand Gesture Recognition Huu-Hung Huynh University of Science and Technology The University of Danang, Vietnam Duc-Hoang Vo University of Science and Technology The University of

More information

CS 534: Computer Vision Segmentation and Perceptual Grouping

CS 534: Computer Vision Segmentation and Perceptual Grouping CS 534: Computer Vision Segmentation and Perceptual Grouping Spring 2005 Ahmed Elgammal Dept of Computer Science CS 534 Segmentation - 1 Where are we? Image Formation Human vision Cameras Geometric Camera

More information

Evaluation of Moving Object Tracking Techniques for Video Surveillance Applications

Evaluation of Moving Object Tracking Techniques for Video Surveillance Applications International Journal of Current Engineering and Technology E-ISSN 2277 4106, P-ISSN 2347 5161 2015INPRESSCO, All Rights Reserved Available at http://inpressco.com/category/ijcet Research Article Evaluation

More information

Automatic Shadow Removal by Illuminance in HSV Color Space

Automatic Shadow Removal by Illuminance in HSV Color Space Computer Science and Information Technology 3(3): 70-75, 2015 DOI: 10.13189/csit.2015.030303 http://www.hrpub.org Automatic Shadow Removal by Illuminance in HSV Color Space Wenbo Huang 1, KyoungYeon Kim

More information

Video Surveillance System for Object Detection and Tracking Methods R.Aarthi, K.Kiruthikadevi

Video Surveillance System for Object Detection and Tracking Methods R.Aarthi, K.Kiruthikadevi IJISET - International Journal of Innovative Science, Engineering & Technology, Vol. 2 Issue 11, November 2015. Video Surveillance System for Object Detection and Tracking Methods R.Aarthi, K.Kiruthikadevi

More information

Elliptical Head Tracker using Intensity Gradients and Texture Histograms

Elliptical Head Tracker using Intensity Gradients and Texture Histograms Elliptical Head Tracker using Intensity Gradients and Texture Histograms Sriram Rangarajan, Dept. of Electrical and Computer Engineering, Clemson University, Clemson, SC 29634 srangar@clemson.edu December

More information

An Adaptive Threshold LBP Algorithm for Face Recognition

An Adaptive Threshold LBP Algorithm for Face Recognition An Adaptive Threshold LBP Algorithm for Face Recognition Xiaoping Jiang 1, Chuyu Guo 1,*, Hua Zhang 1, and Chenghua Li 1 1 College of Electronics and Information Engineering, Hubei Key Laboratory of Intelligent

More information

Color Local Texture Features Based Face Recognition

Color Local Texture Features Based Face Recognition Color Local Texture Features Based Face Recognition Priyanka V. Bankar Department of Electronics and Communication Engineering SKN Sinhgad College of Engineering, Korti, Pandharpur, Maharashtra, India

More information

Car tracking in tunnels

Car tracking in tunnels Czech Pattern Recognition Workshop 2000, Tomáš Svoboda (Ed.) Peršlák, Czech Republic, February 2 4, 2000 Czech Pattern Recognition Society Car tracking in tunnels Roman Pflugfelder and Horst Bischof Pattern

More information

Tracking and Recognizing People in Colour using the Earth Mover s Distance

Tracking and Recognizing People in Colour using the Earth Mover s Distance Tracking and Recognizing People in Colour using the Earth Mover s Distance DANIEL WOJTASZEK, ROBERT LAGANIÈRE S.I.T.E. University of Ottawa, Ottawa, Ontario, Canada K1N 6N5 danielw@site.uottawa.ca, laganier@site.uottawa.ca

More information

SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS

SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS Cognitive Robotics Original: David G. Lowe, 004 Summary: Coen van Leeuwen, s1460919 Abstract: This article presents a method to extract

More information

TRACKING OF MULTIPLE SOCCER PLAYERS USING A 3D PARTICLE FILTER BASED ON DETECTOR CONFIDENCE

TRACKING OF MULTIPLE SOCCER PLAYERS USING A 3D PARTICLE FILTER BASED ON DETECTOR CONFIDENCE Advances in Computer Science and Engineering Volume 6, Number 1, 2011, Pages 93-104 Published Online: February 22, 2011 This paper is available online at http://pphmj.com/journals/acse.htm 2011 Pushpa

More information

A FRAMEWORK FOR ANALYZING TEXTURE DESCRIPTORS

A FRAMEWORK FOR ANALYZING TEXTURE DESCRIPTORS A FRAMEWORK FOR ANALYZING TEXTURE DESCRIPTORS Timo Ahonen and Matti Pietikäinen Machine Vision Group, University of Oulu, PL 4500, FI-90014 Oulun yliopisto, Finland tahonen@ee.oulu.fi, mkp@ee.oulu.fi Keywords:

More information

A Keypoint Descriptor Inspired by Retinal Computation

A Keypoint Descriptor Inspired by Retinal Computation A Keypoint Descriptor Inspired by Retinal Computation Bongsoo Suh, Sungjoon Choi, Han Lee Stanford University {bssuh,sungjoonchoi,hanlee}@stanford.edu Abstract. The main goal of our project is to implement

More information

Efficient Acquisition of Human Existence Priors from Motion Trajectories

Efficient Acquisition of Human Existence Priors from Motion Trajectories Efficient Acquisition of Human Existence Priors from Motion Trajectories Hitoshi Habe Hidehito Nakagawa Masatsugu Kidode Graduate School of Information Science, Nara Institute of Science and Technology

More information

Motion Estimation and Optical Flow Tracking

Motion Estimation and Optical Flow Tracking Image Matching Image Retrieval Object Recognition Motion Estimation and Optical Flow Tracking Example: Mosiacing (Panorama) M. Brown and D. G. Lowe. Recognising Panoramas. ICCV 2003 Example 3D Reconstruction

More information

A New Feature Local Binary Patterns (FLBP) Method

A New Feature Local Binary Patterns (FLBP) Method A New Feature Local Binary Patterns (FLBP) Method Jiayu Gu and Chengjun Liu The Department of Computer Science, New Jersey Institute of Technology, Newark, NJ 07102, USA Abstract - This paper presents

More information

Motion Tracking and Event Understanding in Video Sequences

Motion Tracking and Event Understanding in Video Sequences Motion Tracking and Event Understanding in Video Sequences Isaac Cohen Elaine Kang, Jinman Kang Institute for Robotics and Intelligent Systems University of Southern California Los Angeles, CA Objectives!

More information

Supervised texture detection in images

Supervised texture detection in images Supervised texture detection in images Branislav Mičušík and Allan Hanbury Pattern Recognition and Image Processing Group, Institute of Computer Aided Automation, Vienna University of Technology Favoritenstraße

More information

Ensemble Tracking. Abstract. 1 Introduction. 2 Background

Ensemble Tracking. Abstract. 1 Introduction. 2 Background Ensemble Tracking Shai Avidan Mitsubishi Electric Research Labs 201 Broadway Cambridge, MA 02139 avidan@merl.com Abstract We consider tracking as a binary classification problem, where an ensemble of weak

More information

Text Information Extraction And Analysis From Images Using Digital Image Processing Techniques

Text Information Extraction And Analysis From Images Using Digital Image Processing Techniques Text Information Extraction And Analysis From Images Using Digital Image Processing Techniques Partha Sarathi Giri Department of Electronics and Communication, M.E.M.S, Balasore, Odisha Abstract Text data

More information

Human Upper Body Pose Estimation in Static Images

Human Upper Body Pose Estimation in Static Images 1. Research Team Human Upper Body Pose Estimation in Static Images Project Leader: Graduate Students: Prof. Isaac Cohen, Computer Science Mun Wai Lee 2. Statement of Project Goals This goal of this project

More information

PEOPLE IN SEATS COUNTING VIA SEAT DETECTION FOR MEETING SURVEILLANCE

PEOPLE IN SEATS COUNTING VIA SEAT DETECTION FOR MEETING SURVEILLANCE PEOPLE IN SEATS COUNTING VIA SEAT DETECTION FOR MEETING SURVEILLANCE Hongyu Liang, Jinchen Wu, and Kaiqi Huang National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Science

More information

CS 664 Segmentation. Daniel Huttenlocher

CS 664 Segmentation. Daniel Huttenlocher CS 664 Segmentation Daniel Huttenlocher Grouping Perceptual Organization Structural relationships between tokens Parallelism, symmetry, alignment Similarity of token properties Often strong psychophysical

More information

Implementation of a Face Recognition System for Interactive TV Control System

Implementation of a Face Recognition System for Interactive TV Control System Implementation of a Face Recognition System for Interactive TV Control System Sang-Heon Lee 1, Myoung-Kyu Sohn 1, Dong-Ju Kim 1, Byungmin Kim 1, Hyunduk Kim 1, and Chul-Ho Won 2 1 Dept. IT convergence,

More information

ROBUST OBJECT TRACKING BY SIMULTANEOUS GENERATION OF AN OBJECT MODEL

ROBUST OBJECT TRACKING BY SIMULTANEOUS GENERATION OF AN OBJECT MODEL ROBUST OBJECT TRACKING BY SIMULTANEOUS GENERATION OF AN OBJECT MODEL Maria Sagrebin, Daniel Caparròs Lorca, Daniel Stroh, Josef Pauli Fakultät für Ingenieurwissenschaften Abteilung für Informatik und Angewandte

More information

Human Detection and Motion Tracking

Human Detection and Motion Tracking Human Detection and Motion Tracking Technical report - FI - VG20102015006-2011 04 Ing. Ibrahim Nahhas Ing. Filip Orság, Ph.D. Faculty of Information Technology, Brno University of Technology December 9,

More information

Spatio-Temporal Nonparametric Background Modeling and Subtraction

Spatio-Temporal Nonparametric Background Modeling and Subtraction Spatio-Temporal onparametric Background Modeling and Subtraction Raviteja Vemulapalli R. Aravind Department of Electrical Engineering Indian Institute of Technology, Madras, India. Abstract Background

More information

Texture Sensitive Image Inpainting after Object Morphing

Texture Sensitive Image Inpainting after Object Morphing Texture Sensitive Image Inpainting after Object Morphing Yin Chieh Liu and Yi-Leh Wu Department of Computer Science and Information Engineering National Taiwan University of Science and Technology, Taiwan

More information

2 Proposed Methodology

2 Proposed Methodology 3rd International Conference on Multimedia Technology(ICMT 2013) Object Detection in Image with Complex Background Dong Li, Yali Li, Fei He, Shengjin Wang 1 State Key Laboratory of Intelligent Technology

More information

Idle Object Detection in Video for Banking ATM Applications

Idle Object Detection in Video for Banking ATM Applications Research Journal of Applied Sciences, Engineering and Technology 4(24): 5350-5356, 2012 ISSN: 2040-7467 Maxwell Scientific Organization, 2012 Submitted: March 18, 2012 Accepted: April 06, 2012 Published:

More information

Texture. Frequency Descriptors. Frequency Descriptors. Frequency Descriptors. Frequency Descriptors. Frequency Descriptors

Texture. Frequency Descriptors. Frequency Descriptors. Frequency Descriptors. Frequency Descriptors. Frequency Descriptors Texture The most fundamental question is: How can we measure texture, i.e., how can we quantitatively distinguish between different textures? Of course it is not enough to look at the intensity of individual

More information

Interactive Offline Tracking for Color Objects

Interactive Offline Tracking for Color Objects Interactive Offline Tracking for Color Objects Yichen Wei Jian Sun Xiaoou Tang Heung-Yeung Shum Microsoft Research Asia, Beijing, P.R. China {yichenw,jiansun,xitang,hshum}@microsoft.com Abstract In this

More information

The goals of segmentation

The goals of segmentation Image segmentation The goals of segmentation Group together similar-looking pixels for efficiency of further processing Bottom-up process Unsupervised superpixels X. Ren and J. Malik. Learning a classification

More information

An Introduction to Content Based Image Retrieval

An Introduction to Content Based Image Retrieval CHAPTER -1 An Introduction to Content Based Image Retrieval 1.1 Introduction With the advancement in internet and multimedia technologies, a huge amount of multimedia data in the form of audio, video and

More information

Spatio-Temporal LBP based Moving Object Segmentation in Compressed Domain

Spatio-Temporal LBP based Moving Object Segmentation in Compressed Domain 2012 IEEE Ninth International Conference on Advanced Video and Signal-Based Surveillance Spatio-Temporal LBP based Moving Object Segmentation in Compressed Domain Jianwei Yang 1, Shizheng Wang 2, Zhen

More information

Graph Matching Iris Image Blocks with Local Binary Pattern

Graph Matching Iris Image Blocks with Local Binary Pattern Graph Matching Iris Image Blocs with Local Binary Pattern Zhenan Sun, Tieniu Tan, and Xianchao Qiu Center for Biometrics and Security Research, National Laboratory of Pattern Recognition, Institute of

More information

Multi-Person Tracking-by-Detection based on Calibrated Multi-Camera Systems

Multi-Person Tracking-by-Detection based on Calibrated Multi-Camera Systems Multi-Person Tracking-by-Detection based on Calibrated Multi-Camera Systems Xiaoyan Jiang, Erik Rodner, and Joachim Denzler Computer Vision Group Jena Friedrich Schiller University of Jena {xiaoyan.jiang,erik.rodner,joachim.denzler}@uni-jena.de

More information

Real-Time Human Detection, Tracking, and Verification in Uncontrolled Camera Motion Environments

Real-Time Human Detection, Tracking, and Verification in Uncontrolled Camera Motion Environments Real-Time Human Detection, Tracking, and Verification in Uncontrolled Camera Motion Environments Mohamed Hussein Wael Abd-Almageed Yang Ran Larry Davis Institute for Advanced Computer Studies University

More information

Human Detection. A state-of-the-art survey. Mohammad Dorgham. University of Hamburg

Human Detection. A state-of-the-art survey. Mohammad Dorgham. University of Hamburg Human Detection A state-of-the-art survey Mohammad Dorgham University of Hamburg Presentation outline Motivation Applications Overview of approaches (categorized) Approaches details References Motivation

More information

arxiv: v3 [cs.cv] 3 Oct 2012

arxiv: v3 [cs.cv] 3 Oct 2012 Combined Descriptors in Spatial Pyramid Domain for Image Classification Junlin Hu and Ping Guo arxiv:1210.0386v3 [cs.cv] 3 Oct 2012 Image Processing and Pattern Recognition Laboratory Beijing Normal University,

More information

Object Detection in Video Streams

Object Detection in Video Streams Object Detection in Video Streams Sandhya S Deore* *Assistant Professor Dept. of Computer Engg., SRES COE Kopargaon *sandhya.deore@gmail.com ABSTRACT Object Detection is the most challenging area in video

More information

The SIFT (Scale Invariant Feature

The SIFT (Scale Invariant Feature The SIFT (Scale Invariant Feature Transform) Detector and Descriptor developed by David Lowe University of British Columbia Initial paper ICCV 1999 Newer journal paper IJCV 2004 Review: Matt Brown s Canonical

More information

Motion Detection Algorithm

Motion Detection Algorithm Volume 1, No. 12, February 2013 ISSN 2278-1080 The International Journal of Computer Science & Applications (TIJCSA) RESEARCH PAPER Available Online at http://www.journalofcomputerscience.com/ Motion Detection

More information

Extracting Spatio-temporal Local Features Considering Consecutiveness of Motions

Extracting Spatio-temporal Local Features Considering Consecutiveness of Motions Extracting Spatio-temporal Local Features Considering Consecutiveness of Motions Akitsugu Noguchi and Keiji Yanai Department of Computer Science, The University of Electro-Communications, 1-5-1 Chofugaoka,

More information

BRIEF Features for Texture Segmentation

BRIEF Features for Texture Segmentation BRIEF Features for Texture Segmentation Suraya Mohammad 1, Tim Morris 2 1 Communication Technology Section, Universiti Kuala Lumpur - British Malaysian Institute, Gombak, Selangor, Malaysia 2 School of

More information

Visual Tracking. Image Processing Laboratory Dipartimento di Matematica e Informatica Università degli studi di Catania.

Visual Tracking. Image Processing Laboratory Dipartimento di Matematica e Informatica Università degli studi di Catania. Image Processing Laboratory Dipartimento di Matematica e Informatica Università degli studi di Catania 1 What is visual tracking? estimation of the target location over time 2 applications Six main areas:

More information

Capturing People in Surveillance Video

Capturing People in Surveillance Video Capturing People in Surveillance Video Rogerio Feris, Ying-Li Tian, and Arun Hampapur IBM T.J. Watson Research Center PO BOX 704, Yorktown Heights, NY 10598 {rsferis,yltian,arunh}@us.ibm.com Abstract This

More information

Weighted Multi-scale Local Binary Pattern Histograms for Face Recognition

Weighted Multi-scale Local Binary Pattern Histograms for Face Recognition Weighted Multi-scale Local Binary Pattern Histograms for Face Recognition Olegs Nikisins Institute of Electronics and Computer Science 14 Dzerbenes Str., Riga, LV1006, Latvia Email: Olegs.Nikisins@edi.lv

More information

An Adaptive Background Model for Camshift Tracking with a Moving Camera. convergence.

An Adaptive Background Model for Camshift Tracking with a Moving Camera. convergence. 261 An Adaptive Background Model for Camshift Tracking with a Moving Camera R. Stolkin,I.Florescu,G.Kamberov Center for Maritime Systems, Dept. of Mathematical Sciences, Dept. of Computer Science Stevens

More information

EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm

EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm Group 1: Mina A. Makar Stanford University mamakar@stanford.edu Abstract In this report, we investigate the application of the Scale-Invariant

More information

Digital Image Processing

Digital Image Processing Digital Image Processing Part 9: Representation and Description AASS Learning Systems Lab, Dep. Teknik Room T1209 (Fr, 11-12 o'clock) achim.lilienthal@oru.se Course Book Chapter 11 2011-05-17 Contents

More information

Introduction to behavior-recognition and object tracking

Introduction to behavior-recognition and object tracking Introduction to behavior-recognition and object tracking Xuan Mo ipal Group Meeting April 22, 2011 Outline Motivation of Behavior-recognition Four general groups of behaviors Core technologies Future direction

More information

Automatic Tracking of Moving Objects in Video for Surveillance Applications

Automatic Tracking of Moving Objects in Video for Surveillance Applications Automatic Tracking of Moving Objects in Video for Surveillance Applications Manjunath Narayana Committee: Dr. Donna Haverkamp (Chair) Dr. Arvin Agah Dr. James Miller Department of Electrical Engineering

More information

An ICA based Approach for Complex Color Scene Text Binarization

An ICA based Approach for Complex Color Scene Text Binarization An ICA based Approach for Complex Color Scene Text Binarization Siddharth Kherada IIIT-Hyderabad, India siddharth.kherada@research.iiit.ac.in Anoop M. Namboodiri IIIT-Hyderabad, India anoop@iiit.ac.in

More information

Implementing the Scale Invariant Feature Transform(SIFT) Method

Implementing the Scale Invariant Feature Transform(SIFT) Method Implementing the Scale Invariant Feature Transform(SIFT) Method YU MENG and Dr. Bernard Tiddeman(supervisor) Department of Computer Science University of St. Andrews yumeng@dcs.st-and.ac.uk Abstract The

More information

Local Feature Detectors

Local Feature Detectors Local Feature Detectors Selim Aksoy Department of Computer Engineering Bilkent University saksoy@cs.bilkent.edu.tr Slides adapted from Cordelia Schmid and David Lowe, CVPR 2003 Tutorial, Matthew Brown,

More information

Moving Object Segmentation Method Based on Motion Information Classification by X-means and Spatial Region Segmentation

Moving Object Segmentation Method Based on Motion Information Classification by X-means and Spatial Region Segmentation IJCSNS International Journal of Computer Science and Network Security, VOL.13 No.11, November 2013 1 Moving Object Segmentation Method Based on Motion Information Classification by X-means and Spatial

More information

Detecting Printed and Handwritten Partial Copies of Line Drawings Embedded in Complex Backgrounds

Detecting Printed and Handwritten Partial Copies of Line Drawings Embedded in Complex Backgrounds 9 1th International Conference on Document Analysis and Recognition Detecting Printed and Handwritten Partial Copies of Line Drawings Embedded in Complex Backgrounds Weihan Sun, Koichi Kise Graduate School

More information

A Modified Mean Shift Algorithm for Visual Object Tracking

A Modified Mean Shift Algorithm for Visual Object Tracking A Modified Mean Shift Algorithm for Visual Object Tracking Shu-Wei Chou 1, Chaur-Heh Hsieh 2, Bor-Jiunn Hwang 3, Hown-Wen Chen 4 Department of Computer and Communication Engineering, Ming-Chuan University,

More information

A Texture-Based Method for Modeling the Background and Detecting Moving Objects

A Texture-Based Method for Modeling the Background and Detecting Moving Objects IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 28, NO. 4, APRIL 2006 657 A Texture-Based Method for Modeling the Background and Detecting Moving Objects Marko Heikkilä and Matti Pietikäinen,

More information

Deep Tracking: Biologically Inspired Tracking with Deep Convolutional Networks

Deep Tracking: Biologically Inspired Tracking with Deep Convolutional Networks Deep Tracking: Biologically Inspired Tracking with Deep Convolutional Networks Si Chen The George Washington University sichen@gwmail.gwu.edu Meera Hahn Emory University mhahn7@emory.edu Mentor: Afshin

More information

Detecting Salient Contours Using Orientation Energy Distribution. Part I: Thresholding Based on. Response Distribution

Detecting Salient Contours Using Orientation Energy Distribution. Part I: Thresholding Based on. Response Distribution Detecting Salient Contours Using Orientation Energy Distribution The Problem: How Does the Visual System Detect Salient Contours? CPSC 636 Slide12, Spring 212 Yoonsuck Choe Co-work with S. Sarma and H.-C.

More information

A Survey on Moving Object Detection and Tracking in Video Surveillance System

A Survey on Moving Object Detection and Tracking in Video Surveillance System International Journal of Soft Computing and Engineering (IJSCE) A Survey on Moving Object Detection and Tracking in Video Surveillance System Kinjal A Joshi, Darshak G. Thakore Abstract This paper presents

More information

BSFD: BACKGROUND SUBTRACTION FRAME DIFFERENCE ALGORITHM FOR MOVING OBJECT DETECTION AND EXTRACTION

BSFD: BACKGROUND SUBTRACTION FRAME DIFFERENCE ALGORITHM FOR MOVING OBJECT DETECTION AND EXTRACTION BSFD: BACKGROUND SUBTRACTION FRAME DIFFERENCE ALGORITHM FOR MOVING OBJECT DETECTION AND EXTRACTION 1 D STALIN ALEX, 2 Dr. AMITABH WAHI 1 Research Scholer, Department of Computer Science and Engineering,Anna

More information

Short Run length Descriptor for Image Retrieval

Short Run length Descriptor for Image Retrieval CHAPTER -6 Short Run length Descriptor for Image Retrieval 6.1 Introduction In the recent years, growth of multimedia information from various sources has increased many folds. This has created the demand

More information

Multi-Channel Adaptive Mixture Background Model for Real-time Tracking

Multi-Channel Adaptive Mixture Background Model for Real-time Tracking Journal of Information Hiding and Multimedia Signal Processing c 2016 ISSN 2073-4212 Ubiquitous International Volume 7, Number 1, January 2016 Multi-Channel Adaptive Mixture Background Model for Real-time

More information

HUMAN S FACIAL PARTS EXTRACTION TO RECOGNIZE FACIAL EXPRESSION

HUMAN S FACIAL PARTS EXTRACTION TO RECOGNIZE FACIAL EXPRESSION HUMAN S FACIAL PARTS EXTRACTION TO RECOGNIZE FACIAL EXPRESSION Dipankar Das Department of Information and Communication Engineering, University of Rajshahi, Rajshahi-6205, Bangladesh ABSTRACT Real-time

More information

Classification of objects from Video Data (Group 30)

Classification of objects from Video Data (Group 30) Classification of objects from Video Data (Group 30) Sheallika Singh 12665 Vibhuti Mahajan 12792 Aahitagni Mukherjee 12001 M Arvind 12385 1 Motivation Video surveillance has been employed for a long time

More information

Color-Texture Segmentation of Medical Images Based on Local Contrast Information

Color-Texture Segmentation of Medical Images Based on Local Contrast Information Color-Texture Segmentation of Medical Images Based on Local Contrast Information Yu-Chou Chang Department of ECEn, Brigham Young University, Provo, Utah, 84602 USA ycchang@et.byu.edu Dah-Jye Lee Department

More information

Content Based Image Retrieval Using Color Quantizes, EDBTC and LBP Features

Content Based Image Retrieval Using Color Quantizes, EDBTC and LBP Features Content Based Image Retrieval Using Color Quantizes, EDBTC and LBP Features 1 Kum Sharanamma, 2 Krishnapriya Sharma 1,2 SIR MVIT Abstract- To describe the image features the Local binary pattern (LBP)

More information

Clustering Based Non-parametric Model for Shadow Detection in Video Sequences

Clustering Based Non-parametric Model for Shadow Detection in Video Sequences Clustering Based Non-parametric Model for Shadow Detection in Video Sequences Ehsan Adeli Mosabbeb 1, Houman Abbasian 2, Mahmood Fathy 1 1 Iran University of Science and Technology, Tehran, Iran 2 University

More information