Pattern Recognition Letters 30 (2009)

Robust human tracking based on multi-cue integration and mean-shift

Hong Liu *, Ze Yu, Hongbin Zha, Yuexian Zou, Lin Zhang

National Lab on Machine Perception, Shenzhen Graduate School, Peking University, Beijing, PR China

* Corresponding author. E-mail address: hongliu@pku.edu.cn (H. Liu).

Article history: Available online 1 November 2008
Keywords: Mean-Shift; Multi-cue tracking; Adaptive integration

Abstract: Multi-cue integration has been researched extensively for robust visual tracking. Researchers typically combine multiple cues within probabilistic methods such as Particle Filtering and Condensation. On the other hand, color-based Mean-Shift has proved to be an effective and fast algorithm for tracking color blobs. However, this deterministic searching method suffers from objects with low saturation color, color clutter in the background, and complete occlusion lasting several frames. This paper integrates multiple cues into the Mean-Shift algorithm to extend the application areas of this fast and robust deterministic searching method. A direct multi-cue integration method with an occlusion handler is proposed to solve the common problems of color-based deterministic methods. Moreover, motivated by the idea of tuning the weight of each cue adaptively to overcome the rigidity of the direct integration method, an adaptive multi-cue integration based Mean-Shift framework is proposed, and a novel quality function is introduced to evaluate the reliability of each cue. With the adaptive integration method, the problem of changing appearance caused by object rotation can be solved. Extensive experiments show that the method adapts the weight of each cue efficiently: when the tracked color blob becomes invisible because the human body rotates, the color cue is compensated by the motion cue, and when the color blob becomes visible again, the color cue becomes dominant once more. Furthermore, the direct-cue-integration method with an occlusion handler is combined with the adaptive integration method to extend the adaptive method to full occlusion cases. © 2008 Published by Elsevier B.V.

1. Introduction

Tracking objects in complex environments is a challenging task in the intelligent surveillance field (Haritaoglu et al., 2000; Wren and Pentland, 1997). A good tracking algorithm should work well in various difficult situations, such as varying illumination, background clutter, and occlusion. There are two technical trends in the computer vision tracking community: one is to develop more inherently robust algorithms, and the other is to employ multiple cues to enhance tracking robustness. To increase the robustness and generality of tracking, various image features must be employed. Every single cue has its own advantages and disadvantages (Tao et al., 2000; Hayman and Eklundh, 2002). For example, the shape cue is suitable for tracking rigid objects which seldom change their shapes in video sequences, like human heads; however, shape-based methods perform poorly when the background contains rich texture and edges. The color feature is widely used in tracking (Vermaak et al., 2002) because it is easy to extract and robust to partial occlusion; unfortunately, it is vulnerable to sudden lighting changes and to backgrounds with similar colors.
As a result, using a single cue for tracking is insufficient because of the complexity and time-varying properties of environments. Various complementary features can be combined to obtain more robust tracking results, and it is our interest to employ multiple cues under a robust tracking framework. The tracking problem can be viewed as a state estimation problem of a dynamic system. From this point of view, algorithms can be divided into two categories. The first category is the probabilistic methods, which view tracking as a dynamic state estimation problem under the Bayesian framework, where the system model and the measurement model bring in uncertainty (Sherrah and Gong, 2001; Toyama and Horvitz, 2000). Representative methods are the Kalman Filter and its derivatives, and multi-hypothesis tracking algorithms such as Condensation (Isard and Blake, 1998), Particle Filtering (Arulampalam et al., 2002; Nummiaro et al., 2002), and Monte Carlo tracking (Perez et al., 2002). The second category is the deterministic methods, which compare a model with the current frame and find the most promising region; Mean-Shift (Bradski, 1998; Comaniciu et al., 2000, 2003) and Trust Region (Liu and Chen, 2004) are two typical examples. The deterministic methods find it hard to handle complete occlusion, since each iteration is initialized from the previous tracking result: if the tracked object is lost or completely occluded, deterministic searching fails. However, they are usually more accurate than the probabilistic multi-hypothesis tracking algorithms. Mean-Shift is a non-parametric method that climbs the density gradient to find the peak of a distribution, and it belongs to the deterministic category.

Generally, Mean-Shift converges fast and is robust to small distractors in the distribution. It was first applied to color tracking by Bradski (1998) and by Comaniciu et al. (2000). The well-known Continuously Adaptive Mean-Shift (CAMSHIFT) was developed by using color histograms to model the object's color; Bradski and Comaniciu adopted different methods for calculating the color distribution and the kernel scale. Recent work on Mean-Shift algorithms (Liu and Chen, 2004; Collins, 2003; Zivkovic and Krose, 2004) has mainly focused on the window scale problem. How to handle the problems caused by background color clutter and by complete occlusion has not been addressed in the related literature. In this paper, integrating the motion cue with the color cue is proposed to solve these problems.

Many researchers focus on establishing a multi-cue-integration mechanism under the probabilistic framework, including the Dynamic Bayesian Network (Wang et al., 2004), the Monte Carlo method (Wu and Huang, 2001), and Particle Filters (Spengler and Schiele, 2003). In these methods, multiple cues are tightly coupled with the tracking model and the Bayesian tracking algorithm, which makes them difficult to use in deterministic tracking methods. Another kind of multi-cue integration is the pixel-wise integration method, in which tracking is considered a pixel classification problem: whether a pixel belongs to the foreground or the background is determined by all the cues. Every cue produces a saliency map, and these maps are combined according to a certain principle. One representative method is the adaptive democratic integration proposed by Triesch and Malsburg (2000), in which each cue votes for the final combined saliency map and the voting-like integration scheme is adaptive. Spengler and Schiele (2003) use this adaptive integration method to integrate cues in human face tracking. This pixel-wise integration method is well suited to deterministic tracking methods.

Up to now, most literature on deterministic searching methods employs only a single color probability distribution, which makes the tracking results vulnerable to complex conditions such as similarly colored backgrounds and low-saturation objects. We try to solve these common problems of color-based deterministic approaches by using multi-cue integration, similar in spirit to its use in figure-ground segmentation. The concept of multi-cue in our method means feature combination for the same object: multiple feature cues are used together to detect the object automatically. In Mean-Shift tracking, the color cue is easy to compute; however, it may include similarly colored background areas that distract the tracker, and when the tracked color has low saturation the color blob is soon lost because of heavy noise. On the other hand, the motion cue obtained from background subtraction contains all moving objects, some of which are not tracking targets, and motion detection usually cannot produce a complete and clean silhouette of the moving objects. Combining the motion cue with the color cue eliminates uninteresting regions as much as possible in both cue maps. This motivates us to develop a cue-integration method that integrates the motion cue with the color cue.
In summary, there are four reasons to integrate both cues under the Mean-Shift framework. Firstly, integrating the motion cue and the color cue can eliminate noise and uninteresting areas in both cues. Secondly, motion detection results can be regarded as a motion probability distribution map and integrated with the color distribution naturally. Thirdly, since Mean-Shift is robust to small distractors, a preliminary motion detection algorithm suffices, which reduces the computational complexity. Lastly, Mean-Shift is a fast mode-seeking algorithm, which saves computational resources for the cue-integration methods.

Deterministic algorithms are vulnerable to full occlusion lasting a few frames because the present iteration is initialized from the previous one. Once the tracked object is lost, deterministic methods normally cannot recover when the object reappears. Based on the color-motion integration mechanism, an occlusion handler is introduced, which detects full occlusion cases and reinitializes the tracking window automatically when the object reappears.

Our work can be summarized as follows. Firstly, the multi-cue integration technique is brought into the framework of deterministic searching methods to improve tracking robustness: Mean-Shift is a fast and robust tracking algorithm, inherently well suited to real-time systems, and the cue integration enhances tracking robustness under various conditions. Secondly, based on the motion-color integration, an occlusion handler is employed to tackle the full occlusion problem in the deterministic Mean-Shift; experiments show that it handles occlusion reasonably well. Thirdly, we apply the Mean-Shift algorithm to the adaptive cue-integration method and propose a more robust quality function to evaluate cue reliability; with this cue-evaluation mechanism, the method overcomes the rigidity of direct integration. To the best of our knowledge, this is the first work to employ the adaptive integration mechanism under the Mean-Shift framework with a new quality function suitable for evaluating cue reliability. In principle, when the color cue is reliable and visible, it receives a higher weight in the combined probability distribution; otherwise, it is compensated by the motion cue. Lastly, the direct-cue-integration method with the occlusion handler is integrated with the adaptive cue integration to extend the application areas of the adaptive cue-integration method.

The rest of this paper is organized as follows: Section 2 presents the direct color-motion cue-integration method incorporating the occlusion handler. Section 3 illustrates the strategy of adaptive multi-cue integration; in the same section the adaptive integration method is extended to full occlusion cases by integrating the direct-cue-integration based occlusion handler. Experimental results and conclusions are given in Sections 4 and 5, respectively.

2. Integrating multiple cues

2.1. Deficiency of color-based Mean-Shift

To use Mean-Shift iterations, a probabilistic distribution map indicating the object being tracked must first be calculated. A color probabilistic map is calculated by histogram back-projection: first, the color histogram of the object's color is calculated and stored in a look-up table; when a new frame comes in, the table is looked up for each pixel's color and a probability value is assigned to each pixel. Hence, a probabilistic distribution map is obtained, and the Mean-Shift procedure can then be employed to find the nearby dominant distribution peak.
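As a rough illustration of the back-projection step just described, the following NumPy sketch builds a 1D hue histogram from the object region and back-projects it onto a new frame; the bin count and hue range are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def hue_histogram(hue_roi, n_bins=16):
    """Build a normalized 1D hue histogram (the look-up table) from the object region."""
    hist, _ = np.histogram(hue_roi, bins=n_bins, range=(0.0, 360.0))
    return hist / max(hist.max(), 1)              # scale so values lie in [0, 1]

def back_project(hue_image, hist, n_bins=16):
    """Assign every pixel the probability stored in its hue bin: the color PDM p_c(x_i, t)."""
    bin_idx = np.clip((hue_image / 360.0 * n_bins).astype(int), 0, n_bins - 1)
    return hist[bin_idx]

# usage: hue_roi holds the hue values inside the initial tracking window
# pdm = back_project(hue_frame, hue_histogram(hue_roi))
```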
In color-based tracking, in order to gain robustness against illumination variations, RGB video images are generally converted into the HSV color space. A common choice is to build the color model from the hue channel only, which improves the robustness of the color model against lighting changes. However, this choice brings in a new problem: when a pixel's saturation is near zero, the RGB channels have similar values and the hue channel is not well defined or is inaccurate (Swain and Ballard, 1991). Basically, we have

s = \begin{cases} 0, & \text{if } \max = 0 \\ \dfrac{\max - \min}{\max}, & \text{otherwise} \end{cases} \quad (1)

v = \max \quad (2)

h = \begin{cases} \text{undefined}, & \text{if } \max = \min \\ 60\,\dfrac{g - b}{\max - \min}, & \text{if } \max = r \text{ and } g \ge b \\ 60\,\dfrac{g - b}{\max - \min} + 360, & \text{if } \max = r \text{ and } g < b \\ 60\,\dfrac{b - r}{\max - \min} + 120, & \text{if } \max = g \\ 60\,\dfrac{r - g}{\max - \min} + 240, & \text{if } \max = b \end{cases} \quad (3)

To illustrate the problem, assume a color vector (R, G, B) satisfying R > G > B > 0. From Eqs. (1)-(3) we obtain

V = R, \quad S = 1 - \dfrac{B}{R}, \quad H = 60\,\dfrac{G - B}{S\,V}. \quad (4)

Suppose only G changes, to G + ΔG, so that H changes to H'. From Eq. (4),

\Delta H = H' - H = \dfrac{60\,\Delta G}{S\,V}. \quad (5)

From Eq. (5) it can be seen that a small change in G causes wild swings in the hue value when S ≈ 0. In this case the hue value cannot represent the original RGB color reliably, which results in inaccuracy and noise in the back-projection image. This is the first deficiency of using a single color cue. Moreover, Mean-Shift is robust to small distractors, but if a distractor is larger than the object's color area, the object may be lost when it moves close to the similarly colored distractor. This is the second deficiency.

Although increasing the color model's dimensionality and the number of bins can yield a cleaner color back-projection image, there are three reasons why we do not choose this way. Firstly, in some cases even a 2D HS histogram or a 3D HSV histogram cannot give satisfying results: as shown above, when saturation is low the hue value is corrupted, and this cannot be solved by increasing the dimensionality of the histogram. Secondly, how many color components to use, and into how many bins to discretize each component, is difficult to decide under varying circumstances. Thirdly, computational resources are limited; increasing the color model's dimensionality and the number of bins increases the computational complexity, which should be avoided in real-time tracking applications.
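To see the sensitivity expressed by Eq. (5) numerically, the small sketch below (our illustration, not taken from the paper) perturbs the green channel of a nearly gray pixel and of a saturated pixel, using Python's standard colorsys conversion.

```python
import colorsys

def hue_deg(r, g, b):
    """Hue in degrees for RGB values in [0, 255]."""
    h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0

# nearly gray pixel (low saturation): a tiny change in G swings the hue wildly
print(hue_deg(120, 118, 117), hue_deg(120, 121, 117))   # about 20 deg -> about 75 deg
# saturated pixel: the same change in G barely moves the hue
print(hue_deg(200, 60, 40), hue_deg(200, 63, 40))       # about 7.5 deg -> about 8.6 deg
```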
We therefore seek an alternative way to improve tracking robustness. In color-based Mean-Shift tracking, it should be noticed that the distractors all come from the background. When the camera is static, the background can be assumed fixed, and this prior can be used to eliminate noisy areas in the back-projection image. Therefore, motion information can be employed to overcome the two deficiencies.

Firstly, the motion cue is calculated from a background model. We assume that the intensity value I of each pixel follows a Gaussian distribution,

p(I) = \dfrac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left(-\dfrac{(I - \mu)^2}{2\sigma^2}\right), \quad (6)

which is taken as the background model M_{m,B}. The foreground model M_F is difficult to model explicitly; however, the observation likelihood p_{motion}(Z_i | M_{m,F}) can be calculated through M_B:

p_{motion}(Z_i \mid M_{m,F}) = 1 - \dfrac{1}{\sqrt{2\pi}\,\sigma_i} \exp\!\left(-\dfrac{(I_i - \mu_i)^2}{2\sigma_i^2}\right). \quad (7)

This represents the likelihood that pixel x_i belongs to the foreground M_F using the motion cue only. The background model needs to be updated to deal with illumination changes. The difference image D_i is calculated from the mean values of the background model and the incoming image,

D_i = |I_i - \mu_i|, \quad (8)

and D_i is binarized to obtain a motion mask B_i:

B_i = \begin{cases} 1, & D_i > l\,\sigma_i \\ 0, & D_i \le l\,\sigma_i, \end{cases} \quad (9)

where l is a constant. Then B_i is used to update the background model:

\mu_i(t+1) = \begin{cases} (1-\alpha)\,\mu_i(t) + \alpha\, I_i(t+1), & B_i(t) = 1 \\ \mu_i(t), & B_i(t) = 0, \end{cases} \quad (10)

\sigma_i^2(t+1) = \begin{cases} (1-\alpha)\left(\sigma_i^2(t) + (\mu_i(t+1) - \mu_i(t))^2\right) + \alpha\,(I_i(t+1) - \mu_i(t+1))^2, & B_i(t) = 1 \\ \sigma_i^2(t), & B_i(t) = 0, \end{cases} \quad (11)

where σ_i is the corresponding standard deviation. For each image I_i, let p_m(x_i, t) denote the motion probability of pixel x_i at time t:

p_m(x_i, t) = p_{motion}(Z_i \mid M_{m,F}), \quad (12)

so that p_m(x_i, t) can be viewed as a distribution representing the probability of motion for each pixel.
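A minimal NumPy sketch of a per-pixel running-Gaussian background model in the spirit of Eqs. (6)-(12) is given below. The parameter values (α, l, the initial variance) are placeholders, and the sketch applies the running-average update at pixels classified as background, which is one common convention; the condition can be swapped to follow Eqs. (10)-(11) literally.

```python
import numpy as np

class GaussianBackground:
    """Per-pixel Gaussian background model yielding the motion PDM p_m(x_i, t)."""
    def __init__(self, first_frame, alpha=0.02, l=2.5, init_var=30.0):
        self.mu = first_frame.astype(np.float64)       # per-pixel mean mu_i
        self.var = np.full_like(self.mu, init_var)     # per-pixel variance sigma_i^2
        self.alpha, self.l = alpha, l

    def update(self, frame):
        frame = frame.astype(np.float64)
        diff = np.abs(frame - self.mu)                 # difference image D_i, Eq. (8)
        moving = diff > self.l * np.sqrt(self.var)     # motion mask, Eq. (9)
        a = self.alpha
        new_mu = (1 - a) * self.mu + a * frame
        new_var = (1 - a) * (self.var + (new_mu - self.mu) ** 2) \
                  + a * (frame - new_mu) ** 2
        # update the model only where no motion is detected (one common convention)
        self.mu = np.where(moving, self.mu, new_mu)
        self.var = np.where(moving, self.var, new_var)
        # motion likelihood 1 - N(I_i | mu_i, sigma_i^2), Eqs. (7) and (12)
        p_bg = np.exp(-(frame - self.mu) ** 2 / (2 * self.var)) \
               / np.sqrt(2 * np.pi * self.var)
        return 1.0 - np.clip(p_bg, 0.0, 1.0)           # motion PDM p_m(x_i, t)
```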

2.2. Direct-cue integration

A probabilistic distribution map (PDM) is a monochromatic image whose pixels p_j(x_i, t) satisfy

p_j(x_i, t) \propto p_j(Z_i \mid M_{j,F}), \quad (13)

where Z_i is the observation at pixel i, M_{j,F} is the foreground model in cue j, and p_j(Z_i | M_{j,F}) is the observation likelihood of pixel i given the foreground model in cue j. The higher a pixel's value in p_j(x_i, t), the more likely pixel i belongs to the foreground target.

The direct-cue-integration method takes the minimum pixel value over the PDMs of the different cues: if a pixel's value in any cue is 0, the corresponding pixel in the combined PDM p(x_i, t) is also 0. Since each pixel is examined by all cues, the direct integration method is very strict: if a pixel has a high probability of belonging to the background according to any single cue, regardless of the other cues, the pixel will have a probability lower than 0.5 in the combined PDM. The integration therefore eliminates scattered noise present in the PDM of a single cue. With c cues, the combined PDM is

p(x_i, t) = \min_j p_j(x_i, t), \quad j = 1, \ldots, c. \quad (14)

The color observation likelihood p_{color}(Z_i | M_{c,F}) is calculated through back-projection: the color model M_{c,F} is the histogram of the object's color, saved as a look-up table, and p_{color}(Z_i | M_{c,F}) is read from this table. Let

p_c(x_i, t) = p_{color}(Z_i \mid M_{c,F}). \quad (15)

Then p_m(x_i, t) is integrated into the original color probability distribution p_c(x_i, t). The color-motion integration based Mean-Shift algorithm is summarized in Table 1.

Table 1. Algorithm of Mean-Shift based on the color-motion integration.
1. Calculate the color PDM: compute the color probabilistic distribution map p_c(x_i, t) through back-projection.
2. Calculate the motion PDM: compute the motion probabilistic distribution map p_m(x_i, t) through motion detection.
3. Cue integration: combine the two maps using formula (14).
4. Initialize the Mean-Shift iteration: choose a search window scale s_0 and an initial location P_0 on the combined distribution map p(x_i, t).
5. Mean-Shift iteration: compute the moments of the region in the search window (P, s), M_{00}(t) = \sum_i p(x_i, t) and M_{01}(t) = \sum_i x_i\, p(x_i, t), calculate the mean location P̂ using formula (16), and set the new window parameters P = P̂, s = k \sqrt{M_{00}}. Repeat step 5 until convergence.

In step 5, k is a constant, and the mean location P̂ is the centroid of the area,

\hat{P} = \dfrac{M_{01}}{M_{00}} = \dfrac{\sum_i x_i\, p(x_i, t)}{\sum_i p(x_i, t)}. \quad (16)

The motion and color cues are employed explicitly; motion continuity is used implicitly, since the iteration is initialized from the tracking result of the last frame. Note that the integration scheme of formula (14) is open, and more cues can be integrated.
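A compact version of the Mean-Shift loop in Table 1 might look as follows; the window is simplified to an axis-aligned square, and the values of k, the convergence tolerance and the iteration cap are illustrative assumptions.

```python
import numpy as np

def mean_shift(pdm, center, size, k=1.5, tol=1.0, max_iter=20):
    """Climb the combined PDM p(x_i, t): moments M00, M01 give the new centroid (Eq. 16)."""
    cy, cx = center
    h, w = size
    for _ in range(max_iter):
        y0, y1 = max(int(cy - h / 2), 0), min(int(cy + h / 2), pdm.shape[0])
        x0, x1 = max(int(cx - w / 2), 0), min(int(cx + w / 2), pdm.shape[1])
        window = pdm[y0:y1, x0:x1]
        m00 = window.sum()
        if m00 <= 0:                                   # empty window: a possible occlusion
            return (cy, cx), (h, w), m00
        ys, xs = np.mgrid[y0:y1, x0:x1]
        new_cy = (ys * window).sum() / m00             # M01 / M00, Eq. (16)
        new_cx = (xs * window).sum() / m00
        shift = np.hypot(new_cy - cy, new_cx - cx)
        cy, cx = new_cy, new_cx
        h = w = k * np.sqrt(m00)                       # window scale s = k * sqrt(M00), step 5
        if shift < tol:
            break
    return (cy, cx), (h, w), m00
```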
Fig. 1. Flow chart of the direct color-motion integration based Mean-Shift with the occlusion handler (emphasized by the dotted box).

2.3. Occlusion handling

The Mean-Shift algorithm is vulnerable to full occlusions lasting a few frames because the present Mean-Shift iteration is initialized from the result of the previous one. If the object is totally occluded for a couple of frames, the tracking window drifts away and the algorithm has no mechanism to continue tracking. Based on the direct color-motion integration mechanism, an occlusion handling approach is helpful: it detects full occlusion cases and reinitializes tracking automatically when the lost object reappears. The direct-cue integration yields a distribution map with little background noise, which makes it possible to search for larger non-zero regions on the distribution map to find the reappeared object; without the direct-cue integration, background noise may cause the occlusion handler to fail. Fig. 1 shows the flow chart of the occlusion handler using direct-cue integration.

The occlusion handler is composed of an occlusion detection part and an occlusion recovery part. If the object's color is fully occluded by other objects, the tracking window shrinks. When the window area or the density of non-zero pixels in the window becomes smaller than preset thresholds, a full occlusion is declared. In that case, larger non-zero regions are searched for in the object's probabilistic distribution map near the place where the object disappeared, and if large regions are found, the largest one is used to initialize the tracking window. A projection-based region segmentation method is used to search for big regions after the full occlusion. To minimize the possibility of misclassifying background clutter as the reappeared object, and to save computational resources, searching is limited to the region near the place where the object disappeared: if the person disappeared at x_d, the person is expected to reappear at a position x satisfying x_d − r < x < x_d + r, where r is an empirical search radius. Fig. 2 shows the principle of finding the reappeared target, and the algorithm is summarized in Table 2.

Fig. 2. Principle of discovering reappearing targets. A region larger than a threshold is searched for in the interval [x_d − r, x_d + r] after the full occlusion. The region found (white rectangular box) is then used to reinitialize the Mean-Shift iterations.

Table 2. Algorithm for finding the lost target. Save the horizontal coordinate of the disappearing point of the target, x_d.
1. Find the left and right boundaries: the part of the combined distribution p(x, t) inside [x_d − r, x_d + r] is projected onto the horizontal axis to find the region's left and right boundaries (l, r).
2. Find the top and bottom boundaries: the part of the combined distribution p(x, t) inside [l, r] is projected onto the vertical axis to find the region's top and bottom boundaries (t, b).
3. Tune the searching range or return: if no large region is found, extend the searching range, r = r + Δ, and go to step 1; otherwise, return the found region (l, r, t, b).

Since Mean-Shift converges fast and is robust to small distractors, the region segmentation is allowed to be coarse; it is therefore suitable to use the fast projection-based region segmentation to find the big region after the full occlusion. With the color-motion integration method and the occlusion handler, we can deal with color background clutter and with full occlusion over a few frames, which is said to be a deficiency of the deterministic methods (Perez et al., 2002). Furthermore, the occlusion handler can handle long complete occlusions or the object's departure from the camera's field of view (FOV) for a couple of frames, which is difficult for multi-hypothesis based probabilistic tracking methods such as the Particle Filter.
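The projection-based search of Table 2 could be sketched as below; the minimum region size, the initial search radius r and its growth step are placeholder values, not the paper's settings.

```python
import numpy as np

def find_reappeared_region(pdm, x_d, r=40, min_width=10, min_height=10, grow=20, max_r=200):
    """Search the combined PDM near the disappearance column x_d for a large non-zero region."""
    while r <= max_r:
        x0, x1 = max(x_d - r, 0), min(x_d + r, pdm.shape[1])
        col_profile = pdm[:, x0:x1].sum(axis=0)             # project onto the horizontal axis
        cols = np.flatnonzero(col_profile > 0)
        if cols.size >= min_width:
            left, right = x0 + cols[0], x0 + cols[-1]        # left / right boundaries (l, r)
            row_profile = pdm[:, left:right + 1].sum(axis=1) # project [l, r] onto the vertical axis
            rows = np.flatnonzero(row_profile > 0)
            if rows.size >= min_height:
                top, bottom = rows[0], rows[-1]              # top / bottom boundaries (t, b)
                return left, right, top, bottom              # reinitialize Mean-Shift here
        r += grow                                            # no large region: extend the range
    return None
```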

3. Mean-Shift adaptive multi-cue integration

Although the direct multi-cue integration enhances the tracking performance of the color-based Mean-Shift algorithm, it may erode the color probabilistic image because of the inevitable holes in the motion detection results. This is a disadvantage of direct integration when an object's color has a sufficiently high saturation component and its color probabilistic map alone is good enough for tracking. In addition, the direct multi-cue-integration method assumes that the contributions of all cues are the same, regardless of their reliabilities. Hence, we employ an adaptive multi-cue-integration technique. Our work differs from the adaptive multi-cue integration suggested in (Spengler and Schiele, 2003; Triesch and Malsburg, 2000) mainly in that we introduce a new quality function which is suitable for blob tracking.

3.1. Adaptive multi-cue integration

Suppose p_j(x_i, t) is the probability distribution map of cue j and p(x_i, t) is the combined probability distribution map. The cues are integrated as a weighted sum of probability distributions,

p(x_i, t) = \sum_j \omega_j(t)\, p_j(x_i, t), \quad (17)

\sum_j \omega_j(t) = 1. \quad (18)

The adaptive integration method changes each cue's weight according to its reliability in the previous frame. Suppose the performance of an individual cue j can be evaluated by a quality function q_j(t). The normalized quality of cue j is

\bar{q}_j(t) = \dfrac{q_j(t)}{\sum_j q_j(t)}. \quad (19)

The relation between the quality and the weight of cue j is defined as

\tau\, \dot{\omega}_j(t) = \bar{q}_j(t) - \omega_j(t). \quad (20)

Formula (20) is used to update the weight of each cue. As the quality \bar{q}_j(t) is normalized, the weight ω_j(t) remains normalized as well. The parameter τ is a time constant controlling the updating speed: if \bar{q}_j(t) > ω_j(t), ω_j(t) tends to increase. Essentially, q_j(t) represents the feedback of the tracking result, so Eq. (20) can be regarded as a running average, and ω_j(t) is adapted according to \bar{q}_j(t), which brings in information about the performance of cue j in the last frame.
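Discretizing Eq. (20) with a frame step of one gives a simple running-average update of the weights; combined with the weighted sum of Eq. (17), a sketch looks like this (the time constant τ used here is an arbitrary example value).

```python
import numpy as np

def update_weights(weights, qualities, tau=5.0):
    """One Euler step of tau * d(omega_j)/dt = qbar_j - omega_j (Eq. 20), with qbar_j from Eq. (19)."""
    q = np.asarray(qualities, dtype=float)
    q = q / max(q.sum(), 1e-12)                    # normalized qualities, Eq. (19)
    w = np.asarray(weights, dtype=float)
    w = w + (q - w) / tau                          # running average toward the quality
    return w / w.sum()                             # keep sum(omega_j) = 1, Eq. (18)

def combine_pdms(pdms, weights):
    """Weighted sum of the per-cue PDMs, Eq. (17)."""
    return sum(w * p for w, p in zip(weights, pdms))

# usage: weights = update_weights(weights, [q_color, q_motion])
#        p = combine_pdms([p_c, p_m], weights)
```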
The remaining work is the choice of an appropriate quality function q_j(t). The quality function can be viewed as feedback from the tracking result X̂(t) = (P, s), where P and s are the estimated center and the scale of the tracking window, respectively. Each cue's weight is adjusted according to the quality of that cue in the last frame. In this paper, the quality function is defined as the ratio between the number of non-zero pixels in the foreground and that in the background of the individual probability distribution map. To eliminate the effect of pixels far from the object, a center-surround approach is used: the background is defined as the area between the tracking box X̂(t) and a larger window X̂'(t) which shares the same center as the tracking window.

In (Spengler and Schiele, 2003; Triesch and Malsburg, 2000), the authors take the maximum point on the PDM as the estimated position of the target. This simplified estimation is not robust for finding the region that the target encompasses: when the target changes its shape and size in the image, it may fail to segment the target out accurately. In addition, if there is more than one maximum point on the combined PDM, how to determine the position of the target is not addressed in those papers. In this paper, Mean-Shift is employed to find the human body in the image, for two reasons. Firstly, the adaptive Mean-Shift tracking algorithm finds not only the position of the human body but also its area in the image. Secondly, Mean-Shift is an iterative method that finds the nearest mode of the distribution, which means it employs the motion continuity cue implicitly; it is therefore not necessary to compute a separate motion continuity cue as in (Spengler and Schiele, 2003; Triesch and Malsburg, 2000), which is computationally expensive.

In (Spengler and Schiele, 2003; Triesch and Malsburg, 2000), the authors use a cue quality function of the form

q_j(t) = \begin{cases} 0, & p_j(\hat{x}, t) \le \bar{p}_j(t) \\ p_j(\hat{x}, t) - \bar{p}_j(t), & p_j(\hat{x}, t) > \bar{p}_j(t), \end{cases}

where \bar{p}_j(t) is the mean value over the PDM of cue j and x̂ is the estimated target position. This quality function depends heavily on the single point x̂, which may be corrupted by noise. To reduce the influence of noise, (Spengler and Schiele, 2003; McKenna et al., 2000) smooth each cue's PDM, at the price of a high computational complexity; to keep the algorithm tractable, the image is subsampled, which sacrifices image resolution. This paper presents a new quality function based on statistics over a region, which is much more robust to noise; in addition, in our approach images are processed at the original resolution and subsampling is not needed.

3.2. Quality function based on region statistics

The sum of the probabilities in a window W is defined as

f(p(x_i), W) = \sum_{x_i \in W} p(x_i), \quad (21)

and, with X̂'(t) a larger window including background pixels, the quality function q_j(t) is defined as

q_j(t) = \dfrac{f(p_j(x_i, t),\, \hat{X}(t))}{f(p_j(x_i, t),\, \hat{X}'(t) \setminus \hat{X}(t))}. \quad (22)

Each cue's reliability is evaluated by this quality function and the weights are adapted accordingly. When an object with low saturation color is tracked, the quality value of the color cue is much lower than that of the motion cue, so the weight of the motion cue increases. When the object changes its color appearance, a purely color-based algorithm fails because the tracked color becomes invisible, but the adaptive color-motion based Mean-Shift can continue tracking the person by using the information from the motion cue.

Sometimes, when the motion cue dominates, the tracking window X̂(t) is expanded by the motion cue, since the area of the color cue is always smaller than that of the motion cue. This may make the tracking window much larger than the object's color area; according to Eq. (22), the color cue may then receive a small weight even when it is reliable, which makes it difficult for the color cue to regain dominance after the motion cue has dominated the tracking. To avoid this, Mean-Shift is also applied to the color cue alone: we compare the tracking window obtained on the color cue with the tracking window obtained on the combined distribution, and choose the smaller window as the final tracking window.
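Eq. (22) amounts to comparing the PDM mass inside the tracking window with the mass in the surrounding ring; a sketch follows, in which the ring margin is an assumed value.

```python
import numpy as np

def region_quality(pdm, box, margin=15):
    """Quality q_j(t): PDM sum inside the tracking window X(t) over the sum in the ring X'(t) around it (Eqs. 21-22)."""
    x0, y0, x1, y1 = box                                   # tracking window X(t)
    X0, Y0 = max(x0 - margin, 0), max(y0 - margin, 0)      # enlarged window X'(t)
    X1, Y1 = min(x1 + margin, pdm.shape[1]), min(y1 + margin, pdm.shape[0])
    inner = pdm[y0:y1, x0:x1].sum()
    outer = pdm[Y0:Y1, X0:X1].sum() - inner                # background ring between X'(t) and X(t)
    return inner / max(outer, 1e-12)
```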

With this improvement, when the reliable color cue becomes visible again, its weight automatically increases: when the object's color reappears, the weight of the color cue rises again provided the color cue is reliable. The weight can thus reveal the orientation of the tracked person relative to the camera.

The adaptive weighted-sum integration differs from the direct integration method. In the direct integration method, if a pixel's value in the motion probabilistic distribution map is zero, its combined probability is set to 0 no matter what its value in the color probabilistic distribution map is; in the weighted-sum integration, the combined probability value is always decided by both the color and the motion probability. Considering the possible detection holes in the cue extraction process, the adaptive weighted-sum integration is more robust to such holes than the direct integration method.

After the combined probability distribution map is obtained, a region detection algorithm must operate on the combined map to locate the object. Spengler and Schiele (2003) use the projections of the combined distribution p(x_i, t) onto the coordinate axes to find the estimated position; a more robust multi-cue algorithm is used in this paper, together with the new quality function. The motion cue is inherently well suited to being integrated into the Mean-Shift framework by the adaptive integration method discussed above. Background subtraction results usually contain holes and patches of noise; the holes can be remedied by the other cues through the weighted-sum integration, and since Mean-Shift is robust to small distractors, noise from the background subtraction has little influence on the tracking results.

The flow chart of the proposed adaptive multi-cue integration based Mean-Shift tracking is illustrated in Fig. 3. Note that the cue performance evaluation forms a feedback loop, which is the key difference from the direct multi-cue integration: each cue's reliability is evaluated in the cue-evaluation phase by the new quality function. Another point worth noting is that Mean-Shift iterations are used in two places, the first applied to the combined PDM and the second to the color PDM. The aim is to favor the color cue when it is reliable: when the color cue is occluded by the target itself and then reappears, the second Mean-Shift iteration helps the tracker focus on the color cue again.

It should be mentioned that the adaptive integration method of Fig. 3 works well in single-person sequences but may fail in sequences in which the tracked person is occluded. We therefore also integrate the direct integration method with the occlusion handler (Section 2) to increase the robustness of the adaptive cue integration in full occlusion cases. The principle of combining the direct integration and the adaptive integration methods is shown in Fig. 4. Direct-cue integration is performed in every frame with the aim of helping the adaptive integration method detect and handle full occlusion cases. When occlusion is detected, the occlusion handler is called to search for the lost target; once the target is found, the found region is used to initialize the adaptive cue-integration based Mean-Shift iterations.

Fig. 4. Integration of the adaptive cue-integration and the direct-cue-integration methods. This extends the adaptive integration to full occlusion cases.
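Putting the pieces together, one frame of the adaptive tracker of Fig. 3, extended with the occlusion check of Fig. 4, could be organized roughly as follows. This is only a sketch: it reuses the helper functions sketched earlier (back_project, GaussianBackground, mean_shift, find_reappeared_region, combine_pdms, update_weights, region_quality), assumes a state dictionary holding the color histogram, background model, window and weights, and the occlusion threshold is a placeholder.

```python
import numpy as np

def track_frame(frame_hue, frame_gray, state, density_thresh=0.05):
    """One frame of the adaptive color-motion Mean-Shift tracker (illustrative sketch)."""
    p_c = back_project(frame_hue, state["hist"])                      # color PDM p_c(x_i, t)
    p_m = state["bg"].update(frame_gray)                              # motion PDM p_m(x_i, t)
    p_direct = np.minimum(p_c, p_m)                                   # direct integration, Eq. (14)

    (cy, cx), (h, w) = state["center"], state["size"]
    y0, x0 = max(int(cy - h / 2), 0), max(int(cx - w / 2), 0)
    window = p_direct[y0:int(cy + h / 2), x0:int(cx + w / 2)]

    # full-occlusion check on the direct-integration PDM (Fig. 4)
    if window.size == 0 or window.mean() < density_thresh:
        region = find_reappeared_region(p_direct, int(cx))
        if region is not None:                                        # reinitialize the tracking window
            left, right, top, bottom = region
            state["center"] = ((top + bottom) / 2, (left + right) / 2)
            state["size"] = (bottom - top + 1, right - left + 1)
        return state

    p = combine_pdms([p_c, p_m], state["weights"])                    # adaptive integration, Eq. (17)
    c1, s1, _ = mean_shift(p, state["center"], state["size"])         # Mean-Shift on the combined PDM
    c2, s2, _ = mean_shift(p_c, state["center"], state["size"])       # Mean-Shift on the color PDM alone
    # keep the smaller window so a reliable color cue can regain dominance
    state["center"], state["size"] = (c2, s2) if s2[0] * s2[1] < s1[0] * s1[1] else (c1, s1)

    (cy, cx), (h, w) = state["center"], state["size"]
    box = (max(int(cx - w / 2), 0), max(int(cy - h / 2), 0),
           min(int(cx + w / 2), p.shape[1]), min(int(cy + h / 2), p.shape[0]))
    qualities = [region_quality(p_c, box), region_quality(p_m, box)]  # Eq. (22) for each cue
    state["weights"] = update_weights(state["weights"], qualities)    # feedback loop of Fig. 3
    return state
```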
Direct integration is thus performed in every frame in addition to the adaptive integration, and the probability density map from the direct-cue integration is used to detect occlusion and to search for the occluded person. Note that the proposed algorithm runs Mean-Shift iterations three times per frame (one Mean-Shift is used in the occlusion handler); fortunately, Mean-Shift is very efficient, and the adaptive cue integration with the occlusion handler runs in real time.

Fig. 3. Flow chart of the adaptive color-motion integration based Mean-Shift tracking algorithm.

4. Experiments

To evaluate the effectiveness of the proposed algorithms, an experimental system was set up. Experiments are carried out on a PC with a 1.8 GHz Pentium 4 CPU and 512 MB of memory. Pixels with saturation lower than 30 and brightness lower than 10 are discarded. The direct color-motion integration method is tested first, then the occlusion handler based on the direct cue integration, and finally the advantages of the adaptive multi-cue integration are demonstrated.

Video sequences are processed without subsampling, in contrast to Spengler and Schiele (2003) and Triesch and Malsburg (2000). S1 and S2 are sequences with low saturation objects. S3 and S4 are sequences in which a color cue with medium saturation is tracked against similar background color clutter. S5 and S6 are multi-person sequences in which a good color cue is tracked but the object is never occluded. S7 to S11 are video sequences with two persons that contain full occlusion cases, and there are three persons in S12 and S13. In total, the video sequence database has over 3000 frames; the sequences are summarized in Table 3. All results are obtained in real time.

Table 3. The video sequences used in our experiments.
Sequence | Humans | Sequence characteristics
S1 (200 frames) | Single | Low saturation color
S2 (200 frames) | Single | Low saturation color
S3 (200 frames) | Single | Background distractor
S4 (200 frames) | Single | Background distractor
S5 (350 frames) | Single | Occlusion by object
S6 (384 frames) | Single | Reliable color
S7 (384 frames) | Single | Reliable color
S8 (110 frames) | 2 | One occlusion
S9 (160 frames) | 2 | One occlusion
S10 (200 frames) | 2 | One occlusion
S11 (255 frames) | 2 | Two occlusions
S12 (350 frames) | More than 2 | Three occlusions
S13 (350 frames) | More than 2 | Three occlusions
Total frames: 1918 (S1-S7) and 1515 (S8-S13).

4.1. Direct motion cue integration

The algorithms are tested on the video sequences described in Table 3. A 1D hue histogram with 16 bins is used. With the color cue alone, objects were lost soon after initialization. Table 4 shows the tracking results on the four video sequences S1 to S4: the original color-based Mean-Shift algorithm failed on all of them. Tracking is counted as a failure when the tracking window converges to a wrong region or becomes much larger than the object's color area.

Table 4. Tracking performance: S1 and S2 are sequences with low saturation objects; S3 and S4 are sequences with similar background color clutter. With the color cue alone (no motion integration) tracking failed on every sequence, while tracking with the integrated motion cue succeeded throughout.
Video sequence | Without motion cue | With motion cue | Success rate with motion cue (%)
S1 (200 frames) | Failed | Success | 100
S2 (200 frames) | Failed | Success | 100
S3 (200 frames) | Failed | Success | 100
S4 (200 frames) | Failed | Success | 100

Fig. 5 shows the tracking results of the method by Bradski (1998) and of the proposed direct integration algorithm in frame 68 of sequence S1. It can be seen that the integration of the motion information helps the color-based Mean-Shift algorithm to overcome the difficulty of tracking objects with low saturation color. Even when the saturation of the object's color is not low, background color clutter may also cause tracking failure; integrating the motion cue enables the color-based Mean-Shift algorithm to track objects in this case. Fig. 6 shows the result for frame 90 of sequence S3. The lighting in this sequence is dark, which makes the hue values of the background pixels inaccurate and forms a large distractor. The motion cue helps to eliminate the distractor at the door and enables the color-based Mean-Shift method to converge to the correct color area.

Fig. 5. Tracking results in frame 68 of video sequence S1 (tracking an object with low saturation color): (a) the tracking results and probabilistic distribution maps using only the color cue; (b) the tracking results using color-motion cue integration.

Fig. 6. Tracking results in frame 90 of video sequence S3 (tracking a human against a similarly colored background): (a) the tracking result and corresponding probabilistic distribution map using only the color cue; (b) the results after color-motion cue integration.

4.2. Occlusion handling

The occlusion handler is tested on the multi-person video sequences. In these sequences the occluded person is tracked, and the algorithm of Bradski (1998) fails in all of them. With the occlusion handler, when the occluded person reappears from the occlusion, the tracker is reinitialized and recovers from the lost object. Fig. 7 shows a full occlusion case in sequence S8. The color model is initialized on the boy's red coat before the occlusion occurs.

Fig. 7. Full occlusion case from video sequence S8: (a) frame 64, (b) frame 66, (c) frame 67, and (d) frame 68. In frame 67 the tracked human reappears; the algorithm successfully detects him and reinitializes the Mean-Shift iterations.

In frame 66, the full occlusion occurs. In frame 67, the occlusion handler recovers from the full occlusion and reinitializes the tracking. Fig. 8 shows the performance of the occlusion handler in sequence S13, which contains two occlusion cases. The color model is initialized on the boy's blue shirt before the occlusion. In frame 204, the boy is totally occluded by another boy; in frame 206, the boy in the blue shirt becomes visible again and the algorithm continues tracking him. Later, the girl in red occludes the boy completely in frame 219, and the occlusion handler again works successfully.

Fig. 8. Tracking results in the full occlusion case from video sequence S13. The figure shows two successive full occlusion cases in a video involving three persons.

4.3. Mean-Shift with adaptive color-motion integration

In the adaptive multi-cue-integration experiments, a 2D histogram is used: the hue and saturation components are discretized into 16 and 10 bins, respectively. The other experimental conditions are unchanged. The adaptive multi-cue-integration strategy is tested on sequences S1 to S6, and the object is tracked throughout the whole sequences. Fig. 9 shows a representative result of tracking a low saturation color in video sequence S1. The color model is selected according to the color of the boy's shirt. As the color cue has a low quality value, its weight is diminished and the weight of the motion cue (the lighter curve) increases. It can be seen from Fig. 10 that both cues contribute to the combined probability distribution map and that the motion cue dominates.

Fig. 9. Adaptive color-motion integration for tracking a low saturation color. The lighter curve is the weight of the motion cue, and the darker one is the weight of the color cue. The vertical dotted line indicates the time of Fig. 10.

We compared the quality functions of (Spengler and Schiele, 2003; Triesch and Malsburg, 2000) with our proposed quality function. Fig. 11 shows the weight adjustment results using the quality function of (Spengler and Schiele, 2003; Triesch and Malsburg, 2000). As this quality function is based on the value at a single estimated point, the result can easily be corrupted by noise. Compared with the result of our proposed quality function shown in Fig. 9, the quality function used in (Spengler and Schiele, 2003; Triesch and Malsburg, 2000) cannot adjust the weights as smoothly as ours, and there are even unreasonable cases in which the motion cue has a lower weight than the color cue around frame 100. When the color cue is reliable, i.e. there is little distraction on the color probability map, the adaptive integration mechanism should not suppress the color cue; on the contrary, it should be given a higher weight.

Sequences S3 to S6 are sequences in which good color features are tracked. Table 5 shows the tracking results: in all these sequences, the color cue has a higher average weight than the motion cue. Fig. 12 shows a typical result for video sequence S4. The color model is initialized on the boy's orange T-shirt, and the color cue becomes dominant after initialization, as can also be seen in Fig. 13.
With the adaptive cue integration, the problem of changing appearance caused by human rotation can be handled as well, which is demonstrated in Figs. 14 and 15. The color model is initialized with the bright blue pattern on the boy's T-shirt when he is facing the camera. For the first few frames, the weight of the color cue tends to increase because of its high quality. In frame 100, the boy begins to turn left and walk to the right of the image; the blue pattern becomes invisible, a case in which a purely color-based tracking algorithm fails. However, the object can still be tracked because the weight of the motion cue increases and becomes dominant in the combined map: the failed color cue is compensated by the motion cue. In frame 120, the boy begins to turn back and the blue pattern is visible again, so the weight of the color cue starts to increase again as the boy turns around.

Fig. 10. Tracking result of the adaptive color-motion integration for a low saturation color, from frame 120 of sequence S1: (a) the combined probability density map and (b) the corresponding tracking result. The motion cue is dominant (refer to Fig. 9).

Fig. 11. Weight-time curves of the quality functions in (Spengler and Schiele, 2003; Triesch and Malsburg, 2000); compare with ours in Fig. 9.

Fig. 12. Adaptive color-motion integration: tracking a reliable color. The lighter curve is the weight of the motion cue, and the darker one is the weight of the color cue. The vertical dotted line indicates the time of Fig. 13.

Fig. 13. Tracking result of the adaptive color-motion integration: tracking a reliable color, from frame 82 of sequence S4: (a) the combined probability density map and (b) the corresponding tracking result.

Fig. 14. Tracking results of the adaptive color-motion integration: the object changes appearance. The lighter curve is the weight of the motion cue, and the darker one is the weight of the color cue. The vertical dotted line indicates the time of Fig. 15.

Fig. 15. Tracking results of the adaptive color-motion integration: the object changes appearance. (a), (c) and (e) are combined probability density maps; (b), (d) and (f) are the corresponding tracking results, taken from frames 100, 120 and 140 of sequence S1 (refer to Fig. 14).

Table 5. Results of tracking a reliable color cue using adaptive cue integration on sequences S3 (200 frames), S4 (200 frames), S5 (384 frames) and S6 (384 frames); the columns give the average weights of the color and motion cues. In all sequences, the color cue has a higher average weight than the motion cue.

This experiment also demonstrates that the motion probabilistic distribution map fits the Mean-Shift framework well. Note the distractor on the left of Fig. 15c, which is brought in by the motion probabilistic distribution map; Mean-Shift is robust to this kind of small distractor.

As mentioned in the previous section, the adaptive algorithm works very well in the single-human case and changes the cue weights soundly. However, it encounters difficulties in full occlusion cases, as shown in Fig. 16: when the target person is occluded by another person, the adaptive integration cannot realize that the target is occluded, mistakes the occluder for the target, and the tracking fails. Therefore, we also extend the adaptive algorithm to the full occlusion cases. The direct-cue-integration method is performed in every frame in order to detect and handle full occlusion, and the algorithm is tested on all the multi-person sequences. Fig. 17 shows one result in a full occlusion case, to be compared with Fig. 16; the result is taken from sequence S10. The direct-cue integration is performed for every frame, and its result is used to detect and handle the full occlusion. In frame 96, full occlusion is detected and the occlusion handler begins to search a larger region near the disappearing point on the direct-cue-integration PDM. In frame 102, the tracked person is detected when he reappears.

Fig. 16. Tracking failure caused by the full occlusion problem in the adaptive color-motion integration. The result is taken from sequence S10: (a), (b) and (d) are tracking results; (c) is the combined PDM corresponding to (b).

Fig. 17. Adaptive color-motion integration using the direct-cue integration with the occlusion handler to handle the full occlusion: (a), (b), (d), (e) and (f) are tracking results; (c) is the adaptively combined PDM in frame 96 corresponding to (b) (refer to Fig. 16).

5. Conclusions

This paper has demonstrated that the motion cue can be integrated with the color cue to solve the problems encountered by the color-based Mean-Shift algorithm when tracking an object with low saturation color or against background color clutter. With the direct-cue-integration approach, an occlusion handler is proposed that allows the Mean-Shift algorithm to handle full occlusion lasting a couple of frames. We also applied the Mean-Shift algorithm to the adaptive multi-cue-integration method to find the region occupied by the target person, and a novel quality function is suggested to evaluate the reliability of each cue smoothly and soundly. When the color cue is more reliable, its weight becomes higher than that of the motion cue; when the color cue is less reliable, it is compensated by the motion cue. Extensive experiments demonstrate that the proposed adaptive color-motion integration algorithms perform quite well in difficult tracking situations, such as images with low saturation color, objects whose color is similar to the background, and objects that are partially or fully occluded.

Acknowledgements

This work is supported by the National Natural Science Foundation of China (NSFC) and the National High Technology Research and Development Program of China (863 Program, No. 2006AA04Z247).

References

Arulampalam, M.S., Maskell, S., Gordon, N., Clapp, T., 2002. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Trans. Signal Process. 50.
Bradski, G.R., 1998. Computer vision face tracking for use in a perceptual user interface. In: IEEE Workshop on Applications of Computer Vision.
Collins, R.T., 2003. Mean-shift blob tracking through scale space. In: Proc. IEEE Conf. Computer Vision and Pattern Recognition, vol. 2.
Comaniciu, D., Ramesh, V., Meer, P., 2000. Real-time tracking of non-rigid objects using mean shift. In: Proc. IEEE Conf. Computer Vision and Pattern Recognition, vol. 2.
Comaniciu, D., Ramesh, V., Meer, P., 2003. Kernel-based object tracking. IEEE Trans. Pattern Anal. Machine Intell. 25.
Haritaoglu, I., Harwood, D., Davis, L.S., 2000. W4: Real-time surveillance of people and their activities. IEEE Trans. Pattern Anal. Machine Intell. 22.
Hayman, E., Eklundh, J.-O., 2002. Probabilistic and voting approaches to cue integration for figure-ground segmentation. In: Proc. 7th European Conf. Computer Vision.
Isard, M., Blake, A., 1998. CONDENSATION: conditional density propagation for visual tracking. Int. J. Comput. Vision.
Liu, T.-L., Chen, H.-T., 2004. Real-time tracking using trust-region methods. IEEE Trans. Pattern Anal. Machine Intell. 26 (3).
McKenna, S., Jabri, S., Duric, Z., Rosenfeld, A., Wechsler, H., 2000. Tracking groups of people. Computer Vision and Image Understanding 80.
Nummiaro, K., Koller-Meier, E., Van Gool, L., 2002. Object tracking with an adaptive color-based particle filter. Image and Vision Computing.
Perez, P., Hue, C., Vermaak, J., Gangnet, M., 2002. Color-based probabilistic tracking. In: Proc. European Conf. Computer Vision.


More information

Human Motion Detection and Tracking for Video Surveillance

Human Motion Detection and Tracking for Video Surveillance Human Motion Detection and Tracking for Video Surveillance Prithviraj Banerjee and Somnath Sengupta Department of Electronics and Electrical Communication Engineering Indian Institute of Technology, Kharagpur,

More information

A Feature Point Matching Based Approach for Video Objects Segmentation

A Feature Point Matching Based Approach for Video Objects Segmentation A Feature Point Matching Based Approach for Video Objects Segmentation Yan Zhang, Zhong Zhou, Wei Wu State Key Laboratory of Virtual Reality Technology and Systems, Beijing, P.R. China School of Computer

More information

An Edge-Based Approach to Motion Detection*

An Edge-Based Approach to Motion Detection* An Edge-Based Approach to Motion Detection* Angel D. Sappa and Fadi Dornaika Computer Vison Center Edifici O Campus UAB 08193 Barcelona, Spain {sappa, dornaika}@cvc.uab.es Abstract. This paper presents

More information

An EM-like algorithm for color-histogram-based object tracking

An EM-like algorithm for color-histogram-based object tracking An EM-like algorithm for color-histogram-based object tracking Zoran Zivkovic Ben Kröse Intelligent and Autonomous Systems Group University of Amsterdam The Netherlands email:{zivkovic,krose}@science.uva.nl

More information

An Adaptive Color-Based Particle Filter

An Adaptive Color-Based Particle Filter An Adaptive Color-Based Particle Filter Katja Nummiaro a,, Esther Koller-Meier b and Luc Van Gool a,b a Katholieke Universiteit Leuven, ESAT/PSI-VISICS, Kasteelpark Arenberg 10, 3001 Heverlee, Belgium

More information

Tracking Soccer Ball Exploiting Player Trajectory

Tracking Soccer Ball Exploiting Player Trajectory Tracking Soccer Ball Exploiting Player Trajectory Kyuhyoung Choi and Yongdeuk Seo Sogang University, {Kyu, Yndk}@sogang.ac.kr Abstract This paper proposes an algorithm for tracking the ball in a soccer

More information

Probabilistic Index Histogram for Robust Object Tracking

Probabilistic Index Histogram for Robust Object Tracking Probabilistic Index Histogram for Robust Object Tracking Wei Li 1, Xiaoqin Zhang 2, Nianhua Xie 1, Weiming Hu 1, Wenhan Luo 1, Haibin Ling 3 1 National Lab of Pattern Recognition, Institute of Automation,CAS,

More information

Learning the Three Factors of a Non-overlapping Multi-camera Network Topology

Learning the Three Factors of a Non-overlapping Multi-camera Network Topology Learning the Three Factors of a Non-overlapping Multi-camera Network Topology Xiaotang Chen, Kaiqi Huang, and Tieniu Tan National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy

More information

Research on QR Code Image Pre-processing Algorithm under Complex Background

Research on QR Code Image Pre-processing Algorithm under Complex Background Scientific Journal of Information Engineering May 207, Volume 7, Issue, PP.-7 Research on QR Code Image Pre-processing Algorithm under Complex Background Lei Liu, Lin-li Zhou, Huifang Bao. Institute of

More information

Switching Hypothesized Measurements: A Dynamic Model with Applications to Occlusion Adaptive Joint Tracking

Switching Hypothesized Measurements: A Dynamic Model with Applications to Occlusion Adaptive Joint Tracking Switching Hypothesized Measurements: A Dynamic Model with Applications to Occlusion Adaptive Joint Tracking Yang Wang Tele Tan Institute for Infocomm Research, Singapore {ywang, telctan}@i2r.a-star.edu.sg

More information

Project Report for EE7700

Project Report for EE7700 Project Report for EE7700 Name: Jing Chen, Shaoming Chen Student ID: 89-507-3494, 89-295-9668 Face Tracking 1. Objective of the study Given a video, this semester project aims at implementing algorithms

More information

Background Image Generation Using Boolean Operations

Background Image Generation Using Boolean Operations Background Image Generation Using Boolean Operations Kardi Teknomo Ateneo de Manila University Quezon City, 1108 Philippines +632-4266001 ext 5660 teknomo@gmail.com Philippine Computing Journal Proceso

More information

Shape Descriptor using Polar Plot for Shape Recognition.

Shape Descriptor using Polar Plot for Shape Recognition. Shape Descriptor using Polar Plot for Shape Recognition. Brijesh Pillai ECE Graduate Student, Clemson University bpillai@clemson.edu Abstract : This paper presents my work on computing shape models that

More information

Object detection using non-redundant local Binary Patterns

Object detection using non-redundant local Binary Patterns University of Wollongong Research Online Faculty of Informatics - Papers (Archive) Faculty of Engineering and Information Sciences 2010 Object detection using non-redundant local Binary Patterns Duc Thanh

More information

Object tracking in a video sequence using Mean-Shift Based Approach: An Implementation using MATLAB7

Object tracking in a video sequence using Mean-Shift Based Approach: An Implementation using MATLAB7 International Journal of Computational Engineering & Management, Vol. 11, January 2011 www..org 45 Object tracking in a video sequence using Mean-Shift Based Approach: An Implementation using MATLAB7 Madhurima

More information

Target Tracking Using Mean-Shift And Affine Structure

Target Tracking Using Mean-Shift And Affine Structure Target Tracking Using Mean-Shift And Affine Structure Chuan Zhao, Andrew Knight and Ian Reid Department of Engineering Science, University of Oxford, Oxford, UK {zhao, ian}@robots.ox.ac.uk Abstract Inthispaper,wepresentanewapproachfortracking

More information

Robust Lip Contour Extraction using Separability of Multi-Dimensional Distributions

Robust Lip Contour Extraction using Separability of Multi-Dimensional Distributions Robust Lip Contour Extraction using Separability of Multi-Dimensional Distributions Tomokazu Wakasugi, Masahide Nishiura and Kazuhiro Fukui Corporate Research and Development Center, Toshiba Corporation

More information

MULTIVIEW REPRESENTATION OF 3D OBJECTS OF A SCENE USING VIDEO SEQUENCES

MULTIVIEW REPRESENTATION OF 3D OBJECTS OF A SCENE USING VIDEO SEQUENCES MULTIVIEW REPRESENTATION OF 3D OBJECTS OF A SCENE USING VIDEO SEQUENCES Mehran Yazdi and André Zaccarin CVSL, Dept. of Electrical and Computer Engineering, Laval University Ste-Foy, Québec GK 7P4, Canada

More information

ECSE-626 Project: An Adaptive Color-Based Particle Filter

ECSE-626 Project: An Adaptive Color-Based Particle Filter ECSE-626 Project: An Adaptive Color-Based Particle Filter Fabian Kaelin McGill University Montreal, Canada fabian.kaelin@mail.mcgill.ca Abstract The goal of this project was to discuss and implement a

More information

[2006] IEEE. Reprinted, with permission, from [Wenjing Jia, Huaifeng Zhang, Xiangjian He, and Qiang Wu, A Comparison on Histogram Based Image

[2006] IEEE. Reprinted, with permission, from [Wenjing Jia, Huaifeng Zhang, Xiangjian He, and Qiang Wu, A Comparison on Histogram Based Image [6] IEEE. Reprinted, with permission, from [Wenjing Jia, Huaifeng Zhang, Xiangjian He, and Qiang Wu, A Comparison on Histogram Based Image Matching Methods, Video and Signal Based Surveillance, 6. AVSS

More information

PEOPLE IN SEATS COUNTING VIA SEAT DETECTION FOR MEETING SURVEILLANCE

PEOPLE IN SEATS COUNTING VIA SEAT DETECTION FOR MEETING SURVEILLANCE PEOPLE IN SEATS COUNTING VIA SEAT DETECTION FOR MEETING SURVEILLANCE Hongyu Liang, Jinchen Wu, and Kaiqi Huang National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Science

More information

Multi-Scale Kernel Operators for Reflection and Rotation Symmetry: Further Achievements

Multi-Scale Kernel Operators for Reflection and Rotation Symmetry: Further Achievements 2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops Multi-Scale Kernel Operators for Reflection and Rotation Symmetry: Further Achievements Shripad Kondra Mando Softtech India Gurgaon

More information

Combining Edge and Color Features for Tracking Partially Occluded Humans

Combining Edge and Color Features for Tracking Partially Occluded Humans Combining Edge and Color Features for Tracking Partially Occluded Humans Mandar Dixit and K.S. Venkatesh Computer Vision Lab., Department of Electrical Engineering, Indian Institute of Technology, Kanpur

More information

Real-time Detection of Illegally Parked Vehicles Using 1-D Transformation

Real-time Detection of Illegally Parked Vehicles Using 1-D Transformation Real-time Detection of Illegally Parked Vehicles Using 1-D Transformation Jong Taek Lee, M. S. Ryoo, Matthew Riley, and J. K. Aggarwal Computer & Vision Research Center Dept. of Electrical & Computer Engineering,

More information

International Journal of Advance Engineering and Research Development

International Journal of Advance Engineering and Research Development Scientific Journal of Impact Factor (SJIF): 4.72 International Journal of Advance Engineering and Research Development Volume 4, Issue 11, November -2017 e-issn (O): 2348-4470 p-issn (P): 2348-6406 Comparative

More information

Automatic Tracking of Moving Objects in Video for Surveillance Applications

Automatic Tracking of Moving Objects in Video for Surveillance Applications Automatic Tracking of Moving Objects in Video for Surveillance Applications Manjunath Narayana Committee: Dr. Donna Haverkamp (Chair) Dr. Arvin Agah Dr. James Miller Department of Electrical Engineering

More information

A Background Subtraction Based Video Object Detecting and Tracking Method

A Background Subtraction Based Video Object Detecting and Tracking Method A Background Subtraction Based Video Object Detecting and Tracking Method horng@kmit.edu.tw Abstract A new method for detecting and tracking mo tion objects in video image sequences based on the background

More information

Multi-Person Tracking-by-Detection based on Calibrated Multi-Camera Systems

Multi-Person Tracking-by-Detection based on Calibrated Multi-Camera Systems Multi-Person Tracking-by-Detection based on Calibrated Multi-Camera Systems Xiaoyan Jiang, Erik Rodner, and Joachim Denzler Computer Vision Group Jena Friedrich Schiller University of Jena {xiaoyan.jiang,erik.rodner,joachim.denzler}@uni-jena.de

More information

2006 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media,

2006 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, 6 IEEE Personal use of this material is permitted Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising

More information

Colour Segmentation-based Computation of Dense Optical Flow with Application to Video Object Segmentation

Colour Segmentation-based Computation of Dense Optical Flow with Application to Video Object Segmentation ÖGAI Journal 24/1 11 Colour Segmentation-based Computation of Dense Optical Flow with Application to Video Object Segmentation Michael Bleyer, Margrit Gelautz, Christoph Rhemann Vienna University of Technology

More information

Research on Evaluation Method of Video Stabilization

Research on Evaluation Method of Video Stabilization International Conference on Advanced Material Science and Environmental Engineering (AMSEE 216) Research on Evaluation Method of Video Stabilization Bin Chen, Jianjun Zhao and i Wang Weapon Science and

More information

Detection and recognition of moving objects using statistical motion detection and Fourier descriptors

Detection and recognition of moving objects using statistical motion detection and Fourier descriptors Detection and recognition of moving objects using statistical motion detection and Fourier descriptors Daniel Toth and Til Aach Institute for Signal Processing, University of Luebeck, Germany toth@isip.uni-luebeck.de

More information

C. Premsai 1, Prof. A. Kavya 2 School of Computer Science, School of Computer Science Engineering, Engineering VIT Chennai, VIT Chennai

C. Premsai 1, Prof. A. Kavya 2 School of Computer Science, School of Computer Science Engineering, Engineering VIT Chennai, VIT Chennai Traffic Sign Detection Via Graph-Based Ranking and Segmentation Algorithm C. Premsai 1, Prof. A. Kavya 2 School of Computer Science, School of Computer Science Engineering, Engineering VIT Chennai, VIT

More information

Real-Time Human Detection using Relational Depth Similarity Features

Real-Time Human Detection using Relational Depth Similarity Features Real-Time Human Detection using Relational Depth Similarity Features Sho Ikemura, Hironobu Fujiyoshi Dept. of Computer Science, Chubu University. Matsumoto 1200, Kasugai, Aichi, 487-8501 Japan. si@vision.cs.chubu.ac.jp,

More information

Detecting motion by means of 2D and 3D information

Detecting motion by means of 2D and 3D information Detecting motion by means of 2D and 3D information Federico Tombari Stefano Mattoccia Luigi Di Stefano Fabio Tonelli Department of Electronics Computer Science and Systems (DEIS) Viale Risorgimento 2,

More information

A Texture-based Method for Detecting Moving Objects

A Texture-based Method for Detecting Moving Objects A Texture-based Method for Detecting Moving Objects M. Heikkilä, M. Pietikäinen and J. Heikkilä Machine Vision Group Infotech Oulu and Department of Electrical and Information Engineering P.O. Box 4500

More information

Traffic Signs Recognition using HP and HOG Descriptors Combined to MLP and SVM Classifiers

Traffic Signs Recognition using HP and HOG Descriptors Combined to MLP and SVM Classifiers Traffic Signs Recognition using HP and HOG Descriptors Combined to MLP and SVM Classifiers A. Salhi, B. Minaoui, M. Fakir, H. Chakib, H. Grimech Faculty of science and Technology Sultan Moulay Slimane

More information

Drywall state detection in image data for automatic indoor progress monitoring C. Kropp, C. Koch and M. König

Drywall state detection in image data for automatic indoor progress monitoring C. Kropp, C. Koch and M. König Drywall state detection in image data for automatic indoor progress monitoring C. Kropp, C. Koch and M. König Chair for Computing in Engineering, Department of Civil and Environmental Engineering, Ruhr-Universität

More information

CS231A Course Project Final Report Sign Language Recognition with Unsupervised Feature Learning

CS231A Course Project Final Report Sign Language Recognition with Unsupervised Feature Learning CS231A Course Project Final Report Sign Language Recognition with Unsupervised Feature Learning Justin Chen Stanford University justinkchen@stanford.edu Abstract This paper focuses on experimenting with

More information

DYNAMIC BACKGROUND SUBTRACTION BASED ON SPATIAL EXTENDED CENTER-SYMMETRIC LOCAL BINARY PATTERN. Gengjian Xue, Jun Sun, Li Song

DYNAMIC BACKGROUND SUBTRACTION BASED ON SPATIAL EXTENDED CENTER-SYMMETRIC LOCAL BINARY PATTERN. Gengjian Xue, Jun Sun, Li Song DYNAMIC BACKGROUND SUBTRACTION BASED ON SPATIAL EXTENDED CENTER-SYMMETRIC LOCAL BINARY PATTERN Gengjian Xue, Jun Sun, Li Song Institute of Image Communication and Information Processing, Shanghai Jiao

More information

Mixture Models and EM

Mixture Models and EM Mixture Models and EM Goal: Introduction to probabilistic mixture models and the expectationmaximization (EM) algorithm. Motivation: simultaneous fitting of multiple model instances unsupervised clustering

More information

A Texture-Based Method for Modeling the Background and Detecting Moving Objects

A Texture-Based Method for Modeling the Background and Detecting Moving Objects A Texture-Based Method for Modeling the Background and Detecting Moving Objects Marko Heikkilä and Matti Pietikäinen, Senior Member, IEEE 2 Abstract This paper presents a novel and efficient texture-based

More information

Mean Shift Tracking. CS4243 Computer Vision and Pattern Recognition. Leow Wee Kheng

Mean Shift Tracking. CS4243 Computer Vision and Pattern Recognition. Leow Wee Kheng CS4243 Computer Vision and Pattern Recognition Leow Wee Kheng Department of Computer Science School of Computing National University of Singapore (CS4243) Mean Shift Tracking 1 / 28 Mean Shift Mean Shift

More information

Simultaneous Appearance Modeling and Segmentation for Matching People under Occlusion

Simultaneous Appearance Modeling and Segmentation for Matching People under Occlusion Simultaneous Appearance Modeling and Segmentation for Matching People under Occlusion Zhe Lin, Larry S. Davis, David Doermann, and Daniel DeMenthon Institute for Advanced Computer Studies University of

More information

SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS

SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS Cognitive Robotics Original: David G. Lowe, 004 Summary: Coen van Leeuwen, s1460919 Abstract: This article presents a method to extract

More information

A Bayesian Approach to Background Modeling

A Bayesian Approach to Background Modeling A Bayesian Approach to Background Modeling Oncel Tuzel Fatih Porikli Peter Meer CS Department & ECE Department Mitsubishi Electric Research Laboratories Rutgers University Cambridge, MA 02139 Piscataway,

More information

Detecting and Identifying Moving Objects in Real-Time

Detecting and Identifying Moving Objects in Real-Time Chapter 9 Detecting and Identifying Moving Objects in Real-Time For surveillance applications or for human-computer interaction, the automated real-time tracking of moving objects in images from a stationary

More information

Human pose estimation using Active Shape Models

Human pose estimation using Active Shape Models Human pose estimation using Active Shape Models Changhyuk Jang and Keechul Jung Abstract Human pose estimation can be executed using Active Shape Models. The existing techniques for applying to human-body

More information

An indirect tire identification method based on a two-layered fuzzy scheme

An indirect tire identification method based on a two-layered fuzzy scheme Journal of Intelligent & Fuzzy Systems 29 (2015) 2795 2800 DOI:10.3233/IFS-151984 IOS Press 2795 An indirect tire identification method based on a two-layered fuzzy scheme Dailin Zhang, Dengming Zhang,

More information

Motion Estimation. There are three main types (or applications) of motion estimation:

Motion Estimation. There are three main types (or applications) of motion estimation: Members: D91922016 朱威達 R93922010 林聖凱 R93922044 謝俊瑋 Motion Estimation There are three main types (or applications) of motion estimation: Parametric motion (image alignment) The main idea of parametric motion

More information

An Adaptive Eigenshape Model

An Adaptive Eigenshape Model An Adaptive Eigenshape Model Adam Baumberg and David Hogg School of Computer Studies University of Leeds, Leeds LS2 9JT, U.K. amb@scs.leeds.ac.uk Abstract There has been a great deal of recent interest

More information

Motion Detection Algorithm

Motion Detection Algorithm Volume 1, No. 12, February 2013 ISSN 2278-1080 The International Journal of Computer Science & Applications (TIJCSA) RESEARCH PAPER Available Online at http://www.journalofcomputerscience.com/ Motion Detection

More information

Color Content Based Image Classification

Color Content Based Image Classification Color Content Based Image Classification Szabolcs Sergyán Budapest Tech sergyan.szabolcs@nik.bmf.hu Abstract: In content based image retrieval systems the most efficient and simple searches are the color

More information

A new approach to reference point location in fingerprint recognition

A new approach to reference point location in fingerprint recognition A new approach to reference point location in fingerprint recognition Piotr Porwik a) and Lukasz Wieclaw b) Institute of Informatics, Silesian University 41 200 Sosnowiec ul. Bedzinska 39, Poland a) porwik@us.edu.pl

More information

Automatic Logo Detection and Removal

Automatic Logo Detection and Removal Automatic Logo Detection and Removal Miriam Cha, Pooya Khorrami and Matthew Wagner Electrical and Computer Engineering Carnegie Mellon University Pittsburgh, PA 15213 {mcha,pkhorrami,mwagner}@ece.cmu.edu

More information

Key Frame Extraction and Indexing for Multimedia Databases

Key Frame Extraction and Indexing for Multimedia Databases Key Frame Extraction and Indexing for Multimedia Databases Mohamed AhmedˆÃ Ahmed Karmouchˆ Suhayya Abu-Hakimaˆˆ ÃÃÃÃÃÃÈÃSchool of Information Technology & ˆˆÃ AmikaNow! Corporation Engineering (SITE),

More information

CLASSIFYING AND TRACKING MULTIPLE PERSONS FOR PROACTIVE SURVEILLANCE OF MASS TRANSPORT SYSTEMS

CLASSIFYING AND TRACKING MULTIPLE PERSONS FOR PROACTIVE SURVEILLANCE OF MASS TRANSPORT SYSTEMS CLASSIFYING AND TRACKING MULTIPLE PERSONS FOR PROACTIVE SURVEILLANCE OF MASS TRANSPORT SYSTEMS Suyu Kong 1, C. Sanderson 2 and Brian C. Lovell 1,2 1 University of Queensland, Brisbane QLD 4072, Australia

More information

Input sensitive thresholding for ancient Hebrew manuscript

Input sensitive thresholding for ancient Hebrew manuscript Pattern Recognition Letters 26 (2005) 1168 1173 www.elsevier.com/locate/patrec Input sensitive thresholding for ancient Hebrew manuscript Itay Bar-Yosef * Department of Computer Science, Ben Gurion University,

More information

A Comparative Study of Skin-Color Models

A Comparative Study of Skin-Color Models A Comparative Study of Skin-Color Models Juwei Lu, Qian Gu, K.N. Plataniotis, and Jie Wang Bell Canada Multimedia Laboratory, The Edward S. Rogers Sr., Department of Electrical and Computer Engineering,

More information

Chapter 9 Object Tracking an Overview

Chapter 9 Object Tracking an Overview Chapter 9 Object Tracking an Overview The output of the background subtraction algorithm, described in the previous chapter, is a classification (segmentation) of pixels into foreground pixels (those belonging

More information

A Statistical Consistency Check for the Space Carving Algorithm.

A Statistical Consistency Check for the Space Carving Algorithm. A Statistical Consistency Check for the Space Carving Algorithm. A. Broadhurst and R. Cipolla Dept. of Engineering, Univ. of Cambridge, Cambridge, CB2 1PZ aeb29 cipolla @eng.cam.ac.uk Abstract This paper

More information

Multi-Object Tracking Using Dynamical Graph Matching

Multi-Object Tracking Using Dynamical Graph Matching Copyright c 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Multi-Object Tracking Using Dynamical Graph Matching Hwann-Tzong Chen Horng-Horng Lin Tyng-Luh Liu Institute

More information

Face detection in a video sequence - a temporal approach

Face detection in a video sequence - a temporal approach Face detection in a video sequence - a temporal approach K. Mikolajczyk R. Choudhury C. Schmid INRIA Rhône-Alpes GRAVIR-CNRS, 655 av. de l Europe, 38330 Montbonnot, France {Krystian.Mikolajczyk,Ragini.Choudhury,Cordelia.Schmid}@inrialpes.fr

More information

HUMAN COMPUTER INTERFACE BASED ON HAND TRACKING

HUMAN COMPUTER INTERFACE BASED ON HAND TRACKING Proceedings of MUSME 2011, the International Symposium on Multibody Systems and Mechatronics Valencia, Spain, 25-28 October 2011 HUMAN COMPUTER INTERFACE BASED ON HAND TRACKING Pedro Achanccaray, Cristian

More information

A Novel Algorithm for Color Image matching using Wavelet-SIFT

A Novel Algorithm for Color Image matching using Wavelet-SIFT International Journal of Scientific and Research Publications, Volume 5, Issue 1, January 2015 1 A Novel Algorithm for Color Image matching using Wavelet-SIFT Mupuri Prasanth Babu *, P. Ravi Shankar **

More information

Improving Latent Fingerprint Matching Performance by Orientation Field Estimation using Localized Dictionaries

Improving Latent Fingerprint Matching Performance by Orientation Field Estimation using Localized Dictionaries Available Online at www.ijcsmc.com International Journal of Computer Science and Mobile Computing A Monthly Journal of Computer Science and Information Technology IJCSMC, Vol. 3, Issue. 11, November 2014,

More information

Texture Sensitive Image Inpainting after Object Morphing

Texture Sensitive Image Inpainting after Object Morphing Texture Sensitive Image Inpainting after Object Morphing Yin Chieh Liu and Yi-Leh Wu Department of Computer Science and Information Engineering National Taiwan University of Science and Technology, Taiwan

More information

Iterative Removing Salt and Pepper Noise based on Neighbourhood Information

Iterative Removing Salt and Pepper Noise based on Neighbourhood Information Iterative Removing Salt and Pepper Noise based on Neighbourhood Information Liu Chun College of Computer Science and Information Technology Daqing Normal University Daqing, China Sun Bishen Twenty-seventh

More information

DESIGN AND IMPLEMENTATION OF VISUAL FEEDBACK FOR AN ACTIVE TRACKING

DESIGN AND IMPLEMENTATION OF VISUAL FEEDBACK FOR AN ACTIVE TRACKING DESIGN AND IMPLEMENTATION OF VISUAL FEEDBACK FOR AN ACTIVE TRACKING Tomasz Żabiński, Tomasz Grygiel, Bogdan Kwolek Rzeszów University of Technology, W. Pola 2, 35-959 Rzeszów, Poland tomz, bkwolek@prz-rzeszow.pl

More information

Restoring Chinese Documents Images Based on Text Boundary Lines

Restoring Chinese Documents Images Based on Text Boundary Lines Proceedings of the 2009 IEEE International Conference on Systems, Man, and Cybernetics San Antonio, TX, USA - October 2009 Restoring Chinese Documents Images Based on Text Boundary Lines Hong Liu Key Laboratory

More information

Critique: Efficient Iris Recognition by Characterizing Key Local Variations

Critique: Efficient Iris Recognition by Characterizing Key Local Variations Critique: Efficient Iris Recognition by Characterizing Key Local Variations Authors: L. Ma, T. Tan, Y. Wang, D. Zhang Published: IEEE Transactions on Image Processing, Vol. 13, No. 6 Critique By: Christopher

More information

Background Initialization with A New Robust Statistical Approach

Background Initialization with A New Robust Statistical Approach Background Initialization with A New Robust Statistical Approach Hanzi Wang and David Suter Institute for Vision System Engineering Department of. Electrical. and Computer Systems Engineering Monash University,

More information

Video Surveillance System for Object Detection and Tracking Methods R.Aarthi, K.Kiruthikadevi

Video Surveillance System for Object Detection and Tracking Methods R.Aarthi, K.Kiruthikadevi IJISET - International Journal of Innovative Science, Engineering & Technology, Vol. 2 Issue 11, November 2015. Video Surveillance System for Object Detection and Tracking Methods R.Aarthi, K.Kiruthikadevi

More information

Dynamic Obstacle Detection Based on Background Compensation in Robot s Movement Space

Dynamic Obstacle Detection Based on Background Compensation in Robot s Movement Space MATEC Web of Conferences 95 83 (7) DOI:.5/ matecconf/79583 ICMME 6 Dynamic Obstacle Detection Based on Background Compensation in Robot s Movement Space Tao Ni Qidong Li Le Sun and Lingtao Huang School

More information