Towards Understanding the Limits of Gait Recognition
Towards Understanding the Limits of Gait Recognition

Zongyi Liu, Laura Malave, Adebola Osuntugun, Preksha Sudhakar, and Sudeep Sarkar
Computer Vision and Image Informatics Laboratory, Computer Science and Engineering, University of South Florida, Tampa, FL

ABSTRACT

Most state-of-the-art video-based gait recognition algorithms start from binary silhouettes. These silhouettes, defined as foreground regions, are usually detected by background subtraction methods, which result in holes or missed parts due to similarity of foreground and background color, and boundary errors due to video compression artifacts. Errors in the low-level representation make it hard to understand the effect of certain conditions, such as surface and time, on gait recognition. In this paper, we present a part-level, manual silhouette database consisting of 71 subjects, over one gait cycle, with differences in surface, shoe-type, carrying condition, and time. We have a total of about 11,000 manual silhouette frames. The purpose of this manual silhouette database is twofold. First, it is a resource that we make available for use by the gait community to test and design better silhouette detection algorithms. These silhouettes can also be used to learn gait dynamics. Second, using the baseline gait recognition algorithm, which was specified along with the HumanID Gait Challenge problem, we show that performance from manual silhouettes is similar to, and only sometimes better than, that from automated silhouettes detected by statistical background subtraction. The low performances observed when comparing sequences that differ in walking surface or in time are not fully explained by silhouette quality. We also study the recognition power in each body part and show that recognition based on just the legs is equal to that from the whole silhouette. There is also significant recognition power in the head and torso shape.
Keywords: Gait recognition, gait analysis, segmentation, ground truth, silhouettes, human shape, gait dynamics, pedestrian detection, behavioral biometrics

1. INTRODUCTION

In the 1970s, Cutting and Kozlowski,1 using point-light display experiments patterned after those of Johansson,2 demonstrated the ability to recognize friends from gait. In computer vision, while gait analysis has been a research topic for a while, it is only recently that identification from gait has received attention and become an active area of research. For biometrics research, gait is usually referred to in its broadest sense to include both body shape and dynamics, i.e., any information that can be extracted from the video of a walking person to robustly identify the person under various condition variations. It is one biometric source that can be acquired at a distance, making it important for early-warning or monitoring applications that need to perform recognition when the subject is far away. While one cannot expect it to be a perfect biometric source, it is important to understand its scope. For some application scenarios, less-than-perfect recognition is better than nothing. However, before deploying gait-based recognition systems, it is important to study the associated scientific questions, such as: With what confidence level can we identify persons from their gait? Is gait good for verification, or is it better suited for identification scenarios? What are the limitations of recognition from gait? What factors affect gait recognition, and to what extent? The answers to these questions are not known. To help answer them, the HumanID Gait Challenge Problem was formulated.
16,17 It consists of (i) a large gait database with 1870 sequences from 122 subjects, spanning five covariates: view (about 30 degrees), shoe-type, briefcase, surface type (grass/concrete), and time; (ii) a set of twelve experiments to investigate the effect of the five factors on performance, both individually and in combination, whose results provide an ordering of the difficulty of the experiments; and (iii) a baseline algorithm, based on the Tanimoto distance between silhouettes extracted by simple background differencing, to provide the needed performance benchmark. Some sample frames are shown in Figure 1.

Send correspondence to Sudeep Sarkar, sarkar@csee.usf.edu.
Table 1. Summary of the top-rank recognition rates for experiments A (viewpoint variation between probe and gallery), B (shoe-type variation), D (surface variation), and K (time variation) in the Gait Challenge dataset. The numbers for the first two columns are as read from graphs in the cited papers.

Exp.        | Fusion (UMD)19 | DTW (UMD)20 | HMM (UMD)21 | Body Shape (CMU)22 | HMM (MIT)23 | Body (CAS)24 | Baseline (USF)16
A (view)    | 52%            | 78%         | 99%         | 87%                | 88%         | 7%           | 79%
B (shoe)    | 4%             | 65%         | 89%         | 81%                | 75%         | 59%          | 66%
D (surface) | %              | 29%         | 36%         | 21%                | 25%         | 34%          | 29%
K (time)    | 3%             |             |             |                    |             |              |
# subjects in gallery

For some of the key gait challenge experiments, Table 1 lists the summary performances reported so far in the literature by various gait recognition strategies, such as those based on hidden Markov models (HMMs), dynamic time warping (DTW), body shape matching, and shape moments. The listed performance numbers are the correct identification rates at the topmost rank. This is a standard performance metric used in biometrics18 for the identification scenario, where one is interested in finding a match to a given probe from the whole gallery set, i.e., a one-to-many match. (For the verification scenario, where one is interested in matching one probe to one gallery entry, a one-to-one match, performance is specified in terms of standard false alarm and detection rates. In general, identification is considered a harder problem than verification.) We see that the baseline gait recognition algorithm is quite effective and competitive with the other algorithms. Another observation of particular interest is the significant effect of a change in surface type on the identification rate; this effect is consistent across different types of gait recognition algorithms. For the baseline algorithm, we also see the significant effect of a time variation of about six months. This effect of time on gait recognition has been documented by others, but on different data sets taken indoors.
When the difference in time between the gallery (the prestored template) and the probe (the input data) is on the order of minutes, the identification performance ranges from 91% to 95%,9,13,15 whereas it drops to 30% to 45% when the difference is on the order of days and months11,13,19 for similarly sized datasets. Since all the algorithms rely on the silhouette as the low-level representation of choice, it is reasonable to suspect that flaky low-level processing causes the low performances across surface and time. These experiments involve either a change in the background or a substantial change in illumination, both of which very likely impact silhouette quality. Also, the high recognition rates that we see for the other covariates, such as view or shoe-type, might be due to correlated errors in the silhouettes, such as those caused by shadows or holes. This error correlation is expected to be high since the compared sequences are collected within a short time period of each other. To unravel these factors, we present results with part-level manual silhouettes. One conclusion of this work is that the quality of automated silhouettes, detected using standard techniques involving background differencing with the Mahalanobis distance in color space, does not seem to be the limiting factor in gait recognition. We arrive at this conclusion based on results with manually specified silhouettes. The second conclusion is that recognition from the legs alone is almost equal to that from the whole silhouette. Recognition from torso shape is also significant.

2. CREATION OF MANUAL SILHOUETTES

To gain insight into the relationship between recognition and silhouette quality, we manually created silhouettes over one gait cycle for 71 subjects under four different conditions, covering variations in shoe-type, surface, carrying condition, and time.
This cycle was chosen to begin at the right heel strike and continue through to the next right heel strike, thus including one complete walking cycle. We attempted to pick this gait cycle from the same 3D location in each sequence, whenever possible. In addition, we tried to exclude the portion that included the high-contrast calibration box (see Figure 1), which frequently leads to large background subtraction errors. We not only marked each pixel as being from the background or the subject, but also provided a more detailed specification in terms of body parts. We explicitly labeled the head, torso, left arm, right arm, left upper leg, left lower leg, right upper leg,
and right lower leg using different colors.

Figure 1. Sample frames from the gait challenge dataset as viewed from (a) the left camera on the concrete surface, (b) the right camera on the concrete surface, (c) the left camera on the grass surface, and (d) the right camera on the grass surface.

Figure 2 shows some examples of part-level manual silhouettes corresponding to the original color images in the left column. Quality control checks looked for miscolored parts and backgrounds, randomly colored isolated pixels, errors on the boundary of the body, and missed body parts. Some of the difficulties encountered during the creation process include low image quality due to varying overall intensity, occlusion of the feet in the grass sequences, similarity of the dark skin tones of some subjects to the background, frequent occlusion of the right arm, and the presence of dark or baggy clothing, all of which made it hard to delineate the various body parts. Despite these difficulties, we were able to create silhouettes of consistent quality across subjects, as judged visually by another person.

3. GAIT SIMILARITY COMPUTATION

For recognition, the manual silhouettes are first height-scaled so that they are all 128 pixels tall, which is around the average original resolution. Figure 3 shows images of the same person before and after scaling. In addition, the silhouettes are centered in the frame so that frames can be compared by simple projection. Note that these simple operations, in essence, help us arrive at Kendall's notion of pre-shape by removing translation and scaling. Since all the subjects are upright and the camera is stationary, there is no need for rotation normalization before silhouette shape matching.

3.1 Multiple Gait Cycle Sequences

We compare sequences, each with multiple but not necessarily equal numbers of gait cycles, using the baseline gait recognition algorithm that was specified along with the Gait Challenge problem.
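As a concrete illustration, the height scaling and centering just described can be sketched as follows. This is a minimal NumPy version; the 88-pixel output frame width and the nearest-neighbour resampling are our assumptions, not details taken from the text.

```python
import numpy as np

def normalize_silhouette(mask, out_h=128, out_w=88):
    """Height-scale a binary silhouette to out_h pixels tall and center it
    horizontally, removing translation and scale (Kendall pre-shape)."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:                       # empty silhouette: nothing to place
        return np.zeros((out_h, out_w), dtype=bool)
    crop = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    scale = out_h / crop.shape[0]          # isotropic scale so height = out_h
    new_w = max(1, int(round(crop.shape[1] * scale)))
    # Nearest-neighbour resampling via integer index maps.
    rows = (np.arange(out_h) / scale).astype(int).clip(0, crop.shape[0] - 1)
    cols = (np.arange(new_w) / scale).astype(int).clip(0, crop.shape[1] - 1)
    scaled = crop[np.ix_(rows, cols)]
    # Paste into the fixed-size frame, centered horizontally
    # (very wide silhouettes are simply cropped on the right).
    out = np.zeros((out_h, out_w), dtype=bool)
    w = min(new_w, out_w)
    x0 = (out_w - w) // 2
    out[:, x0:x0 + w] = scaled[:, :w]
    return out
```

Because the scale factor is the same in both axes, body proportions are preserved; only overall size and position are normalized away.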
17 The similarity computation is based on spatio-temporal correlation. Let the probe, consisting of M frames, and the gallery, consisting of N frames, be denoted by S_P = {S_P(1), ..., S_P(M)} and S_G = {S_G(1), ..., S_G(N)}, respectively. First, the probe (input) sequence is partitioned into subsequences,
Figure 2. Part-level manual silhouettes over one gait cycle, along with the corresponding color images, cropped around the person.
Figure 3. The top row shows the color images, cropped around the person, for four different camera views. The middle row shows the corresponding part-level, manually specified silhouettes. The bottom row shows the scaled silhouettes of the kind used by gait recognition algorithms.

each roughly one gait period, N_Gait, long. Gait periodicity is estimated from the periodic variation over time of the number of foreground pixels in the lower part of the silhouette in each frame. This number reaches a maximum when the two legs are farthest apart (full-stride stance) and drops to a minimum when the legs overlap (heels-together stance). Second, each of these probe subsequences, S_Pk = {S_P(k), ..., S_P(k + N_Gait)}, is cross-correlated with the given gallery sequence, S_G:

\mathrm{Corr}(S_{Pk}, S_G)(l) = \sum_{j=1}^{N_{Gait}} S(S_P(k + j), S_G(l + j))    (1)

where the similarity between two image frames, S(S_P(i), S_G(j)), is defined to be the Tanimoto similarity between the silhouettes, i.e., the ratio of the number of common pixels to the number of pixels in their union. The overall similarity measure is chosen to be the median value of the maximum correlation of the gallery sequence with each of these probe subsequences:

\mathrm{Sim}(S_P, S_G) = \mathrm{Median}_k \left( \max_l \mathrm{Corr}(S_{Pk}, S_G)(l) \right)    (2)

The strategy of breaking the probe sequence into subsequences allows us to handle segmentation errors in contiguous sets of frames caused by background subtraction artifacts or by localized motion in the background. The above strategy is effective for gait recognition when compared with the performance of more complicated strategies, as we can see in Table 1. Note that both body shape and dynamics contribute to the similarity measure.
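Equations (1) and (2) can be sketched directly in code. This is a simplified NumPy version; partitioning the probe into non-overlapping subsequences and truncating the correlation at the end of the gallery are our assumptions about details the text leaves open.

```python
import numpy as np

def tanimoto(a, b):
    """Tanimoto similarity between two boolean silhouettes:
    |A intersect B| / |A union B|."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def baseline_similarity(probe, gallery, n_gait):
    """Eqs. (1)-(2): correlate each roughly one-period probe subsequence
    against the gallery and take the median of the per-subsequence maxima."""
    maxima = []
    for k in range(0, len(probe) - n_gait + 1, n_gait):
        sub = probe[k:k + n_gait]
        # Eq. (1): correlation of this subsequence at every gallery shift l.
        corr = [sum(tanimoto(sub[j], gallery[l + j]) for j in range(n_gait))
                for l in range(len(gallery) - n_gait + 1)]
        maxima.append(max(corr))
    # Eq. (2): median over subsequences of the best correlation.
    return float(np.median(maxima))
```

The median makes the score robust to a few subsequences ruined by segmentation errors, which is exactly the motivation given above.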
Figure 4. Illustration of the warping between sequences of different sizes.

3.2 Single Gait Cycle Sequences

The baseline similarity computation strategy outlined above will not work, as is, for computing similarity from manual silhouettes, since they are defined over only one gait cycle; the correlation step is not needed. Instead, we simply linearly time-warp the sequences, keeping the same frame-to-frame Tanimoto similarity measure. Let the two silhouette sequences be denoted by S_1 = {S_1(1), ..., S_1(M)} and S_2 = {S_2(1), ..., S_2(N)}. Without loss of generality, let M >= N. Then the similarity between the two sequences is

\mathrm{Sim}_1(S_1, S_2) = \sum_{i=1}^{N} S\left( S_1\left( \left\lceil \frac{iM}{N} \right\rceil \right), S_2(i) \right)    (3)

where the similarity between two image frames, S(S_1(i), S_2(j)), is again the Tanimoto similarity between the silhouettes, i.e., the ratio of the number of common pixels to the number of pixels in their union. The overall similarity measure is this warped distance between the two sequences. Figure 4 illustrates the warping. Note that since the starting and ending stances of the two sequences are guaranteed to match, the chosen linear warping strategy is reasonable.

4. RECOGNITION RESULTS

We compare the recognition from manual silhouettes with recognition based on automated silhouettes, both over one gait cycle and over multiple gait cycles. The comparison over one gait cycle is, of course, the fairer of the two. We match each probe sequence to the gallery sequences, obtaining a similarity matrix whose size is the number of probe sequences by the gallery size. Following the pattern of the FERET evaluations,18 we measure performance for both identification and verification scenarios, using cumulative match characteristics (CMCs) and receiver operating characteristics (ROCs), respectively. In the identification scenario, the task is to identify a given probe as one of the given gallery sequences. To quantify performance, we sort the gallery sequences based on their computed similarities with the given probe.
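A minimal sketch of the linear time warping of Eq. (3), assuming frames are boolean NumPy arrays; using round() rather than a ceiling for the index iM/N is our choice.

```python
import numpy as np

def warped_similarity(s1, s2):
    """Eq. (3): linearly time-warp the longer single-cycle sequence onto the
    shorter one and accumulate per-frame Tanimoto similarities."""
    if len(s1) < len(s2):            # ensure M >= N without loss of generality
        s1, s2 = s2, s1
    m, n = len(s1), len(s2)
    total = 0.0
    for i in range(1, n + 1):
        j = int(round(i * m / n))    # frame i of s2 pairs with frame ~iM/N of s1
        a, b = s1[j - 1], s2[i - 1]  # the paper's indices are 1-based
        inter = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        total += inter / union if union else 0.0
    return total
```

Because both sequences start and end at the same stance (right heel strike), the linear index map pairs corresponding phases of the gait cycle, which is why no elastic (DTW-style) alignment is needed here.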
In terms of the similarity matrix, this corresponds to sorting the rows of the matrix. If the correct gallery sequence for the given probe occurs within rank k in this sorted set, then we have a successful identification at rank k. A cumulative match characteristic plots these identification rates (P_I) against the rank k. In the verification scenario, a
system either rejects or accepts the claim that a person is who they say they are. Operationally, a person presents (1) a new signature, the probe, and (2) an identity claim. The system then compares the probe with the stored gallery sequence corresponding to the claimed identity. The claim is accepted if the match between the probe and the gallery is above an operating threshold; otherwise it is rejected. For a given operating threshold, there is a verification rate (or detection rate) and a false accept rate; changing the threshold changes both. The complete set of verification and false accept rates is plotted as a receiver operating characteristic (ROC).

Table 2. Summary of the identification rate (P_I) at rank 1 and the verification rate (P_V) at a 1% false alarm rate (P_F) for the automated and manual silhouettes.

              | P_I at rank 1                            | P_V at 1% P_F
Exp.          | Auto. Multi | Auto. Single | Manual Single | Auto. Multi | Auto. Single | Manual Single
B (Shoe-type) | 81%         | 54%          | 49%           | 83%         | 59%          | 39%
D (Surface)   | 39%         | 24%          | %             | 46%         | %            | 21%
H (Briefcase) | 78%         | 37%          | 12%           | 73%         | 39%          | %
K (Time)      | 15%         | 9%           | 12%           | 15%         | 9%           | 9%

Figure 5. The cumulative match characteristics capturing identification performance from the automated multi-cycle, automated single-cycle, and manual single-cycle silhouettes for variation in (a) shoe-type, (b) surface, (c) briefcase, and (d) time between probe and gallery.
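The rank-based identification scoring described above can be sketched as follows. We assume a NumPy probe-by-gallery similarity matrix and ground-truth identity labels; ties in similarity are broken arbitrarily by the sort.

```python
import numpy as np

def cmc(sim, probe_ids, gallery_ids):
    """Cumulative match characteristic from a probe-by-gallery similarity
    matrix: P_I(k) is the fraction of probes whose correct gallery entry
    appears among the top-k most similar gallery entries."""
    n_probe, n_gallery = sim.shape
    gallery_ids = np.asarray(gallery_ids)
    ranks = np.empty(n_probe, dtype=int)
    for p in range(n_probe):
        order = np.argsort(-sim[p])                 # gallery, most similar first
        # Position of the correct identity in the sorted gallery (1-based rank).
        ranks[p] = np.flatnonzero(gallery_ids[order] == probe_ids[p])[0] + 1
    return np.array([(ranks <= k).mean() for k in range(1, n_gallery + 1)])
```

The ROC for the verification scenario comes from the same similarity matrix: sweeping a threshold over the match (same-identity) and non-match scores yields one (false-accept, verification) rate pair per threshold.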
Figure 6. The receiver operating characteristics capturing verification performance from the automated multi-cycle, automated single-cycle, and manual single-cycle silhouettes for variation in (a) shoe-type, (b) surface, (c) briefcase, and (d) time between probe and gallery.

First, we consider performance based on the whole silhouette. Figures 5 and 6 plot the cumulative match characteristics for the first 5 ranks and the receiver operating characteristics up to a 5% false alarm rate, respectively. The results are summarized in Table 2. We see that the identification rate at rank 1 for the automated multi-cycle silhouettes is 81%, 39%, 78%, and 15% for the shoe-type, surface, briefcase, and time experiments, respectively. These rates are, of course, higher than those obtained with just single-cycle gait sequences. For the automated single-cycle silhouettes, the corresponding numbers are 54%, 24%, 37%, and 9%; for the manual single-cycle silhouettes, they are 49%, %, 12%, and 12%. The non-parametric McNemar's test confirms that the differences in identification rates are statistically significant, except for the differences observed in the experiment across time. Comparing the single-cycle performances, we see that recognition with manual silhouettes outperforms that with automated silhouettes for all experiments except the time-difference experiment. The biggest improvement is for the experiment comparing sequences with a briefcase against sequences without one. The removal of the briefcases in the manual silhouettes possibly contributed to the higher recognition rates.
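McNemar's test, used above to check significance, compares two algorithms only on the probes where they disagree. A minimal exact (binomial) version might look like the sketch below; the function name and the two-sided convention are our choices, not taken from the paper.

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact (binomial) McNemar test.
    b = probes identified correctly only by method 1,
    c = probes identified correctly only by method 2;
    probes on which both methods agree are ignored.
    Returns the two-sided p-value for H0: both methods err equally often,
    i.e. the discordant count follows Binomial(b + c, 0.5)."""
    n = b + c
    if n == 0:
        return 1.0                       # no disagreements: no evidence either way
    tail = sum(comb(n, k) for k in range(0, min(b, c) + 1)) / 2.0 ** n
    return min(1.0, 2.0 * tail)          # double the smaller tail, cap at 1
```

For example, if the manual silhouettes win on 9 discordant probes and lose on 1, the test gives p below 0.05, so the difference would be declared significant at that level.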
Another observation worth noting is that performance when comparing sequences that differ in walking surface or in time remains low even with manual silhouettes, suggesting that silhouette quality does not fully explain the low performances. Second, we consider recognition from parts. What are the relative recognition rates based on silhouette portions from individual body parts, i.e., legs, hands, torso, or their combinations? The manual silhouettes, which are specified at the part level, readily facilitate such a study. Figure 7 shows the CMCs up to rank 5, and Table 3 summarizes the top
rank identification rates and the verification rates at a 1% false alarm rate, when considering different body portions for the different experiments.

Figure 7. The cumulative match characteristics capturing identification performance from body parts in the manual silhouettes for variations in (a) shoe-type, (b) surface, (c) briefcase, and (d) time between probe and gallery.

Table 3. The top-rank identification rates and verification rates at a 1% false alarm rate from different body parts for the key experiments.

           | P_I at rank 1                                  | P_V at 1% P_F
           | B (Shoe) | D (Surface) | H (Carry) | K (Time)  | B (Shoe) | D (Surface) | H (Carry) | K (Time)
Hands      | 22%      | %           | 8%        | 6%        | 24%      | 15%         | 12%       | 6%
Hands+Legs | 54%      | 24%         | 16%       | 12%       | 37%      | 18%         | 16%       | 9%
Head+Torso | 37%      | 23%         | 25%       | 12%       | 42%      | 23%         | 18%       | 9%
Legs       | 49%      | 15%         | 16%       | 12%       | 51%      | 18%         | 16%       | 12%
Full Body  | 49%      | %           | 12%       | 12%       | 39%      | 21%         | %         | 12%

We see that recognition from just the legs matches recognition from the full body. Recognition from the combination of legs and hands, which conveys the dynamic component of gait, somewhat outperforms recognition from full-body silhouettes. Also notice that significant recognition power exists in the head and torso portions, which essentially convey body shape.
5. CONCLUSION

We presented a large manual silhouette database for gait recognition and modeling research. The database contains part-level silhouettes, over one gait cycle, for 71 subjects, spanning variations in surface, shoe-type, carrying condition, and time. This database can be used by others for a variety of purposes, such as designing better silhouette detection algorithms, learning gait kinematic models, and studying human gait variation. It is marker-less, normal, non-treadmill gait data that is essentially uncorrupted by the placement of markers and other intrusive or restrictive devices. Many researchers have noted that the use of markers or a treadmill changes gait. Gait recognition experiments with the manual silhouettes suggest several interesting conclusions. First, recognition with manual silhouettes improved over that with automated silhouettes. However, while the improvement was largest when comparing sequences with and without a briefcase, it was not large for the surface and time variation experiments. This suggests that automated silhouette quality only partly explains the low performance on these experiments. Second, recognition from just the silhouette portions of the legs is almost equal to that from the whole body. The head and torso portions, which capture static body shape, also have recognition power. This suggests that both gait dynamics and body shape contribute to recognition from gait video.

6. ACKNOWLEDGMENT

This research was supported by funds from DARPA (AFOSR-F ) and the USF College of Engineering REU program. Thanks to the gait researchers at CMU, MIT, and Southampton for helping correct some of the errors in the manual silhouettes.

REFERENCES

1. J. E. Cutting and L. T. Kozlowski, "Recognition of friends by their walk," Bulletin of the Psychonomic Society 9.
2. G. Johansson, "Visual motion perception," Scientific American 232, June.
3. S. Niyogi and E. Adelson, "Analyzing gait with spatiotemporal surfaces," in Computer Vision and Pattern Recognition.
4. J. Little and J. Boyd, "Recognizing people by their gait: The shape of motion," Videre 1(2), pp. 1-33.
5. J. Shutler, M. Nixon, and C. Carter, "Statistical gait description via temporal moments," in 4th IEEE Southwest Symp. on Image Analysis and Interpretation.
6. A. Bobick and A. Johnson, "Gait recognition using static, activity-specific parameters," in Computer Vision and Pattern Recognition.
7. R. Tanawongsuwan and A. Bobick, "Gait recognition from time-normalized joint-angle trajectories in the walking plane," in Computer Vision and Pattern Recognition.
8. G. Shakhnarovich, L. Lee, and T. Darrell, "Integrated face and gait recognition from multiple views," in Computer Vision and Pattern Recognition.
9. J. Hayfron-Acquah, M. Nixon, and J. Carter, "Automatic gait recognition by symmetry analysis," in International Conference on Audio- and Video-Based Biometric Person Authentication.
10. C. BenAbdelkader, R. Cutler, and L. Davis, "Motion-based recognition of people in eigengait space," in International Conference on Automatic Face and Gesture Recognition.
11. L. Lee and W. Grimson, "Gait analysis for recognition and classification," in International Conference on Automatic Face and Gesture Recognition.
12. A. Kale, A. Rajagopalan, N. Cuntoor, and V. Kruger, "Gait-based recognition of humans using continuous HMMs," in International Conference on Automatic Face and Gesture Recognition.
13. R. Collins, R. Gross, and J. Shi, "Silhouette-based human identification from body shape and gait," in International Conference on Automatic Face and Gesture Recognition.
14. I. Robledo Vega and S. Sarkar, "Representation of the evolution of feature relationship statistics: Human gait-based recognition," IEEE Trans. Pattern Anal. and Mach. Intel. 25, Oct.
15. L. Wang, W. Hu, and T. Tan, "A new attempt to gait-based human identification," in International Conference on Pattern Recognition.
16. P. Jonathon Phillips, S. Sarkar, I. Robledo, P. Grother, and K. Bowyer, "Baseline results for the challenge problem of Human ID using gait analysis," in International Conference on Automatic Face and Gesture Recognition.
17. P. Jonathon Phillips, S. Sarkar, I. Robledo, P. Grother, and K. Bowyer, "The gait identification challenge problem: Data sets and baseline algorithm," in International Conference on Pattern Recognition.
18. P. Jonathon Phillips, H. Moon, S. Rizvi, and P. Rauss, "The FERET evaluation methodology for face-recognition algorithms," IEEE Trans. Pattern Anal. and Mach. Intel. 22.
19. N. Cuntoor, A. Kale, and R. Chellappa, "Combining multiple evidences for gait recognition," in IEEE International Conference on Acoustics, Speech and Signal Processing.
20. A. Kale, C. B., B. Yegnanarayana, A. N. Rajagopalan, and R. Chellappa, "Gait analysis for human identification," in International Conference on Audio- and Video-Based Biometric Person Authentication.
21. A. Sunderesan, A. K. Roy Chowdhury, and R. Chellappa, "A hidden Markov model based framework for recognition of humans from gait sequences," in IEEE International Conference on Image Processing.
22. D. Tolliver and R. Collins, "Gait shape estimation for identification," in International Conference on Audio- and Video-Based Biometric Person Authentication.
23. L. Lee, G. Dalley, and K. Tieu, "Learning pedestrian models for silhouette refinement," in International Conference on Computer Vision.
24. L. Wang, T. Tan, H. Ning, and W. Hu, "Silhouette analysis-based gait recognition for human identification," IEEE Trans. Pattern Anal. and Mach. Intel. 25, Dec.
More informationGabor Jets. Gabor Wavelets. Gabor Transform
G Gabor Jets Gabor Wavelets Gabor jets are a set of filters that are used to extract the local frequency information from the face images. These filters are generally linear filter with impulse responses
More informationCHAPTER 1 GAIT-BASED HUMAN IDENTIFICATION FROM A MONOCULAR VIDEO SEQUENCE
CHAPTER 1 GAIT-BASED HUMAN IDENTIFICATION FROM A MONOCULAR VIDEO SEQUENCE Amit Kale Center for Visualization and Virtual Environments 1, Quality St Suite 800-B KY 40507 USA E-mail: amit@cs.uky.edu Aravind
More informationActivity and Individual Human Recognition in Infrared Imagery
Activity and Individual Human Recognition in Infrared Imagery Bir Bhanu and Ju Han Center for Research in Intelligent Systems University of California, Riverside, California 92521, USA {bhanu, jhan}@cris.ucr.edu
More informationExpanding gait identification methods from straight to curved trajectories
Expanding gait identification methods from straight to curved trajectories Yumi Iwashita, Ryo Kurazume Kyushu University 744 Motooka Nishi-ku Fukuoka, Japan yumi@ieee.org Abstract Conventional methods
More informationGait Analysis for Criminal Identification. Based on Motion Capture. Nor Shahidayah Razali Azizah Abdul Manaf
Gait Analysis for Criminal Identification Based on Motion Capture Nor Shahidayah Razali Azizah Abdul Manaf Gait Analysis for Criminal Identification Based on Motion Capture Nor Shahidayah Razali, Azizah
More informationPart I: HumanEva-I dataset and evaluation metrics
Part I: HumanEva-I dataset and evaluation metrics Leonid Sigal Michael J. Black Department of Computer Science Brown University http://www.cs.brown.edu/people/ls/ http://vision.cs.brown.edu/humaneva/ Motivation
More informationCSE/EE-576, Final Project
1 CSE/EE-576, Final Project Torso tracking Ke-Yu Chen Introduction Human 3D modeling and reconstruction from 2D sequences has been researcher s interests for years. Torso is the main part of the human
More informationFace Recognition using Eigenfaces SMAI Course Project
Face Recognition using Eigenfaces SMAI Course Project Satarupa Guha IIIT Hyderabad 201307566 satarupa.guha@research.iiit.ac.in Ayushi Dalmia IIIT Hyderabad 201307565 ayushi.dalmia@research.iiit.ac.in Abstract
More informationPerson Identification using Shadow Analysis
IWASHITA, STOICA, KURAZUME: PERSON IDENTIFICATION USING SHADOW ANALYSIS1 Person Identification using Shadow Analysis Yumi Iwashita 1 yumi@ait.kyushu-u.ac.jp Adrian Stoica 2 adrian.stoica@jpl.nasa.gov Ryo
More informationDefinition, Detection, and Evaluation of Meeting Events in Airport Surveillance Videos
Definition, Detection, and Evaluation of Meeting Events in Airport Surveillance Videos Sung Chun Lee, Chang Huang, and Ram Nevatia University of Southern California, Los Angeles, CA 90089, USA sungchun@usc.edu,
More informationPeople Tracking and Segmentation Using Efficient Shape Sequences Matching
People Tracking and Segmentation Using Efficient Shape Sequences Matching Junqiu Wang, Yasushi Yagi, and Yasushi Makihara The Institute of Scientific and Industrial Research, Osaka University 8-1 Mihogaoka,
More informationSign Language Recognition using Dynamic Time Warping and Hand Shape Distance Based on Histogram of Oriented Gradient Features
Sign Language Recognition using Dynamic Time Warping and Hand Shape Distance Based on Histogram of Oriented Gradient Features Pat Jangyodsuk Department of Computer Science and Engineering The University
More informationGait Recognition from Time-normalized Joint-angle Trajectories in the Walking Plane
Gait Recognition from Time-normalized Joint-angle Trajectories in the Walking Plane Rawesak Tanawongsuwan and Aaron Bobick College of Computing, GVU Center, Georgia Institute of Technology Atlanta, GA
More informationRecognition Rate. 90 S 90 W 90 R Segment Length T
Human Action Recognition By Sequence of Movelet Codewords Xiaolin Feng y Pietro Perona yz y California Institute of Technology, 36-93, Pasadena, CA 925, USA z Universit a dipadova, Italy fxlfeng,peronag@vision.caltech.edu
Enhancing Person Re-identification by Integrating Gait Biometric. Zheng Liu, Zhaoxiang Zhang, Qiang Wu, Yunhong Wang. Laboratory of Intelligence Recognition and Image Processing, Beijing Key Laboratory
Gait Representation Using Flow Fields. Khalid Bashir (khalid@dcs.qmul.ac.uk), Tao Xiang (txiang@dcs.qmul.ac.uk), Shaogang Gong (sgg@dcs.qmul.ac.uk). School
Distance-driven Fusion of Gait and Face for Human Identification in Video. X. Geng, L. Wang, M. Li, Q. Wu, K. Smith-Miles. Proceedings of Image and Vision Computing New Zealand 2007, pp. 19-24, Hamilton.
A new gait-based identification method using local Gauss maps. Hazem El-Alfy, Ikuhisa Mitsugami and Yasushi Yagi. The Institute of Scientific and Industrial Research, Osaka University, Japan.
3D Gait Recognition Using Spatio-Temporal Motion Descriptors. Bogdan Kwolek, Tomasz Krzeszowski, Agnieszka Michalczuk, Henryk Josinski. AGH University of Science and Technology, 30 Mickiewicza Av.
CS 664 Segmentation. Daniel Huttenlocher. Grouping and perceptual organization: structural relationships between tokens (parallelism, symmetry, alignment), similarity of token properties; often strong psychophysical
CS231A Course Project Final Report: Sign Language Recognition with Unsupervised Feature Learning. Justin Chen, Stanford University. justinkchen@stanford.edu
A Performance Evaluation of HMM and DTW for Gesture Recognition. Josep Maria Carmona and Joan Climent. Barcelona Tech (UPC), Spain.
Gait analysis for person recognition using principal component analysis and support vector machines. O. V. Strukova, L. V. Shiripova and E. V. Myasnikov. Samara National Research University, Moskovskoe
An Evaluation of Multimodal 2D+3D Face Biometrics. Kyong I. Chang, Kevin W. Bowyer, and Patrick J. Flynn. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 27, No. 4, April 2005, p. 619.
A Multi-view Method for Gait Recognition Using Static Body Parameters. Amos Y. Johnson and Aaron F. Bobick. Georgia Tech, Atlanta, GA 30332. amos@cc.gatech.edu
Human Motion Detection and Tracking for Video Surveillance. Prithviraj Banerjee and Somnath Sengupta. Department of Electronics and Electrical Communication Engineering, Indian Institute of Technology, Kharagpur.
3D Tracking for Gait Characterization and Recognition. Raquel Urtasun and Pascal Fua. Computer Vision Laboratory, EPFL, Lausanne, Switzerland.
Random Subspace Method for Gait Recognition. Yu Guan, Chang-Tsun Li and Yongjian Hu. Department of Computer Science, University of Warwick, Coventry, UK.
Gait-based Recognition of Humans Using Continuous HMMs. A. Kale, A. N. Rajagopalan, N. Cuntoor and V. Krüger. Center for Automation Research, University of Maryland at College Park, College Park, MD 20742.
Support Vector Machines Applied to Face Recognition. P. Jonathon Phillips. National Institute of Standards and Technology, Gaithersburg, MD 20899. U.S. Department of Commerce, Technology Administration.
A Novel Approach to Access Control Based on Face Recognition. A. Hadid, M. Heikkilä, T. Ahonen, and M. Pietikäinen. Machine Vision Group, Infotech Oulu and Department of Electrical and Information Engineering
Detecting Coarticulation in Sign Language using Conditional Random Fields. Ruiduo Yang and Sudeep Sarkar. Computer Science and Engineering Department, University of South Florida, 4202 E. Fowler Ave., Tampa.
Covariate Analysis for View-Point Independent Gait Recognition. I. Bouchrika, M. Goffredo, J. N. Carter, and M. S. Nixon. ISIS, Department of Electronics and Computer Science, University of Southampton, SO17
Background Subtraction Techniques. Alan M. McIvor. Reveal Ltd, PO Box 128-221, Remuera, Auckland, New Zealand. alan.mcivor@reveal.co.nz
Deep Tracking: Biologically Inspired Tracking with Deep Convolutional Networks. Si Chen (The George Washington University, sichen@gwmail.gwu.edu) and Meera Hahn (Emory University, mhahn7@emory.edu).
Chapter 3: Problem Formulation and Research Methodology on the Soft Computing Based Approaches for Object Detection and Tracking in Videos.
Gait Recognition Using Static, Activity-Specific Parameters. Aaron F. Bobick and Amos Y. Johnson. GVU Center/College of Computing and Electrical and Computer Engineering, Georgia Tech, Atlanta, GA 30332.
Empirical Evaluation of Advanced Ear Biometrics. Ping Yan and Kevin W. Bowyer. Department of Computer Science and Engineering, University of Notre Dame, IN 46556.
Object and Action Detection from a Single Example. Peyman Milanfar (joint work with Hae Jong Seo). EE Department, University of California, Santa Cruz. AFOSR Program Review, June 2009.
On Moving Object Reconstruction By Moments. Stuart P. Prismall, Mark S. Nixon and John N. Carter. Image, Speech and Intelligent Systems, Department of Electronics and Computer Science, University of Southampton.
Unsupervised Human Members Tracking Based on a Silhouette Detection and Analysis Scheme. Costas Panagiotakis and Anastasios Doulamis.
Estimating Human Pose in Images. Navraj Singh, December 11, 2009.
Improvement of Background Subtraction Method for Real Time Moving Object Detection. Sina Adham Khiabani and Yun Zhang. University of New Brunswick, Department of Geodesy and Geomatics, Fredericton, Canada.
Human Action Recognition Using Independent Component Analysis. Masaki Yamazaki, Yen-Wei Chen and Gang Xu. Department of Media Technology, Ritsumeikan University, 1-1-1 Nojihigashi, Kusatsu, Shiga 525-8577.
Robust Tracking of People by a Mobile Robotic Agent. Rawesak Tanawongsuwan, Alexander Stoytchev, Irfan Essa. College of Computing, GVU Center, Georgia Institute of Technology, Atlanta, GA 30332-0280, USA.
An Object Detection System using Image Reconstruction with PCA. Luis Malagón-Borja and Olac Fuentes. Instituto Nacional de Astrofísica, Óptica y Electrónica, Puebla 72840, Mexico. jmb@ccc.inaoep.mx, fuentes@inaoep.mx
Detecting Pedestrians Using Patterns of Motion and Appearance. Paul Viola (Microsoft Research), Michael J. Jones and Daniel Snow (Mitsubishi Electric Research Labs). viola@microsoft.com
A Background Modeling Approach Based on Visual Background Extractor. Taotao Liu, Lin Qi and Guichi Liu. 4th International Conference on Mechatronics, Materials, Chemistry and Computer Engineering (ICMMCCE 2015).
Better than best: matching score based face registration. Luuk Spreeuwers. University of Twente, Fac. EEMCS, Signals and Systems Group, Hogekamp Building, 7522 NB Enschede, The Netherlands. l.j.spreeuwers@ewi.utwente.nl
Gait-Based Person Identification Robust to Changes in Appearance. Yumi Iwashita. Sensors 2013, 13, 7884-7901; doi:10.3390/s130607884.
MATLAB Based Interactive Music Player using XBOX Kinect (EN.600.461 Final Project). Gowtham G., Piyush R., Ashish K. (ggarime1, proutra1, akumar34)@jhu.edu
Pose Normalization for Robust Face Recognition Based on Statistical Affine Transformation. Xiujuan Chai, Shiguang Shan, Wen Gao. Vilab, Computer College, Harbin Institute of Technology, Harbin.
Detecting Pedestrians Using Patterns of Motion and Appearance. P. Viola, M. Jones, D. Snow. Mitsubishi Electric Research Laboratories, TR2003-90, August 2003. http://www.merl.com
Dynamic Human Shape Description and Characterization. Z. Cheng, S. Mosher, Jeanne Smith, H. Cheng, and K. Robinette. Infoscitex Corporation, Dayton, Ohio, USA; 711th Human Performance Wing, Air Force Research
Unsupervised Motion Classification by means of Efficient Feature Selection and Tracking. Angel D. Sappa, Niki Aifanti, Sotiris Malassiotis, Michael G. Strintzis. Computer Vision Center; Informatics & Telematics
Automated Threshold Detection for Object Segmentation in Colour Image. Md. Akhtaruzzaman, Amir A. Shafie and Md. Raisuddin Khan. Department of Mechatronics Engineering, Kulliyyah of Engineering, International
Motion Detection Algorithm. The International Journal of Computer Science & Applications (TIJCSA), Volume 1, No. 12, February 2013, ISSN 2278-1080. http://www.journalofcomputerscience.com/
A Fast Moving Object Detection Technique In Video Surveillance System. Paresh M. Tank and Darshak G. Thakore. Computer Engineering Department, BVM Engineering College, VV Nagar-388120, India.
Evaluating Example-based Pose Estimation: Experiments on the HumanEva Sets. Ronald Poppe. Human Media Interaction Group, Department of Computer Science, University of Twente, Enschede, The Netherlands.
On Clustering Human Gait Patterns. Brian DeCann, West Virginia University. In Proc. of the 22nd International Conference on Pattern Recognition (ICPR), Stockholm, Sweden, August 2014.
Video shot segmentation using late fusion technique. C. Krishna Mohan, N. Dhananjaya, B. Yegnanarayana. In Proc. Seventh International Conference on Machine Learning and Applications, 2008, San Diego.
Suspicious Activity Detection of Moving Object in Video Surveillance System. International Journal of Latest Engineering and Management Research (IJLEMR), ISSN: 2455-4847, Volume 1, Issue 5, June 2016, pp. 29-33.
Multi-View Gait Recognition Using 3D Convolutional Neural Networks. Thomas Wolf, Mohammadreza Babaee, Gerhard Rigoll. Technische Universität München, Institute for Human-Machine Communication, Theresienstraße
Learning the Three Factors of a Non-overlapping Multi-camera Network Topology. Xiaotang Chen, Kaiqi Huang, and Tieniu Tan. National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy
Cross-View Gait Recognition Using View-Dependent Discriminative Analysis. Al Mansur, Yasushi Makihara, Daigo Muramatsu and Yasushi Yagi. Osaka University. {mansur,makihara,muramatsu,yagi}@am.sanken.osaka-u.ac.jp
Spatial Frequency Domain Methods for Face and Iris Recognition. Dept. of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213. Kumar@ece.cmu.edu
Gait Recognition: Databases, Representations, and Applications. Yasushi Makihara, Darko S. Matovski, Mark S. Nixon, John N. Carter, Yasushi Yagi. Osaka University, Osaka, Japan; University of Southampton.
WP1: Video Data Analysis. Leading: UNICT; Participant: UEDIN. Fish4Knowledge Final Review Meeting, November 29, 2013, Luxembourg.
Using temporal seeding to constrain the disparity search range in stereo matching. Thulani Ndhlovu (Mobile Intelligent Autonomous Systems, CSIR, South Africa, tndhlovu@csir.co.za) and Fred Nicolls.
Segmentation and Tracking of Multiple Humans in Complex Situations. Tao Zhao, Ram Nevatia and Fengjun Lv. University of Southern California, Institute for Robotics and Intelligent Systems, Los Angeles, CA 90089-0273.
Short Survey on Static Hand Gesture Recognition. Huu-Hung Huynh and Duc-Hoang Vo. University of Science and Technology, The University of Danang, Vietnam.
Detecting Pedestrians Using Patterns of Motion and Appearance. International Journal of Computer Vision 63(2), 153-161, 2005. Springer.
Person Re-identification for Improved Multi-person Multi-camera Tracking by Continuous Entity Association. Neeti Narayan, Nishant Sankaran, Devansh Arpit, Karthik Dantu, Srirangaraj Setlur, Venu Govindaraju.
Neural Network Based Authentication using Compound Gait Biometric Features. C. Nandini (Dept. of CSE, DSATM, Bangalore) and Mohammed Tajuddin (Dept. of CSE, DSCE, Bangalore).