A Multi-view Method for Gait Recognition Using Static Body Parameters


Amos Y. Johnson (Electrical and Computer Engineering, Georgia Tech, Atlanta, GA; amos@cc.gatech.edu) and Aaron F. Bobick (GVU Center/College of Computing, Georgia Tech, Atlanta, GA; afb@cc.gatech.edu)

Abstract. A multi-view gait recognition method using recovered static body parameters of subjects is presented; we refer to these parameters as activity-specific biometrics. Our data consist of 18 subjects walking at both an angled and a frontal-parallel view with respect to the camera. When only data from a single view are considered, subjects are easily discriminated; however, discrimination decreases when data across views are considered. To compare between views, we use ground-truth motion-capture data of a reference subject to find scale factors that can transform data from different views into a common frame ("walking-space"). Instead of reporting percent correct from a limited database, we report our results using an expected confusion metric that allows us to predict how our static body parameters filter identity in a large population: lower confusion yields higher expected discrimination power. We show that by using motion-capture data to adjust vision data from different views to a common reference frame, we can achieve expected confusion rates on the order of 6%.

1 Introduction

Automatic gait recognition is a new, emerging research field with only a few researched techniques. It has the advantage of being unobtrusive, because body-invading equipment is not needed to capture gait information. From a surveillance perspective, gait recognition is an attractive modality because it may be performed at a distance, surreptitiously. In this paper we present a gait recognition technique that identifies people based on static body parameters recovered during the walking action across multiple views.
The hope is that, because these parameters are directly related to the three-dimensional structure of the person, they will be less sensitive to errors introduced by variation in view angle. Also, instead of reporting percent correct (or recognition rates) on a limited database of subjects, we derive an expected confusion metric that allows us to predict how well a given feature vector will filter identity over a large population.

J. Bigun and F. Smeraldi (Eds.): AVBPA 2001, LNCS 2091, pp. 301-311, © Springer-Verlag Berlin Heidelberg 2001

1.1 Previous Work

Perhaps the first papers in the area of gait recognition come from the field of psychology. Kozlowski and Cutting [8,4] determined that people could identify other people based solely on gait information. Stevenage, Nixon, and Vince [12] extended this work by exploring the limits of the human ability to identify other humans by gait under various viewing conditions.

Automatic gait-recognition techniques can be roughly divided into model-free and model-based approaches. Model-free approaches [7,9,10] analyze only the shape or motion a subject makes while walking, and the features recovered from the shape and motion are used for recognition. Model-based techniques either model the person [11] or model the walk of the person [3]. In person models, a body model is fit to the person in every frame of the walking sequence, and parameters (e.g., angular velocity, trajectory) are measured on the body model as it deforms over the walking sequence. In walking models, a model of how the person moves is created, and the parameters of the model are learned for every person.

Because the field is so new, most gait-recognition approaches analyze gait only from the side view, without exploring the variation in gait measurements caused by differing view angles. Also, the subject databases used for testing are typically small (often fewer than ten people); even so, results are reported as percent correct: that is, on how many trials the system could correctly recognize the individual by choosing its best match. Such a result gives little insight into how the technique might scale when the database contains hundreds, thousands, or more people.

1.2 Our Approach

Our approach to the study of gait recognition attempts to overcome these deficiencies by taking three steps that differ fundamentally from previous work.
First, we develop a gait-recognition method that recovers static body and stride parameters of subjects as they walk. Our technique does not directly analyze the dynamic gait pattern, but uses the action of walking to extract relative body parameters. This method is an example of what we call activity-specific biometrics: a method of extracting identifying properties of an individual, or of an individual's behavior, that is applicable only while the person is performing that specific action. Gait is an excellent example of this approach, because not only do people walk much of the time, making the data accessible, but many techniques for activity recognition are able to detect when someone is walking. Examples include the motion-history method of Bobick and Davis [5] and even the walker-identification method of Niyogi and Adelson [11].

Second, we develop a walking-space adjustment method that allows for the identification of a subject walking at different view angles to the viewing plane of a camera. Our static body parameters are related to the three-dimensional structure of the body, so they are less sensitive to variation in view angle. However, because of projection into an image, static body parameters recovered from different views need to be transformed to a common frame.

Finally, as opposed to reporting percent correct, we establish the uncertainty reduction that occurs when a measurement is taken. For a given measured property, we establish the spread of the density of the overall population. Doing so requires only enough subjects that our estimate of the population density approaches some stable value. It is with respect to that density that we determine the expected variation in the measurement when applied to a given individual.

The remainder of this paper is as follows: we describe the expected confusion metric used to evaluate our technique, present the gait-recognition method, and describe how to convert the different view-angle spaces to a common walking-space. Last, we assess the performance of our technique using the expected confusion metric.

2 Expected Confusion

As mentioned, our goal is not to report a percent correct of identification. To do so would require an extensive database of thousands of individuals observed under a variety of conditions. Rather, our goal is to characterize a particular measurement by how much it reduces the uncertainty of identity after the measurement is taken. Many approaches are possible. Each entails first estimating the probability density of a given property vector x for an entire population, P_p(x). Next we must estimate the uncertainty of that property for a given individual once the measurement is known, P_I(x | η = x_0) (interpreted as the probability density of the true value of the property x after the measurement η is taken). Finally, we need to express the average reduction in uncertainty, or the remaining confusion, that results after having taken the measurement.

Fig. 1. Uniform-probability illustration of how the density of the overall population compares to the individual uncertainty after the measurement is taken. The population density P_p(x) is 1/N on an interval of width N; the individual uncertainty P_i(x) is 1/M on a much narrower interval of width M. In this case the remaining confusion, the percentage of the population that could have given rise to the measurement, is M/N.
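The uniform-density illustration of Figure 1 admits a one-line check; the widths used below are hypothetical example values, not measurements from the paper:

```python
# Remaining confusion for the uniform-density illustration of Fig. 1:
# the population density is 1/N on [0, N]; the individual density is
# 1/M on [x0 - M/2, x0 + M/2]. The area of the population density that
# lies under the individual density is M/N.

def uniform_confusion(M, N):
    """Fraction of the population that could have produced the measurement."""
    assert 0 < M <= N, "individual spread must fit inside the population spread"
    return M / N

# Hypothetical widths: individual uncertainty 2 units, population spread 50.
print(uniform_confusion(2.0, 50.0))  # 0.04
```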

Information theory argues for a mutual-information [2] measure:

    I(X; Y) = H(X) - H(X|Y),                                    (1)

where H(X) is the entropy of a random variable X, defined by

    H(X) = - Σ_x p(x) ln p(x),

and H(X|Y) is the conditional entropy of a random variable X given another random variable Y, defined by

    H(X|Y) = - Σ_{x,y} p(x,y) ln p(x|y).

For our case, the random variable X is the underlying property (of identity) of an individual before a measurement is taken, and is represented by the population density of the particular metric used for identification. The random variable Y is an actual measurement retrieved from an individual, and is represented by a distribution of the individual variation of an identity measurement. Given these definitions, the uncertainty of the property (of identity) of the individual given a specific measurement, H(X|Y), is just the uncertainty of the measurement, H(Y). Therefore the mutual information reduces to:

    I(X; Y) = H(X) - H(Y).                                      (2)

Since the goal of gait recognition is filtering human identity, this derivation of mutual information is representative of filtering identity. However, we believe that a better assessment (and one comparable to mutual information) of a metric's ability to filter identity is the expected value of the percentage of the population eliminated after the measurement is taken. This is illustrated in Figure 1. Using a uniform density for illustration, we let the density of the feature in the population P_p be 1/N in the interval [0, N]. The individual density P_i is much narrower, being uniform in [x_0 - M/2, x_0 + M/2]. The confusion that remains is the area of the density P_p that lies under P_i. In this case, that confusion ratio is M/N.

An analogous measure can be derived for the Gaussian case under the assumption that the population standard deviation σ_p is much greater than the individual variation σ_i. In that case the expected confusion is simply the ratio σ_i/σ_p, the ratio of the standard deviation of the uncertainty after the measurement to that before the measurement is taken. Note that if the negative natural logarithm of this ratio is taken,

    -ln(σ_i/σ_p) = ln σ_p - ln σ_i,                             (3)

we arrive at an expression that is the mutual information (of two 1D Gaussian distributions) from Equation 2. For the multidimensional Gaussian case, the result is

    Expected Confusion = |Σ_i|^(1/2) / |Σ_p|^(1/2).             (4)
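A minimal sketch of Equation 4, assuming the individual and population covariance matrices have already been estimated (the matrices below are hypothetical):

```python
import numpy as np

def expected_confusion(cov_individual, cov_population):
    """Expected confusion of Equation 4: |Sigma_i|^(1/2) / |Sigma_p|^(1/2),
    the volume of the individual-variation hyper-ellipsoid over the volume
    of the population hyper-ellipsoid. Valid when the population spread is
    much larger than the individual variation, as assumed in the derivation."""
    cov_i = np.asarray(cov_individual, dtype=float)
    cov_p = np.asarray(cov_population, dtype=float)
    return np.sqrt(np.linalg.det(cov_i)) / np.sqrt(np.linalg.det(cov_p))

# 1D sanity check: reduces to sigma_i / sigma_p (variances 1 and 100 -> 0.1).
print(expected_confusion([[1.0]], [[100.0]]))  # 0.1
```

In one dimension this is exactly the σ_i/σ_p ratio above, and taking its negative natural logarithm recovers the mutual-information form of Equation 3.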

This quantity is the ratio of the individual-variation volume to the population volume. These are volumes of equal-probability hyper-ellipsoids as defined by the Gaussian densities. See [1] for the complete proof.

3 Gait Recognition Method

Using a single camera with the viewing plane perpendicular to the ground plane, 18 subjects walked in an open indoor space at two view angles: a 45° path toward the camera (angle-view), and a frontal-parallel path (side-view) in relation to the viewing plane of the camera. The side-view data was captured at two different depths, 3.9 meters and 8.3 meters from the camera. These three viewing conditions are used to evaluate our multi-view technique. In the following subsections we explain our body-part labeling technique and our depth-compensation method. The body-part labeling technique is used to arrive at the static body parameters of a subject. Depth compensation is used to compensate for depth changes of the subject as they walk. Lastly, before stating the results of the experiments, we present the static body parameters used and how we adjust for viewing angle.

Fig. 2. Automatic segmenting of the body silhouette into regions.

3.1 Body Part Labeling

Body parts are labeled by analyzing the binary silhouette of the subject in each video frame. Silhouettes are created by background subtraction using a static background frame. A series of morphological operations is applied to the resulting images to reduce noise. Once a silhouette is generated, a bounding box is placed around the silhouette and divided into three sections (head section, pelvis section, and foot section; see Figure 2) of predefined sizes, similar to the body-part labeling method in [6]. The head is located at the centroid of the pixels in the head section. The pelvis is contained in the pelvis section, and is the centroid of that section. The foot section houses the lower legs and feet, and is further sub-divided down the center into foot region 1 and foot region 2. Within each foot region, the distance (L2 norm) between each pixel and the previously discovered head location is calculated. The pixel location with the greatest distance in each region is labeled foot 1 or foot 2, respectively. The labels do not distinguish between the left and right foot because that is not necessary in

our technique.

Fig. 3. Displacement between the feet (vision data): the distance between the two feet, in pixels, versus frame number. The value increases as the subject approaches the camera. The curve is an average that underestimates the peak values but localizes them well. The lower trace indicates the maximal and minimal separations.

This method of body-part labeling tolerates imperfections in the silhouette due to noisy background subtraction by using local body-part searches and placing soft constraints on body-part locations.

3.2 Depth Compensation

The static body parameters used for identification are a set of distances between the body-part locations, measured in pixels; however, a conversion factor from pixels to centimeters is needed for the possible depth locations of the subjects in the video footage. Our depth-compensation method handles this by having a subject of known height walk at an angle toward the camera. At the points of minimal separation of the subject's feet (see Figure 3), the system measures the subject's height in pixels (taken to be the height of the bounding box around the subject) at that location on the ground plane. The minimal points represent the time instances where the subject is at his or her maximal height during the walking action. A conversion factor from pixels to centimeters at each known location on the ground (taken to be the lower y-value of the bounding box) is calculated by:

    Conversion Factor = known height (centimeters) / measured height (pixels).   (5)

To extrapolate conversion factors for the other, unknown locations on the ground plane, a hyperbola is fit to the known conversion factors. Assuming a world coordinate system located at the camera focal point and an image plane perpendicular to the ground plane, using perspective projection we derive a

conversion-factor hyperbola,

    Conversion Factor(y_b) = A / (B - y_b),                     (6)

where A is the vertical distance between the ground and the focal point times the focal length, B is the optical center (y component) of the image plus a residual (if the image plane is not exactly perpendicular to the ground), and y_b is the current y-location of the subject's feet. We implicitly estimate the parameters A and B by fitting the conversion-factor hyperbola (Equation 6) to the known locations of the subject and the conversion factors required to convert the measured height in pixels to the known height in centimeters (see Figure 4).

Fig. 4. Hyperbola fit to the data relating the lower y position of the bounding box (location on the ground) to the required conversion factor. The data points are generated by observing a subject of known height walking in the space.

3.3 Static Body Parameters

After body labeling and depth compensation, a 4D walk vector (the static body parameters) is computed as (see Figure 5):

d_1: the height of the bounding box around the silhouette.
d_2: the distance (L2 norm) between the head and pelvis locations.
d_3: the maximum of the distance between the pelvis and left-foot location and the distance between the pelvis and right-foot location.
d_4: the distance between the left and right foot.

These distances are concatenated to form a 4D walk vector w = <d_1, d_2, d_3, d_4>, and they are measured only when the subject's feet are maximally spread during the walking action. As subjects walk they have multiple maximally spread

points (see Figure 3), and the mean value of w at these points is taken to generate one walk vector per walking sequence. Measurements are taken only at these points because the body parts are not self-occluding there, and it is a repeatable point in the walk action at which to record similar measurements.

Fig. 5. The four static body parameters: w = <d_1, d_2, d_3, d_4>.

3.4 Walking-Space Adjustment

The static body parameters recovered from subjects at a single view angle produce high discrimination power. When comparing across views, however, discrimination power decreases. The most obvious reason is that foreshortening changes the value of many of the features. Furthermore, variations in how the part-labeling technique works in the different views can lead to a systematic variation between the views. Finally, other random error can occur when doing vision processing on actual imagery; this error will tend to be larger across different views. In this paper we did not attempt to adjust for random error, but instead compensate for a variety of systematic errors, including foreshortening.

We assume that the same systematic error is being made for all subjects at each view angle. Therefore, we can use one subject as a reference subject and use his vision data from the different view angles to find a scale factor that converts his vision data to a common frame, using his motion-capture data as the reference. Motion-capture data of a reference subject is considered to be ground-truth information about the subject, with minimal error. Our motion-capture system uses magnetic sensors to capture the three-dimensional position and orientation of the limbs of the subject as he (or she) walks along a platform. Sixteen

sensors in all are used: head (1), torso (2), pelvis (1), hands (2), forearms (2), upper arms (2), thighs (2), calves (2), and feet (2). If the error is truly systematic, then the scale factors found using the motion-capture system can be applied to the other subjects' vision data.

To achieve this, we model the error as a simple scaling in each dimension of the 4D walk vector, which can be removed by a constant scale factor per dimension. A mean 4D walk vector x = <d_x1, d_x2, d_x3, d_x4> is recovered from motion-capture walking sequences of a reference subject. Next, several vision-recovered 4D walk vectors w_ij = <d_w1, d_w2, d_w3, d_w4>, where i is the view angle and j is the walk-vector number, are found for the reference subject from the angle-view, the near-side-view, and the far-side-view. The walk vector x from the motion-capture system is used to find the constant scale factors needed to convert the vision data of the reference subject, for each dimension and view angle separately, by:

    S_ij = <d_x1/d_w1, d_x2/d_w2, d_x3/d_w3, d_x4/d_w4>,

where S_ij is the scale-factor vector for view angle i and walk vector j, and the scale-factor vector for a given view angle is

    SF_i = <sf_1, sf_2, sf_3, sf_4> = (1/N) Σ_{j=1}^{N} S_ij.   (7)

The 4D walk vectors of each subject are converted to walking-space by

    w_ij · SF_i = <d_1 sf_1, d_2 sf_2, d_3 sf_3, d_4 sf_4>.

3.5 Results

We recorded 18 subjects walking at the angle-view, far-side-view, and near-side-view. There are six data points (walk vectors) per subject for the angle-view, three data points per subject for the far side-view, and three data points per subject for the near side-view, yielding 108 walk vectors for the angle-view and 108 walk vectors for the side-view (54 far away and 54 close up). The results are listed in Table 1, which is divided into two sets: Expected Confusion and Recognition Rates. The Expected Confusion is the metric discussed in Section 2. The Recognition Rates are obtained using maximum likelihood: recognition is computed by modeling each individual as a single Gaussian and selecting the class with the greatest likelihood. Results are reported for the angle-view, near-side-view, and far-side-view. Finally, results are reported after the vision data was scaled to walking-space using

the appropriate scale factor based on the viewing condition.

Table 1. The results of the multi-view gait-recognition method using static body parameters.

    Viewing Condition                        Expected Confusion   Recognition Rate
    Angle View                               1.53%                100%
    Side View Far                            0.71%                91%
    Side View Near                           0.43%                96%
    Side View Adjusted (far and near)        4.57%                100%
    Combined Angle and Side Views Adjusted   6.37%                94%

The results in the last row, Combined Angle and Side Views Adjusted, are the numbers of interest, because this data set contains all the data adjusted using the walking-space adjustment technique. Once the data points are adjusted by the appropriate scale factors, the expected confusion of the side view (combining near and far) is only 4.57%. The combined angle and side views yield an expected confusion of 6.37%. This tells us that, under these different views, an individual's static body parameters will on average yield roughly 6% confusion with another individual's parameters.

4 Conclusion

This paper has demonstrated that gait recognition can be achieved with static body parameters. In addition, a method to reduce the variance between static body parameters recovered from different views was presented, using ground-truth information (motion-capture data) about the static body parameters of a reference subject. As with any new work, there are several next steps to be undertaken. We must expand our database to test how well the expected confusion metric predicts performance over larger databases. Experiments must be run under more view angles, so that the error over other possible views can be characterized. Also, the relationship between the motion-capture data and the vision data needs to be explored further, to find the best possible scaling parameters and reduce the expected confusion even lower than presented here. Lastly, in this paper we compensated for systematic error, not random error. In future work, we will analyze how to determine the random error and attempt to compensate for (or minimize the effects of) it.
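The walking-space adjustment of Section 3.4 can be summarized in a short sketch; the walk vectors below are hypothetical stand-ins for the motion-capture reference x and the vision-recovered vectors w_ij of Equation 7:

```python
import numpy as np

def view_scale_factors(x_mocap, vision_walk_vectors):
    """Scale-factor vector SF_i of Equation 7 for one view angle: the mean,
    over the reference subject's walk vectors from that view, of the
    element-wise ratios S_ij = x / w_ij."""
    x = np.asarray(x_mocap, dtype=float)
    S = [x / np.asarray(w, dtype=float) for w in vision_walk_vectors]
    return np.mean(S, axis=0)

def to_walking_space(w, sf):
    """Convert a 4D walk vector to walking-space by element-wise scaling."""
    return np.asarray(w, dtype=float) * np.asarray(sf, dtype=float)

# Hypothetical reference data for one view: a motion-capture walk vector (cm)
# and two vision-recovered walk vectors of the same reference subject.
x = np.array([170.0, 60.0, 90.0, 45.0])
w_ref = [[160.0, 57.0, 85.0, 42.0],
         [164.0, 58.0, 87.0, 43.0]]
sf = view_scale_factors(x, w_ref)

# Any subject's walk vector from this view is then rescaled the same way.
w_adjusted = to_walking_space([162.0, 57.5, 86.0, 42.5], sf)
```

Because the systematic error is assumed identical for all subjects at a given view, the single per-view vector SF_i is applied unchanged to every subject's walk vectors from that view.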

References

1. Bobick, A. F. and A. Y. Johnson, Expected Confusion as a Method of Evaluating Recognition Techniques, Technical Report GIT.GVU-01-01, Georgia Institute of Technology, 2001.
2. Cover, T. M. and J. A. Thomas, Elements of Information Theory, John Wiley & Sons, Inc., New York, 1991.
3. Cunado, D., M. S. Nixon, and J. N. Carter, Automatic Gait Recognition via Model-Based Evidence Gathering, accepted for IEEE AutoID99, Summit, NJ, 1999.
4. Cutting, J. and L. Kozlowski, Recognizing friends by their walk: Gait perception without familiarity cues, Bulletin of the Psychonomic Society, 9, 1977.
5. Davis, J. W. and A. F. Bobick, The representation and recognition of action using temporal templates, Proc. IEEE Computer Vision and Pattern Recognition, San Juan, Puerto Rico, 1997.
6. Haritaoglu, I., D. Harwood, and L. Davis, W4: Who, When, Where, What: A real-time system for detecting and tracking people, Proc. of Third Face and Gesture Recognition Conference, April 1998.
7. Huang, P. S., C. J. Harris, and M. S. Nixon, Human Gait Recognition in Canonical Space using Temporal Templates, IEE Proc. Vision, Image and Signal Processing, 146(2), 1999.
8. Kozlowski, L. and J. Cutting, Recognizing the sex of a walker from a dynamic point-light display, Perception and Psychophysics, 21, 1977.
9. Little, J. J. and J. E. Boyd, Recognizing people by their gait: the shape of motion, Videre, 1, 1998.
10. Murase, H. and R. Sakai, Moving object recognition in eigenspace representation: gait analysis and lip reading, Pattern Recognition Letters, 17, 1996.
11. Niyogi, S. and E. Adelson, Analyzing and Recognizing Walking Figures in XYT, Proc. Computer Vision and Pattern Recognition, Seattle, 1994.
12. Stevenage, S., M. S. Nixon, and K. Vince, Visual Analysis of Gait as a Cue to Identity, Applied Cognitive Psychology, 13, 1999.


Gait Extraction and Description by Evidence-Gathering Gait Extraction and Description by Evidence-Gathering David Cunado, Jason M. Nash, Mark S. Nixon and John N. Carter Department of Electronics and Computer Science University of Southampton Southampton

More information

Unsupervised Motion Classification by means of Efficient Feature Selection and Tracking

Unsupervised Motion Classification by means of Efficient Feature Selection and Tracking Unsupervised Motion Classification by means of Efficient Feature Selection and Tracking Angel D. Sappa Niki Aifanti Sotiris Malassiotis Michael G. Strintzis Computer Vision Center Informatics & Telematics

More information

Idle Object Detection in Video for Banking ATM Applications

Idle Object Detection in Video for Banking ATM Applications Research Journal of Applied Sciences, Engineering and Technology 4(24): 5350-5356, 2012 ISSN: 2040-7467 Maxwell Scientific Organization, 2012 Submitted: March 18, 2012 Accepted: April 06, 2012 Published:

More information

Detecting Moving Humans Using Color and Infrared Video

Detecting Moving Humans Using Color and Infrared Video IEEE Conference on Multisensor Fusion and Integration for Intelligent Systems 2003 11-3 Detecting Moving Humans Using Color and Infrared Video JuHan and BirBhanu Center for Research in Intelligent Systems

More information

A Real Time System for Detecting and Tracking People. Ismail Haritaoglu, David Harwood and Larry S. Davis. University of Maryland

A Real Time System for Detecting and Tracking People. Ismail Haritaoglu, David Harwood and Larry S. Davis. University of Maryland W 4 : Who? When? Where? What? A Real Time System for Detecting and Tracking People Ismail Haritaoglu, David Harwood and Larry S. Davis Computer Vision Laboratory University of Maryland College Park, MD

More information

Tri-modal Human Body Segmentation

Tri-modal Human Body Segmentation Tri-modal Human Body Segmentation Master of Science Thesis Cristina Palmero Cantariño Advisor: Sergio Escalera Guerrero February 6, 2014 Outline 1 Introduction 2 Tri-modal dataset 3 Proposed baseline 4

More information

Human Shape from Silhouettes using Generative HKS Descriptors and Cross-Modal Neural Networks

Human Shape from Silhouettes using Generative HKS Descriptors and Cross-Modal Neural Networks Human Shape from Silhouettes using Generative HKS Descriptors and Cross-Modal Neural Networks Endri Dibra 1, Himanshu Jain 1, Cengiz Öztireli 1, Remo Ziegler 2, Markus Gross 1 1 Department of Computer

More information

Human Gait Recognition using All Pair Shortest Path

Human Gait Recognition using All Pair Shortest Path 2011 International Conference on Software and Computer Applications IPCSIT vol.9 (2011) (2011) IACSIT Press, Singapore Human Gait Recognition using All Pair Shortest Path Jyoti Bharti 1+, M.K Gupta 2 1

More information

Human Gait Recognition

Human Gait Recognition Human Gait Recognition 1 Rong Zhang 2 Christian Vogler 1 Dimitris Metaxas 1 Department of Computer Science 2 Gallaudet Research Institute Rutgers University Gallaudet University 110 Frelinghuysen Road

More information

Automatic Gait Recognition Based on Statistical Shape Analysis

Automatic Gait Recognition Based on Statistical Shape Analysis 1120 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 12, NO. 9, SEPTEMBER 2003 Automatic Gait Recognition Based on Statistical Shape Analysis Liang Wang, Tieniu Tan, Senior Member, IEEE, Weiming Hu, and Huazhong

More information

CLASSIFICATION WITH RADIAL BASIS AND PROBABILISTIC NEURAL NETWORKS

CLASSIFICATION WITH RADIAL BASIS AND PROBABILISTIC NEURAL NETWORKS CLASSIFICATION WITH RADIAL BASIS AND PROBABILISTIC NEURAL NETWORKS CHAPTER 4 CLASSIFICATION WITH RADIAL BASIS AND PROBABILISTIC NEURAL NETWORKS 4.1 Introduction Optical character recognition is one of

More information

3. International Conference on Face and Gesture Recognition, April 14-16, 1998, Nara, Japan 1. A Real Time System for Detecting and Tracking People

3. International Conference on Face and Gesture Recognition, April 14-16, 1998, Nara, Japan 1. A Real Time System for Detecting and Tracking People 3. International Conference on Face and Gesture Recognition, April 14-16, 1998, Nara, Japan 1 W 4 : Who? When? Where? What? A Real Time System for Detecting and Tracking People Ismail Haritaoglu, David

More information

Gesture Recognition using Temporal Templates with disparity information

Gesture Recognition using Temporal Templates with disparity information 8- MVA7 IAPR Conference on Machine Vision Applications, May 6-8, 7, Tokyo, JAPAN Gesture Recognition using Temporal Templates with disparity information Kazunori Onoguchi and Masaaki Sato Hirosaki University

More information

Corner Detection. Harvey Rhody Chester F. Carlson Center for Imaging Science Rochester Institute of Technology

Corner Detection. Harvey Rhody Chester F. Carlson Center for Imaging Science Rochester Institute of Technology Corner Detection Harvey Rhody Chester F. Carlson Center for Imaging Science Rochester Institute of Technology rhody@cis.rit.edu April 11, 2006 Abstract Corners and edges are two of the most important geometrical

More information

CS 223B Computer Vision Problem Set 3

CS 223B Computer Vision Problem Set 3 CS 223B Computer Vision Problem Set 3 Due: Feb. 22 nd, 2011 1 Probabilistic Recursion for Tracking In this problem you will derive a method for tracking a point of interest through a sequence of images.

More information

Dynamic Human Shape Description and Characterization

Dynamic Human Shape Description and Characterization Dynamic Human Shape Description and Characterization Z. Cheng*, S. Mosher, Jeanne Smith H. Cheng, and K. Robinette Infoscitex Corporation, Dayton, Ohio, USA 711 th Human Performance Wing, Air Force Research

More information

Segmentation and Tracking of Partial Planar Templates

Segmentation and Tracking of Partial Planar Templates Segmentation and Tracking of Partial Planar Templates Abdelsalam Masoud William Hoff Colorado School of Mines Colorado School of Mines Golden, CO 800 Golden, CO 800 amasoud@mines.edu whoff@mines.edu Abstract

More information

SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS

SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS Cognitive Robotics Original: David G. Lowe, 004 Summary: Coen van Leeuwen, s1460919 Abstract: This article presents a method to extract

More information

Evaluation of Moving Object Tracking Techniques for Video Surveillance Applications

Evaluation of Moving Object Tracking Techniques for Video Surveillance Applications International Journal of Current Engineering and Technology E-ISSN 2277 4106, P-ISSN 2347 5161 2015INPRESSCO, All Rights Reserved Available at http://inpressco.com/category/ijcet Research Article Evaluation

More information

9.913 Pattern Recognition for Vision. Class I - Overview. Instructors: B. Heisele, Y. Ivanov, T. Poggio

9.913 Pattern Recognition for Vision. Class I - Overview. Instructors: B. Heisele, Y. Ivanov, T. Poggio 9.913 Class I - Overview Instructors: B. Heisele, Y. Ivanov, T. Poggio TOC Administrivia Problems of Computer Vision and Pattern Recognition Overview of classes Quick review of Matlab Administrivia Instructors:

More information

Tracking of Human Body using Multiple Predictors

Tracking of Human Body using Multiple Predictors Tracking of Human Body using Multiple Predictors Rui M Jesus 1, Arnaldo J Abrantes 1, and Jorge S Marques 2 1 Instituto Superior de Engenharia de Lisboa, Postfach 351-218317001, Rua Conselheiro Emído Navarro,

More information

Uniprojective Features for Gait Recognition

Uniprojective Features for Gait Recognition Uniprojective Features for Gait Recognition Daoliang Tan, Kaiqi uang, Shiqi Yu, and Tieniu Tan Center for Biometrics and Security Research, National Laboratory of Pattern Recognition, Institute of Automation,

More information

( ) =cov X Y = W PRINCIPAL COMPONENT ANALYSIS. Eigenvectors of the covariance matrix are the principal components

( ) =cov X Y = W PRINCIPAL COMPONENT ANALYSIS. Eigenvectors of the covariance matrix are the principal components Review Lecture 14 ! PRINCIPAL COMPONENT ANALYSIS Eigenvectors of the covariance matrix are the principal components 1. =cov X Top K principal components are the eigenvectors with K largest eigenvalues

More information

Detecting and Identifying Moving Objects in Real-Time

Detecting and Identifying Moving Objects in Real-Time Chapter 9 Detecting and Identifying Moving Objects in Real-Time For surveillance applications or for human-computer interaction, the automated real-time tracking of moving objects in images from a stationary

More information

Particle Filtering. CS6240 Multimedia Analysis. Leow Wee Kheng. Department of Computer Science School of Computing National University of Singapore

Particle Filtering. CS6240 Multimedia Analysis. Leow Wee Kheng. Department of Computer Science School of Computing National University of Singapore Particle Filtering CS6240 Multimedia Analysis Leow Wee Kheng Department of Computer Science School of Computing National University of Singapore (CS6240) Particle Filtering 1 / 28 Introduction Introduction

More information

Local Image Registration: An Adaptive Filtering Framework

Local Image Registration: An Adaptive Filtering Framework Local Image Registration: An Adaptive Filtering Framework Gulcin Caner a,a.murattekalp a,b, Gaurav Sharma a and Wendi Heinzelman a a Electrical and Computer Engineering Dept.,University of Rochester, Rochester,

More information

Use of Gait Energy Image in Implementation of Real Time Video Surveillance System

Use of Gait Energy Image in Implementation of Real Time Video Surveillance System IOSR Journal of Computer Engineering (IOSR-JCE) e-issn: 2278-0661, p- ISSN: 2278-8727Volume 16, Issue 1, Ver. 5 (Jan. 2014), PP 88-93 Use of Gait Energy Image in Implementation of Real Time Video Surveillance

More information

COSC160: Detection and Classification. Jeremy Bolton, PhD Assistant Teaching Professor

COSC160: Detection and Classification. Jeremy Bolton, PhD Assistant Teaching Professor COSC160: Detection and Classification Jeremy Bolton, PhD Assistant Teaching Professor Outline I. Problem I. Strategies II. Features for training III. Using spatial information? IV. Reducing dimensionality

More information

IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 12, NO. 9, SEPTEMBER Automatic Gait Recognition Based on Statistical Shape Analysis

IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 12, NO. 9, SEPTEMBER Automatic Gait Recognition Based on Statistical Shape Analysis TRANSACTIONS ON IMAGE PROCESSING, VOL. 12, NO. 9, SEPTEMBER 2003 1 Automatic Gait Recognition Based on Statistical Shape Analysis Liang Wang, Tieniu Tan, Senior Member,, Weiming Hu, and Huazhong Ning Abstract

More information

Structural Human Shape Analysis for Modeling and Recognition

Structural Human Shape Analysis for Modeling and Recognition Structural Human Shape Analysis for Modeling and Recognition Chutisant Kerdvibulvech 1 and Koichiro Yamauchi 2 1 Rangsit University, 52/347 Muang-Ake, Paholyothin Rd, Lak-Hok, Patum Thani 12000, Thailand

More information

Biometrics Technology: Image Processing & Pattern Recognition (by Dr. Dickson Tong)

Biometrics Technology: Image Processing & Pattern Recognition (by Dr. Dickson Tong) Biometrics Technology: Image Processing & Pattern Recognition (by Dr. Dickson Tong) References: [1] http://homepages.inf.ed.ac.uk/rbf/hipr2/index.htm [2] http://www.cs.wisc.edu/~dyer/cs540/notes/vision.html

More information

CS 231A Computer Vision (Fall 2012) Problem Set 3

CS 231A Computer Vision (Fall 2012) Problem Set 3 CS 231A Computer Vision (Fall 2012) Problem Set 3 Due: Nov. 13 th, 2012 (2:15pm) 1 Probabilistic Recursion for Tracking (20 points) In this problem you will derive a method for tracking a point of interest

More information

Unsupervised Human Members Tracking Based on an Silhouette Detection and Analysis Scheme

Unsupervised Human Members Tracking Based on an Silhouette Detection and Analysis Scheme Unsupervised Human Members Tracking Based on an Silhouette Detection and Analysis Scheme Costas Panagiotakis and Anastasios Doulamis Abstract In this paper, an unsupervised, automatic video human members(human

More information

Chapter 3 Image Registration. Chapter 3 Image Registration

Chapter 3 Image Registration. Chapter 3 Image Registration Chapter 3 Image Registration Distributed Algorithms for Introduction (1) Definition: Image Registration Input: 2 images of the same scene but taken from different perspectives Goal: Identify transformation

More information

Markerless human motion capture through visual hull and articulated ICP

Markerless human motion capture through visual hull and articulated ICP Markerless human motion capture through visual hull and articulated ICP Lars Mündermann lmuender@stanford.edu Stefano Corazza Stanford, CA 93405 stefanoc@stanford.edu Thomas. P. Andriacchi Bone and Joint

More information

Probabilistic Tracking and Reconstruction of 3D Human Motion in Monocular Video Sequences

Probabilistic Tracking and Reconstruction of 3D Human Motion in Monocular Video Sequences Probabilistic Tracking and Reconstruction of 3D Human Motion in Monocular Video Sequences Presentation of the thesis work of: Hedvig Sidenbladh, KTH Thesis opponent: Prof. Bill Freeman, MIT Thesis supervisors

More information

Gait Style and Gait Content: Bilinear Models for Gait Recognition Using Gait Re-sampling

Gait Style and Gait Content: Bilinear Models for Gait Recognition Using Gait Re-sampling Gait Style and Gait Content: Bilinear Models for Gait Recognition Using Gait Re-sampling Chan-Su Lee Department of Computer Science Rutgers University New Brunswick, NJ, USA chansu@caip.rutgers.edu Ahmed

More information

COLOR FIDELITY OF CHROMATIC DISTRIBUTIONS BY TRIAD ILLUMINANT COMPARISON. Marcel P. Lucassen, Theo Gevers, Arjan Gijsenij

COLOR FIDELITY OF CHROMATIC DISTRIBUTIONS BY TRIAD ILLUMINANT COMPARISON. Marcel P. Lucassen, Theo Gevers, Arjan Gijsenij COLOR FIDELITY OF CHROMATIC DISTRIBUTIONS BY TRIAD ILLUMINANT COMPARISON Marcel P. Lucassen, Theo Gevers, Arjan Gijsenij Intelligent Systems Lab Amsterdam, University of Amsterdam ABSTRACT Performance

More information

arxiv: v1 [cs.cv] 2 May 2016

arxiv: v1 [cs.cv] 2 May 2016 16-811 Math Fundamentals for Robotics Comparison of Optimization Methods in Optical Flow Estimation Final Report, Fall 2015 arxiv:1605.00572v1 [cs.cv] 2 May 2016 Contents Noranart Vesdapunt Master of Computer

More information

Automatic Tracking of Moving Objects in Video for Surveillance Applications

Automatic Tracking of Moving Objects in Video for Surveillance Applications Automatic Tracking of Moving Objects in Video for Surveillance Applications Manjunath Narayana Committee: Dr. Donna Haverkamp (Chair) Dr. Arvin Agah Dr. James Miller Department of Electrical Engineering

More information

AUTOMATED THRESHOLD DETECTION FOR OBJECT SEGMENTATION IN COLOUR IMAGE

AUTOMATED THRESHOLD DETECTION FOR OBJECT SEGMENTATION IN COLOUR IMAGE AUTOMATED THRESHOLD DETECTION FOR OBJECT SEGMENTATION IN COLOUR IMAGE Md. Akhtaruzzaman, Amir A. Shafie and Md. Raisuddin Khan Department of Mechatronics Engineering, Kulliyyah of Engineering, International

More information

A Layered Deformable Model for Gait Analysis

A Layered Deformable Model for Gait Analysis A Layered Deformable Model for Gait Analysis Haiping Lu, K.N. Plataniotis and A.N. Venetsanopoulos Bell Canada Multimedia Laboratory The Edward S. Rogers Sr. Department of Electrical and Computer Engineering

More information

EigenTracking: Robust Matching and Tracking of Articulated Objects Using a View-Based Representation

EigenTracking: Robust Matching and Tracking of Articulated Objects Using a View-Based Representation EigenTracking: Robust Matching and Tracking of Articulated Objects Using a View-Based Representation Michael J. Black and Allan D. Jepson Xerox Palo Alto Research Center, 3333 Coyote Hill Road, Palo Alto,

More information

Silhouette Coherence for Camera Calibration under Circular Motion

Silhouette Coherence for Camera Calibration under Circular Motion Silhouette Coherence for Camera Calibration under Circular Motion Carlos Hernández, Francis Schmitt and Roberto Cipolla Appendix I 2 I. ERROR ANALYSIS OF THE SILHOUETTE COHERENCE AS A FUNCTION OF SILHOUETTE

More information

An Edge-Based Approach to Motion Detection*

An Edge-Based Approach to Motion Detection* An Edge-Based Approach to Motion Detection* Angel D. Sappa and Fadi Dornaika Computer Vison Center Edifici O Campus UAB 08193 Barcelona, Spain {sappa, dornaika}@cvc.uab.es Abstract. This paper presents

More information

STATISTICS AND ANALYSIS OF SHAPE

STATISTICS AND ANALYSIS OF SHAPE Control and Cybernetics vol. 36 (2007) No. 2 Book review: STATISTICS AND ANALYSIS OF SHAPE by H. Krim, A. Yezzi, Jr., eds. There are numerous definitions of a notion of shape of an object. These definitions

More information

3D Face and Hand Tracking for American Sign Language Recognition

3D Face and Hand Tracking for American Sign Language Recognition 3D Face and Hand Tracking for American Sign Language Recognition NSF-ITR (2004-2008) D. Metaxas, A. Elgammal, V. Pavlovic (Rutgers Univ.) C. Neidle (Boston Univ.) C. Vogler (Gallaudet) The need for automated

More information

Distance-driven Fusion of Gait and Face for Human Identification in Video

Distance-driven Fusion of Gait and Face for Human Identification in Video X. Geng, L. Wang, M. Li, Q. Wu, K. Smith-Miles, Distance-Driven Fusion of Gait and Face for Human Identification in Video, Proceedings of Image and Vision Computing New Zealand 2007, pp. 19 24, Hamilton,

More information

GENDER PREDICTION BY GAIT ANALYSIS BASED ON TIME SERIES VARIATION OF JOINT POSITIONS

GENDER PREDICTION BY GAIT ANALYSIS BASED ON TIME SERIES VARIATION OF JOINT POSITIONS GENDER PREDICTION BY GAIT ANALYSIS BASED ON TIME SERIES VARIATION OF JOINT POSITIONS Ryusuke Miyamoto Dept. of Computer Science School of Science and Technology Meiji University 1-1-1 Higashimita Tama-ku

More information

This is a preprint of an article published in Computer Animation and Virtual Worlds, 15(3-4): , 2004.

This is a preprint of an article published in Computer Animation and Virtual Worlds, 15(3-4): , 2004. This is a preprint of an article published in Computer Animation and Virtual Worlds, 15(3-4):399-406, 2004. This journal may be found at: http://www.interscience.wiley.com Automated Markerless Extraction

More information

Summarization of Egocentric Moving Videos for Generating Walking Route Guidance

Summarization of Egocentric Moving Videos for Generating Walking Route Guidance Summarization of Egocentric Moving Videos for Generating Walking Route Guidance Masaya Okamoto and Keiji Yanai Department of Informatics, The University of Electro-Communications 1-5-1 Chofugaoka, Chofu-shi,

More information

FMA901F: Machine Learning Lecture 3: Linear Models for Regression. Cristian Sminchisescu

FMA901F: Machine Learning Lecture 3: Linear Models for Regression. Cristian Sminchisescu FMA901F: Machine Learning Lecture 3: Linear Models for Regression Cristian Sminchisescu Machine Learning: Frequentist vs. Bayesian In the frequentist setting, we seek a fixed parameter (vector), with value(s)

More information

W 4 : Real-Time Surveillance of People and Their Activities

W 4 : Real-Time Surveillance of People and Their Activities IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 22, NO. 8, AUGUST 2000 809 W 4 : Real-Time Surveillance of People and Their Activities Ismail Haritaoglu, Member, IEEE, David Harwood,

More information

Backpack: Detection of People Carrying Objects Using Silhouettes

Backpack: Detection of People Carrying Objects Using Silhouettes Backpack: Detection of People Carrying Objects Using Silhouettes Ismail Haritaoglu, Ross Cutler, David Harwood and Larry S. Davis Computer Vision Laboratory University of Maryland, College Park, MD 2742

More information

A Performance Evaluation of HMM and DTW for Gesture Recognition

A Performance Evaluation of HMM and DTW for Gesture Recognition A Performance Evaluation of HMM and DTW for Gesture Recognition Josep Maria Carmona and Joan Climent Barcelona Tech (UPC), Spain Abstract. It is unclear whether Hidden Markov Models (HMMs) or Dynamic Time

More information

Texture Classification by Combining Local Binary Pattern Features and a Self-Organizing Map

Texture Classification by Combining Local Binary Pattern Features and a Self-Organizing Map Texture Classification by Combining Local Binary Pattern Features and a Self-Organizing Map Markus Turtinen, Topi Mäenpää, and Matti Pietikäinen Machine Vision Group, P.O.Box 4500, FIN-90014 University

More information

Fast Lighting Independent Background Subtraction

Fast Lighting Independent Background Subtraction Fast Lighting Independent Background Subtraction Yuri Ivanov Aaron Bobick John Liu [yivanov bobick johnliu]@media.mit.edu MIT Media Laboratory February 2, 2001 Abstract This paper describes a new method

More information

Euclidean Reconstruction Independent on Camera Intrinsic Parameters

Euclidean Reconstruction Independent on Camera Intrinsic Parameters Euclidean Reconstruction Independent on Camera Intrinsic Parameters Ezio MALIS I.N.R.I.A. Sophia-Antipolis, FRANCE Adrien BARTOLI INRIA Rhone-Alpes, FRANCE Abstract bundle adjustment techniques for Euclidean

More information

Today s Topics. Percentile ranks and percentiles. Standardized scores. Using standardized scores to estimate percentiles

Today s Topics. Percentile ranks and percentiles. Standardized scores. Using standardized scores to estimate percentiles Today s Topics Percentile ranks and percentiles Standardized scores Using standardized scores to estimate percentiles Using µ and σ x to learn about percentiles Percentiles, standardized scores, and the

More information

4th Grade Math: State Standards, MPS Objectives and Essential Learnings

4th Grade Math: State Standards, MPS Objectives and Essential Learnings Grade Math: s, s and s MA 4.1 Students will communicate number sense concepts using multiple representations to reason, solve problems, and make connections within mathematics and across disciplines. MA

More information

Tutorial: Using Tina Vision s Quantitative Pattern Recognition Tool.

Tutorial: Using Tina Vision s Quantitative Pattern Recognition Tool. Tina Memo No. 2014-004 Internal Report Tutorial: Using Tina Vision s Quantitative Pattern Recognition Tool. P.D.Tar. Last updated 07 / 06 / 2014 ISBE, Medical School, University of Manchester, Stopford

More information

Human Hand Gesture Recognition Using Motion Orientation Histogram for Interaction of Handicapped Persons with Computer

Human Hand Gesture Recognition Using Motion Orientation Histogram for Interaction of Handicapped Persons with Computer Human Hand Gesture Recognition Using Motion Orientation Histogram for Interaction of Handicapped Persons with Computer Maryam Vafadar and Alireza Behrad Faculty of Engineering, Shahed University Tehran,

More information

Scott Foresman Investigations in Number, Data, and Space Content Scope & Sequence

Scott Foresman Investigations in Number, Data, and Space Content Scope & Sequence Scott Foresman Investigations in Number, Data, and Space Content Scope & Sequence Correlated to Academic Language Notebooks The Language of Math Grade 4 Content Scope & Sequence Unit 1: Factors, Multiples,

More information

A Bottom Up Algebraic Approach to Motion Segmentation

A Bottom Up Algebraic Approach to Motion Segmentation A Bottom Up Algebraic Approach to Motion Segmentation Dheeraj Singaraju and RenéVidal Center for Imaging Science, Johns Hopkins University, 301 Clark Hall, 3400 N. Charles St., Baltimore, MD, 21218, USA

More information

Chapter 7: Computation of the Camera Matrix P

Chapter 7: Computation of the Camera Matrix P Chapter 7: Computation of the Camera Matrix P Arco Nederveen Eagle Vision March 18, 2008 Arco Nederveen (Eagle Vision) The Camera Matrix P March 18, 2008 1 / 25 1 Chapter 7: Computation of the camera Matrix

More information

Detecting and Tracking Moving Objects for Video Surveillance. Isaac Cohen and Gerard Medioni University of Southern California

Detecting and Tracking Moving Objects for Video Surveillance. Isaac Cohen and Gerard Medioni University of Southern California Detecting and Tracking Moving Objects for Video Surveillance Isaac Cohen and Gerard Medioni University of Southern California Their application sounds familiar. Video surveillance Sensors with pan-tilt

More information

Performance Characterization in Computer Vision

Performance Characterization in Computer Vision Performance Characterization in Computer Vision Robert M. Haralick University of Washington Seattle WA 98195 Abstract Computer vision algorithms axe composed of different sub-algorithms often applied in

More information

Short Survey on Static Hand Gesture Recognition

Short Survey on Static Hand Gesture Recognition Short Survey on Static Hand Gesture Recognition Huu-Hung Huynh University of Science and Technology The University of Danang, Vietnam Duc-Hoang Vo University of Science and Technology The University of

More information

Efficient Acquisition of Human Existence Priors from Motion Trajectories

Efficient Acquisition of Human Existence Priors from Motion Trajectories Efficient Acquisition of Human Existence Priors from Motion Trajectories Hitoshi Habe Hidehito Nakagawa Masatsugu Kidode Graduate School of Information Science, Nara Institute of Science and Technology

More information

Samuel Coolidge, Dan Simon, Dennis Shasha, Technical Report NYU/CIMS/TR

Samuel Coolidge, Dan Simon, Dennis Shasha, Technical Report NYU/CIMS/TR Detecting Missing and Spurious Edges in Large, Dense Networks Using Parallel Computing Samuel Coolidge, sam.r.coolidge@gmail.com Dan Simon, des480@nyu.edu Dennis Shasha, shasha@cims.nyu.edu Technical Report

More information

Prof. Fanny Ficuciello Robotics for Bioengineering Visual Servoing

Prof. Fanny Ficuciello Robotics for Bioengineering Visual Servoing Visual servoing vision allows a robotic system to obtain geometrical and qualitative information on the surrounding environment high level control motion planning (look-and-move visual grasping) low level

More information

Measurement of 3D Foot Shape Deformation in Motion

Measurement of 3D Foot Shape Deformation in Motion Measurement of 3D Foot Shape Deformation in Motion Makoto Kimura Masaaki Mochimaru Takeo Kanade Digital Human Research Center National Institute of Advanced Industrial Science and Technology, Japan The

More information

Motion Detection Algorithm

Motion Detection Algorithm Volume 1, No. 12, February 2013 ISSN 2278-1080 The International Journal of Computer Science & Applications (TIJCSA) RESEARCH PAPER Available Online at http://www.journalofcomputerscience.com/ Motion Detection

More information

Intelligent Cutting of the Bird Shoulder Joint

Intelligent Cutting of the Bird Shoulder Joint Intelligent Cutting of the Bird Shoulder Joint Ai-Ping Hu, Sergio Grullon, Debao Zhou, Jonathan Holmes, Wiley Holcombe, Wayne Daley and Gary McMurray Food Processing Technology Division, ATAS Laboratory,

More information