Automatically Building Appearance Models from Image Sequences using Salient Features.
K.N. Walker, T.F. Cootes and C.J. Taylor
Dept. Medical Biophysics, Manchester University, UK
Tel: +44 (0) Fax: +44 (0)

Abstract

We address the problem of automatically placing landmarks across an image sequence to define correspondences between frames. The marked-up sequence is then used to build a statistical model of the appearance of the object within the sequence. We locate the most salient object features in the first frame and attempt to track them throughout the sequence. Salient features are those which have a low probability of being mis-classified as any other feature, and are therefore more likely to be tracked robustly throughout the sequence. The method automatically builds statistical models of the object's shape and the salient features' appearance as the sequence is tracked. These models are used in subsequent frames to further improve the probability of finding accurate matches. Results are shown for several face image sequences. The quality of the model is comparable with that generated from hand-labelled images.

1 Introduction

Statistical models of shape and appearance have proved powerful tools for interpreting images, particularly when combined with algorithms to match the models to new images rapidly [5, 4, 8]. In order to construct such models we require sets of labelled training images. The labels consist of landmark points defining the correspondences between similar structures in each image across the set. The most time-consuming and scientifically unsatisfactory part of building the models is the labelling of the training images. Manually placing hundreds of points on every image is both tedious and error-prone. To reduce the burden, semi-automatic systems have been developed. In these a model is built from the current set of examples (possibly with extra artificial modes included in the early stages) and used to search the next image.
The user can edit the result where necessary, then add the example to the training set. Though this can considerably reduce the time and effort required, labelling large sets of images is still difficult. We present a method which, given a sequence of an object and its outline in the first frame, automatically returns a set of correspondences across the entire sequence. We take advantage of the fact that it is easier to track correspondences across a sequence than to find
BMVC 1999 doi: /c.13.46
them between two arbitrary images from a training set. The correspondences found can then be used to build an appearance model of the object. The local image structure at each image point is described by vectors of Gaussian partial derivatives (or feature vectors) computed at a range of scales. We can calculate the spatial positions and scales of the salient features within the first frame by defining salient feature vectors [14, 13] to be those which lie in the low-density regions of the distribution of all feature vectors. Salient features have a low probability of being mistaken for any other feature within the object. They therefore have a low probability of generating false positive matches when searched for in subsequent frames. As the sequence is searched, we build statistical models of each salient feature. These models enable us to use all the information gained from searching previous frames to help find the correct matches in subsequent frames. We also build a model of the relative positions of the features and ensure that only sensible configurations of features are selected. In the following we briefly review how to construct appearance models. We describe how salient features are located in the first frame, and how they are used to derive correspondences across an image sequence. Finally we show some examples of automatically built appearance models and compare them with appearance models built from manually labelled training data.

2 Background

2.1 Saliency and Automatic Landmarking

Many authors have shown that using the saliency of image features can improve the robustness of object recognition algorithms [3, 12], but this has typically been applied to finding salient segments on an object boundary. Approaches to automatic landmark placement in 2D have assumed that contours have already been segmented from the training sets [7, 1]. This process can itself be time-consuming and involve human intervention.
Baumberg and Hogg [1] describe a system which generates landmarks automatically for outlines of walking people. While this process is satisfactory for silhouettes of pedestrians, it is unlikely to be generally successful. The authors went on to describe how the positions of the landmarks can be iteratively updated in order to improve the shape models generated from the landmarks [2]. Many people have used jets of Gabor wavelets or Gaussian partial derivatives to describe and locate features in new image examples [9, 10]. Typically the locations of the features modelled are selected by hand, choosing, for example, heavily textured areas. We believe that there is an optimum set of these features which best determines the object, and attempting to select them manually risks compromising system performance.

2.2 Appearance Models

An appearance model can represent both the shape and texture variability seen in a training set. The training set consists of labelled images, where key landmark points are marked on each example object.
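The PCA shape model built from such labelled landmark sets can be sketched in a few lines. This is a minimal sketch, not the paper's implementation: the random landmark matrix and the 98% retained-variance threshold are illustrative assumptions.

```python
# Sketch of a PCA shape model: x ≈ mean + P_s b_s.
# The random "shapes" stand in for real landmark training data.
import numpy as np

def build_shape_model(shapes, var_kept=0.98):
    """shapes: (N, 2n) array, one flattened landmark vector per example.
    Returns the mean shape and the orthogonal modes of shape variation."""
    mean = shapes.mean(axis=0)
    X = shapes - mean
    # PCA via SVD of the centred data matrix.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    var = s**2 / len(shapes)
    # Keep enough modes to explain var_kept of the total variance.
    k = np.searchsorted(np.cumsum(var) / var.sum(), var_kept) + 1
    return mean, Vt[:k].T            # modes are the columns of P_s

def fit_shape(x, mean, P):
    """Project a shape into the model: b_s = P_s^T (x - mean)."""
    return P.T @ (x - mean)

rng = np.random.default_rng(0)
shapes = rng.normal(size=(50, 20))   # 50 toy examples, 10 landmarks each
mean, P = build_shape_model(shapes)
b = fit_shape(shapes[0], mean, P)
recon = mean + P @ b                 # approximate reconstruction
```

The same construction, applied to the normalised grey-level samples instead of the landmark vectors, gives the texture model.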
Given such a set we can generate a statistical model of shape variation by applying Principal Component Analysis (PCA) to the set of vectors describing the shapes in the training set (see [5] for details). The labelled points, x, on a single object describe the shape of that object. Any example can then be approximated using:

x = x̄ + P_s b_s    (1)

where x̄ is the mean shape vector, P_s is a set of orthogonal modes of shape variation and b_s is a vector of shape parameters. To build a statistical model of the grey-level appearance we warp each example image so that its control points match the mean shape (using a triangulation algorithm). Figure 1 shows three examples of labelled faces. We then sample the intensity information from the shape-normalised image over the region covered by the mean shape. To minimise the effect of global lighting variation, we normalise the resulting samples.

Figure 1: Examples of faces labelled with consistent landmarks

By applying PCA to the normalised data we obtain a linear model:

g = ḡ + P_g b_g    (2)

where ḡ is the mean normalised grey-level vector, P_g is a set of orthogonal modes of intensity variation and b_g is a set of grey-level parameters. The shape and texture are correlated. By further analysis [4] we can derive a joint model of shape and texture:

x = x̄ + Q_x c    (3)
g = ḡ + Q_g c    (4)

where Q_x and Q_g represent modes of shape and texture variation respectively, and c is a vector of appearance model parameters. An example image can be synthesised for a given c by generating the shape-free grey-level image from the vector g and warping it using the control points described by x.

3 Automatically Building Appearance Models

To build an appearance model it is necessary to calculate a dense correspondence between all frames in the training set. This is typically achieved by placing consistent landmarks on all training frames and using a triangulation algorithm to approximate the dense
correspondence. We have attempted to find a consistent set of landmarks automatically by selecting salient features in the first image in the sequence. These are the features which have the lowest probability of being confused with any other features in that image. By searching for these features in subsequent images we minimise the probability of false positives. Robustness can be further improved by applying shape constraints learnt from previous frames to searches in subsequent frames. We begin by explaining how salient features are chosen from the initial frame.

3.1 Locating salient features in the first image

The aim is to locate salient features: those which are most likely to be relocated correctly in a subsequent frame. Given only one example of the object, the best we can do is to attempt to find those features which are significantly different from all other features in the object. Ideally, these features would occur exactly once in each example of the object. For every pixel within the object boundary we construct several feature vectors, each of which describes a feature centred on the pixel at one of a range of scales. In the following, the feature vectors used were the first and second order normalised Gaussian partial derivatives, giving a five-dimensional feature vector for each scale considered. Higher orders could be used, depending on the computational power available. The full set of vectors describing all object features (one per image pixel) forms a multi-variate distribution in a feature space. By modelling the density in feature space we can estimate how likely a given feature is to be confused with other features. Salient features lie in low-density areas of feature space.
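The five-dimensional feature vectors above can be sketched as follows. This assumes `scipy.ndimage` is available and uses illustrative scales and a random test image; the scale-normalisation by σ to the derivative order follows the usual convention for making responses comparable across scales.

```python
# Sketch of multi-scale feature vectors built from first and second
# order normalised Gaussian partial derivatives (5 values per scale).
import numpy as np
from scipy.ndimage import gaussian_filter

def feature_stack(img, scales=(2.0, 4.0, 8.0)):
    """Return a dict mapping scale -> (H, W, 5) array of feature vectors.
    Derivatives are scale-normalised by sigma**order."""
    out = {}
    for s in scales:
        # order=(row, col): e.g. (0, 1) is d/dx, (2, 0) is d2/dy2.
        Lx  = s    * gaussian_filter(img, s, order=(0, 1))
        Ly  = s    * gaussian_filter(img, s, order=(1, 0))
        Lxx = s**2 * gaussian_filter(img, s, order=(0, 2))
        Lxy = s**2 * gaussian_filter(img, s, order=(1, 1))
        Lyy = s**2 * gaussian_filter(img, s, order=(2, 0))
        out[s] = np.stack([Lx, Ly, Lxx, Lxy, Lyy], axis=-1)
    return out

rng = np.random.default_rng(1)
img = rng.normal(size=(64, 64))      # stand-in for a real frame
feats = feature_stack(img)
v = feats[4.0][32, 32]               # 5-D feature vector at one pixel/scale
```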
In the rest of this section we describe how we modelled the density of feature space and how this can be used to select salient features; see [14, 13] for a more detailed description.

Modelling the Density of Feature Space

We estimate the local density p̂ at a point x in feature space by summing the contributions from n Gaussian kernels, one centred on each sample, x_i, in the distribution:

p̂(x) = (1/n) Σ_{i=1}^{n} G(x − x_i; Σ)    (5)

where G(x; Σ) gives the probability of x under a zero-mean multi-variate Gaussian distribution with covariance Σ, and Σ = hS, where S is the covariance of the entire distribution and h is a scaling factor. This method of modelling the density of feature space gives very good results, but at a cost of O(n²) complexity. An approximation is to model the distribution with fewer than n Gaussian kernels [14].

Selecting Salient Features

The density estimate for each feature vector v(σ) corresponds directly to the saliency of the feature at scale σ. The lower the density, the more salient the point. A saliency image can be constructed for each scale analysed by setting each pixel to the probability density of the corresponding feature vector at that scale. This forms a
saliency image stack. The positions and scales of the most salient features are then found by locating the lowest troughs in the saliency images. To ensure that the features selected are distributed evenly across the object, the following algorithm is used:

REPEAT
    Select the next lowest trough in the saliency image stack
    IF the trough is more than 3σ pixels from any previously selected trough
        Mark the trough as representing a salient feature
    ELSE
        Discard the trough
    ENDIF
UNTIL no more troughs

Figure 2 shows a saliency image stack calculated at 3 scales from an image of a face. The features selected are shown on the original image; the circles around each feature indicate the scale of the feature (i.e. the region filtered in order to construct the feature vector). The result of this analysis is n salient features, each described by v_1i, s_i and x_1i, where v_ji is the feature vector of salient feature i in frame j, s_i is the scale at which feature i was found to be salient, and x_ji = (x_ji, y_ji) are the x and y coordinates of feature i in frame j.

3.2 Locating the features in the jth frame

When locating the features in the jth frame we make the following two assumptions:

- The features will not move more than a given number, r, of pixels between frames.
- The scale of the object, and therefore of the features, will not change significantly.

These assumptions help constrain the search and reduce processing time. Only πr² candidate pixels in each frame need be considered in order to locate the best match for one particular feature. A candidate pixel has an associated feature vector, v.
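The density estimate of equation (5) and the spaced trough selection above can be sketched together. This is a toy sketch: the feature set, bandwidth h, grid size and minimum separation are all illustrative assumptions, and the selection works on a single saliency image rather than a full stack.

```python
# Sketch of kernel-density saliency (eq. 5) and spaced trough selection.
import numpy as np

def density(features, h=0.25):
    """p_hat for every row of `features`, summing Gaussian kernels
    with covariance h * S, S = covariance of the whole set (O(n^2))."""
    n, d = features.shape
    S = np.cov(features.T) * h + 1e-9 * np.eye(d)   # keep S invertible
    Sinv = np.linalg.inv(S)
    norm = 1.0 / np.sqrt((2 * np.pi) ** d * np.linalg.det(S))
    diff = features[:, None, :] - features[None, :, :]   # (n, n, d)
    m = np.einsum('ijk,kl,ijl->ij', diff, Sinv, diff)    # Mahalanobis^2
    return norm * np.exp(-0.5 * m).mean(axis=1)          # (n,)

def select_salient(density_img, min_dist, k=5):
    """Pick up to k lowest troughs, each at least min_dist pixels
    from every previously selected trough (low density = salient)."""
    h_, w = density_img.shape
    chosen = []
    for idx in np.argsort(density_img, axis=None):
        y, x = divmod(int(idx), w)
        if all((y - cy) ** 2 + (x - cx) ** 2 >= min_dist ** 2
               for cy, cx in chosen):
            chosen.append((y, x))
            if len(chosen) == k:
                break
    return chosen

rng = np.random.default_rng(2)
feats = rng.normal(size=(100, 5))        # one 5-D vector per pixel
p = density(feats)                       # low density => salient
sal_img = p.reshape(10, 10)              # toy 10x10 saliency image
picks = select_salient(sal_img, min_dist=2)
```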
We assess the quality of match of v to the ith feature model using M_i(v):

M_i(v) = (v − v̄_i)ᵀ S⁻¹ (v − v̄_i)    (6)

where v̄_i is the mean of the matches for the ith feature, v̄_i = (1/(j−1)) Σ_{k=1}^{j−1} v_ki, and

S = C_{s_i}    if j = 2
S = (1/(j−1)) Σ_{k=1}^{j−1} (v_ki − v̄_i)(v_ki − v̄_i)ᵀ + rI    if j > 2

where C_σ is the covariance matrix of all the features at scale σ within the object boundary in frame one, and rI is a regularising term to avoid a singular matrix inversion. The smaller M_i(v), the better the match. M_i(v) is linearly related to the log of the probability that v comes from the distribution of feature model i.
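A minimal sketch of this match measure, assuming toy 5-D feature vectors, an identity frame-one covariance and an illustrative regulariser r:

```python
# Sketch of the match measure M_i(v) of eq. (6): Mahalanobis distance
# of a candidate feature vector to the feature's running model.
import numpy as np

def match_quality(v, past_matches, C_scale, r=1e-3):
    """past_matches: (j-1, d) feature vectors matched in frames 1..j-1.
    With a single past match (j = 2) the covariance falls back to
    C_scale, the covariance of all frame-one features at this scale."""
    mean = past_matches.mean(axis=0)
    if len(past_matches) < 2:
        S = C_scale
    else:
        d = past_matches - mean
        S = d.T @ d / len(past_matches) + r * np.eye(v.size)
    diff = v - mean
    return float(diff @ np.linalg.solve(S, diff))   # smaller = better

rng = np.random.default_rng(3)
C = np.eye(5)                          # toy frame-one covariance
past = rng.normal(size=(4, 5))         # matches from earlier frames
good = past.mean(axis=0)               # candidate at the model mean
bad = good + 5.0                       # candidate far from the mean
assert match_quality(good, past, C) < match_quality(bad, past, C)
```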
Figure 2: Images (b), (c) and (d) are saliency images extracted from image (a) at scales of 6, 7 and 8 standard deviations respectively. The circles on image (a) show the locations of the salient features selected. The size of each circle represents the image region that the salient feature describes (i.e. its scale).

By calculating M_i(v) for all candidates for a particular feature, i, we can form a similarity image. This is done by plotting M_i(v) in the image domain. We locate the m best matches for the feature by locating the m lowest troughs in the similarity image. Figure 3 shows an example of this. The shape of an object is represented by n salient features. We have selected m possible candidate matches for each salient feature. By selecting one candidate match for each salient feature we form an object hypothesis. Generating all possible combinations would result in mⁿ object hypotheses. In order to reduce the number of hypotheses, candidates which correspond to the largest similarity values, M_i(v), can be discarded (i.e. we remove the most improbable matches). The number of matches discarded depends on the computational power available. Each hypothesis is given a probability of being correct, based on the evidence obtained from previous frames. We define the probability of a hypothesis being correct to be equal to the probability of all features being correct together with the probability of the shape
formed by the salient features being correct.

Figure 3: Calculating a feature match in subsequent frames. (c) is the similarity image obtained whilst searching for the feature in (a) in the frame shown in (b). The troughs in (c) represent the matches; these are highlighted in (b).

The shape formed by the n salient features for a particular frame, j, is described by a 2n-element shape vector, x_j, where:

x_j = (x_j1, ..., x_jn, y_j1, ..., y_jn)ᵀ    (7)

The quality of fit of a shape vector, x_j, in frame j is given by:

M(x_j) = (x_j − x̄)ᵀ H⁻¹ (x_j − x̄)    (8)

where x̄ and H are the mean and covariance matrix of the shape vectors from frames 1 to j−1 after they have been aligned using Procrustes analysis [6]. If j = 2 then H = σI, where I is the (2n × 2n) identity matrix and σ is an estimate of the variance of the features between frames, in pixels. M(x_j) is also linearly related to the log probability that x_j is drawn from the distribution of shape vectors from frames 1 to j−1. We assume the features can be treated as independent of one another and of the global shape. A measure of the quality of a particular hypothesis, M(h_k), is thus given by:

M(h_k) = M(x_{h_k}) + Σ_{i=1}^{n} M_i(v_{h_k,i})    (9)

where x_{h_k} is the shape vector for hypothesis h_k and v_{h_k,i} is the feature vector that hypothesis h_k has selected to describe feature i. We select the hypothesis with the smallest M(h_k) as the most likely solution.

4 Results

The automatic method gives qualitatively good results if the sequence is correctly tracked. Figure 5(a) illustrates the modes of variation for an automatically trained model. As a
comparison we also built two other models. The first was trained using n hand-placed landmarks; the second was trained using only 4 landmarks placed at the corners of the smallest rectangle which could contain the face in each frame, essentially an eigenface [11]. This second model was used to examine the effect on the texture error if the texture and shape information were not modelled separately (no normalisation of shape).

Figure 4: Landmarks chosen manually (a) and automatically (b).

Figure 4 shows the positions of the landmarks for the hand-placed model (a) and the automatic model (b). Figure 5(b) shows the modes for the manually trained model. They appear very similar to those of the automatically trained model. It is difficult to assess the results quantitatively without considering the application to which the model will be put. For effective coding (compression), we require that truncating the number of model modes has a minimal effect on the reconstruction error. We can thus compare models by how the reconstruction error varies with the number of modes. Figure 6 shows how the reconstruction texture error (a) and the shape error (b) increase as the number of model modes decreases. The graphs show that both the texture error and the shape error are reduced by training the model automatically. The automatic method thus leads to a more compact model.

5 Discussion

We have demonstrated one possible approach to automatically training appearance models. The system both locates the most suitable features to use as model points and tracks their correspondence across frames. We have shown that the method can be used to build models of the human face automatically from image sequences. However, if features move and change shape significantly between frames, false correspondences can result. Also, in long sequences, features which were salient in the first frame are not necessarily still salient in the 50th frame.
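The hypothesis scoring of equation (9) in Section 3.2 can be sketched as follows. This is a toy sketch under stated assumptions: Procrustes alignment is omitted, the candidate positions, match costs and mean shape are invented values, and the brute-force enumeration stands in for the pruned search described in the text.

```python
# Sketch of hypothesis selection (eq. 9): each hypothesis picks one
# candidate per feature; its score is the shape Mahalanobis distance
# plus the per-feature match values; the smallest score wins.
import itertools
import numpy as np

def best_hypothesis(candidates, feature_costs, shape_mean, shape_cov_inv):
    """candidates: list over features of (m, 2) candidate positions.
    feature_costs: list over features of (m,) match values M_i(v).
    Returns (best combination of candidate indices, its score)."""
    best, best_score = None, np.inf
    n = len(candidates)
    for combo in itertools.product(*[range(len(c)) for c in candidates]):
        # Shape vector (x_1..x_n, y_1..y_n) for this combination.
        pts = np.array([candidates[i][combo[i]] for i in range(n)])
        x = np.concatenate([pts[:, 0], pts[:, 1]])
        d = x - shape_mean
        score = d @ shape_cov_inv @ d + sum(
            feature_costs[i][combo[i]] for i in range(n))
        if score < best_score:
            best, best_score = combo, score
    return best, best_score

# Toy example: 3 features, 2 candidates each; the first candidate of
# every feature sits on the mean shape and has the lower match cost.
mean_pts = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
cands = [np.stack([p, p + 4.0]) for p in mean_pts]
costs = [np.array([0.1, 2.0]) for _ in range(3)]
mean_shape = np.concatenate([mean_pts[:, 0], mean_pts[:, 1]])
combo, score = best_hypothesis(cands, costs, mean_shape, np.eye(6))
```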
Walker et al. [15] proposed a method of calculating the locations of salient features over a number of frames, given an approximate correspondence. This idea could be incorporated into our scheme. When
searching frame j, instead of locating features which were salient in frame 1, we could locate features which were salient in frames j−1 to, say, j−5. However, this would require additional work (e.g. using interpolation between points) to compute correspondences across the entire sequence.

Figure 5: Modes of variation for an automatically trained (a) and a manually trained (b) model.

The results presented in this paper used a relatively small number of landmarks (25). We have observed that the coarse-scale features are, in general, much more reliable than the fine-scale features. We would like to extend this scheme to find a greater number of correspondences by adopting a multi-scale approach. Coarse-scale salient features would be found first; the resulting coarse-scale correspondence could then be used to constrain a search for finer-scale salient features. This would lead to a more accurate overall correspondence. Our method would be well suited to the animation industry for building models of faces or other deformable objects, with the intention of using the models to produce photo-realistic animations.

References

[1] Adam Baumberg and David Hogg. Learning flexible models from image sequences. In 3rd European Conference on Computer Vision.
[2] Adam Baumberg and David Hogg. An adaptive eigenshape model. In David Pycock, editor, 6th British Machine Vision Conference. BMVA Press, September.
[3] R. C. Bolles and R. A. Cain. Recognising and locating partially visible objects: the local-feature-focus method. Int. J. Robotics Res., 1:57-82.
[4] T. F. Cootes, G. J. Edwards, and C. J. Taylor. Active appearance models. In H. Burkhardt and B. Neumann, editors, 5th European Conference on Computer Vision, volume 2. Springer.
Figure 6: Comparison of the texture error (a) and shape error (b) of an automatically built model with those of an eigenface model and a model trained using hand-placed landmarks. (Axes: error in pixels against number of modes in the model; curves: automatic model, hand-marked model, eigenface model.)

[5] Tim F. Cootes, Christopher J. Taylor, David H. Cooper, and Jim Graham. Active shape models - their training and application. Computer Vision and Image Understanding, 61(1):38-59, January.
[6] C. Goodall. Procrustes methods in the statistical analysis of shape. Journal of the Royal Statistical Society B, 53(2).
[7] Andrew Hill and Christopher J. Taylor. Automatic landmark identification using a new method of non-rigid correspondence. In 15th Conference on Information Processing in Medical Imaging.
[8] A. Lanitis, C. J. Taylor, and T. F. Cootes. Automatic interpretation and coding of face images using flexible models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7).
[9] Michael Lyons and Shigeru Akamatsu. Coding facial expressions with Gabor wavelets. In 3rd International Conference on Automatic Face and Gesture Recognition 1998, Nara, Japan.
[10] Hai Tao, Ricardo Lopez, and Thomas Huang. Tracking facial features using probabilistic network. In 3rd International Conference on Automatic Face and Gesture Recognition 1998, Nara, Japan.
[11] M. Turk and A. Pentland. Eigenfaces for recognition. Journal of Cognitive Neuroscience, 3(1):71-86.
[12] J. L. Turney, T. N. Mudge, and R. A. Voltz. Recognising partially occluded parts. IEEE Trans. PAMI, 7.
[13] K. N. Walker, T. F. Cootes, and C. J. Taylor. Locating salient facial features using image invariants. In 3rd International Conference on Automatic Face and Gesture Recognition 1998, Nara, Japan.
[14] K. N. Walker, T. F. Cootes, and C. J. Taylor.
Correspondence based on distinct points using image invariants. In 8th British Machine Vision Conference, Colchester, UK.
[15] K. N. Walker, T. F. Cootes, and C. J. Taylor. Locating salient object features. In P. H. Lewis and M. S. Nixon, editors, 9th British Machine Vision Conference, volume 2, Southampton, UK, September. BMVA Press.
More informationTraining Models of Shape from Sets of Examples
Training Models of Shape from Sets of Examples TECootes, CJ.Taylor, D.H.Cooper and J.Graham Department of Medical Biophysics University of Manchester Oxford Road Manchester M13 9PT email: bim@wiau.mb.man.ac.uk
More informationA Dynamic Human Model using Hybrid 2D-3D Representations in Hierarchical PCA Space
A Dynamic Human Model using Hybrid 2D-3D Representations in Hierarchical PCA Space Eng-Jon Ong and Shaogang Gong Department of Computer Science, Queen Mary and Westfield College, London E1 4NS, UK fongej
More informationOn-line, Incremental Learning of a Robust Active Shape Model
On-line, Incremental Learning of a Robust Active Shape Model Michael Fussenegger 1, Peter M. Roth 2, Horst Bischof 2, Axel Pinz 1 1 Institute of Electrical Measurement and Measurement Signal Processing
More informationLOCAL APPEARANCE BASED FACE RECOGNITION USING DISCRETE COSINE TRANSFORM
LOCAL APPEARANCE BASED FACE RECOGNITION USING DISCRETE COSINE TRANSFORM Hazim Kemal Ekenel, Rainer Stiefelhagen Interactive Systems Labs, University of Karlsruhe Am Fasanengarten 5, 76131, Karlsruhe, Germany
More informationWavelet Applications. Texture analysis&synthesis. Gloria Menegaz 1
Wavelet Applications Texture analysis&synthesis Gloria Menegaz 1 Wavelet based IP Compression and Coding The good approximation properties of wavelets allow to represent reasonably smooth signals with
More informationDetecting Salient Contours Using Orientation Energy Distribution. Part I: Thresholding Based on. Response Distribution
Detecting Salient Contours Using Orientation Energy Distribution The Problem: How Does the Visual System Detect Salient Contours? CPSC 636 Slide12, Spring 212 Yoonsuck Choe Co-work with S. Sarma and H.-C.
More informationThree-Dimensional Face Recognition: A Fishersurface Approach
Three-Dimensional Face Recognition: A Fishersurface Approach Thomas Heseltine, Nick Pears, Jim Austin Department of Computer Science, The University of York, United Kingdom Abstract. Previous work has
More informationHand Gesture Extraction by Active Shape Models
Hand Gesture Extraction by Active Shape Models Nianjun Liu, Brian C. Lovell School of Information Technology and Electrical Engineering The University of Queensland, Brisbane 4072, Australia National ICT
More informationData-driven Methods: Faces. Portrait of Piotr Gibas Joaquin Rosales Gomez
Data-driven Methods: Faces Portrait of Piotr Gibas Joaquin Rosales Gomez 15-463: Computational Photography Alexei Efros, CMU, Fall 2010 The Power of Averaging 8-hour exposure Atta Kim Fun with long exposures
More informationELEC Dr Reji Mathew Electrical Engineering UNSW
ELEC 4622 Dr Reji Mathew Electrical Engineering UNSW Review of Motion Modelling and Estimation Introduction to Motion Modelling & Estimation Forward Motion Backward Motion Block Motion Estimation Motion
More informationOptimal Sub-Shape Models by Minimum Description Length
Optimal Sub-Shape Models by Minimum Description Length Paper Number : 060 Abstract Active shape models are a powerful and widely used tool to interpret complex image data. By building models of shape variation
More informationA Distributed Approach to Image Interpretation Using Model-Based Spatial Reasoning
A Distributed Approach to Image Interpretation Using Model-Based Spatial Reasoning A.Ratter, O.Baujard, CJ.Taylor and T.F. Cootes Department of Medical Biophysics, Manchester University, Oxford Road, Manchester
More informationFace Modeling. Portrait of Piotr Gibas Joaquin Rosales Gomez
Face Modeling Portrait of Piotr Gibas Joaquin Rosales Gomez 15-463: Computational Photography Alexei Efros, CMU, Fall 2006 The Power of Averaging Figure-centric averages Antonio Torralba & Aude Oliva (2002)
More informationCS 565 Computer Vision. Nazar Khan PUCIT Lectures 15 and 16: Optic Flow
CS 565 Computer Vision Nazar Khan PUCIT Lectures 15 and 16: Optic Flow Introduction Basic Problem given: image sequence f(x, y, z), where (x, y) specifies the location and z denotes time wanted: displacement
More informationTracking humans from a moving platform
Tracking humans from a moving platform Larry Davis, Vasanth Philomin and Ramani Duraiswami Computer Vision Laboratory Institute for Advanced Computer Studies University of Maryland, College Park, MD 20742,
More informationThe Analysis of Animate Object Motion using Neural Networks and Snakes
The Analysis of Animate Object Motion using Neural Networks and Snakes Ken Tabb, Neil Davey, Rod Adams & Stella George e-mail {K.J.Tabb, N.Davey, R.G.Adams, S.J.George}@herts.ac.uk http://www.health.herts.ac.uk/ken/vision/
More informationVideo-Based Online Face Recognition Using Identity Surfaces
Video-Based Online Face Recognition Using Identity Surfaces Yongmin Li, Shaogang Gong and Heather Liddell Department of Computer Science, Queen Mary, University of London, London E1 4NS, UK Email: yongmin,sgg,heather
More informationHierarchical Shape Statistical Model for Segmentation of Lung Fields in Chest Radiographs
Hierarchical Shape Statistical Model for Segmentation of Lung Fields in Chest Radiographs Yonghong Shi 1 and Dinggang Shen 2,*1 1 Digital Medical Research Center, Fudan University, Shanghai, 232, China
More information2013, IJARCSSE All Rights Reserved Page 213
Volume 3, Issue 9, September 2013 ISSN: 2277 128X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com International
More informationBinary Principal Component Analysis
1 Binary Principal Component Analysis Feng Tang and Hai Tao University of California, Santa Cruz tang tao@soe.ucsc.edu Abstract Efficient and compact representation of images is a fundamental problem in
More informationApplication of the Active Shape Model in a commercial medical device for bone densitometry
Application of the Active Shape Model in a commercial medical device for bone densitometry H. H. Thodberg and A. Rosholm, Pronosco, Kohavevej 5, 2950 Vedbaek, Denmark hht@pronosco.com Abstract Osteoporosis
More informationLow Dimensional Surface Parameterisation with Applications in Biometrics
Low Dimensional Surface Parameterisation with Applications in Biometrics Wei Quan, Bogdan J. Matuszewski, Lik-Kwan Shark, Djamel Ait-Boudaoud Applied Digital Signal and Image Processing Centre University
More informationLecture 15 Active Shape Models
Lecture 15 Active Shape Models Methods in Medical Image Analysis - Spring 2018 16-725 (CMU RI) : BioE 2630 (Pitt) Dr. John Galeotti The content of these slides by John Galeotti, 2012-2018 Carnegie Mellon
More informationEECS150 - Digital Design Lecture 14 FIFO 2 and SIFT. Recap and Outline
EECS150 - Digital Design Lecture 14 FIFO 2 and SIFT Oct. 15, 2013 Prof. Ronald Fearing Electrical Engineering and Computer Sciences University of California, Berkeley (slides courtesy of Prof. John Wawrzynek)
More informationImage Segmentation and Registration
Image Segmentation and Registration Dr. Christine Tanner (tanner@vision.ee.ethz.ch) Computer Vision Laboratory, ETH Zürich Dr. Verena Kaynig, Machine Learning Laboratory, ETH Zürich Outline Segmentation
More informationChapter 9 Object Tracking an Overview
Chapter 9 Object Tracking an Overview The output of the background subtraction algorithm, described in the previous chapter, is a classification (segmentation) of pixels into foreground pixels (those belonging
More informationTowards the completion of assignment 1
Towards the completion of assignment 1 What to do for calibration What to do for point matching What to do for tracking What to do for GUI COMPSCI 773 Feature Point Detection Why study feature point detection?
More informationApplying Synthetic Images to Learning Grasping Orientation from Single Monocular Images
Applying Synthetic Images to Learning Grasping Orientation from Single Monocular Images 1 Introduction - Steve Chuang and Eric Shan - Determining object orientation in images is a well-established topic
More informationA Model Based Approach for Expressions Invariant Face Recognition
A Model Based Approach for Expressions Invariant Face Recognition Zahid Riaz 1, Christoph Mayer 1, Matthias Wimmer 1, Michael Beetz 1, Bernd Radig 1 1 Department of Informatics, Technische Universität
More informationImage and Multidimensional Signal Processing
Image and Multidimensional Signal Processing Professor William Hoff Dept of Electrical Engineering &Computer Science http://inside.mines.edu/~whoff/ Representation and Description 2 Representation and
More informationFacial Feature Points Tracking Based on AAM with Optical Flow Constrained Initialization
Journal of Pattern Recognition Research 7 (2012) 72-79 Received Oct 24, 2011. Revised Jan 16, 2012. Accepted Mar 2, 2012. Facial Feature Points Tracking Based on AAM with Optical Flow Constrained Initialization
More informationAutomatic Rapid Segmentation of Human Lung from 2D Chest X-Ray Images
Automatic Rapid Segmentation of Human Lung from 2D Chest X-Ray Images Abstract. In this paper, we propose a complete framework that segments lungs from 2D Chest X-Ray (CXR) images automatically and rapidly.
More informationUnsupervised learning in Vision
Chapter 7 Unsupervised learning in Vision The fields of Computer Vision and Machine Learning complement each other in a very natural way: the aim of the former is to extract useful information from visual
More informationAnnouncements. Recognition I. Gradient Space (p,q) What is the reflectance map?
Announcements I HW 3 due 12 noon, tomorrow. HW 4 to be posted soon recognition Lecture plan recognition for next two lectures, then video and motion. Introduction to Computer Vision CSE 152 Lecture 17
More information[Gaikwad *, 5(11): November 2018] ISSN DOI /zenodo Impact Factor
GLOBAL JOURNAL OF ENGINEERING SCIENCE AND RESEARCHES LBP AND PCA BASED ON FACE RECOGNITION SYSTEM Ashok T. Gaikwad Institute of Management Studies and Information Technology, Aurangabad, (M.S), India ABSTRACT
More informationEnsemble registration: Combining groupwise registration and segmentation
PURWANI, COOTES, TWINING: ENSEMBLE REGISTRATION 1 Ensemble registration: Combining groupwise registration and segmentation Sri Purwani 1,2 sri.purwani@postgrad.manchester.ac.uk Tim Cootes 1 t.cootes@manchester.ac.uk
More informationIntuitive, Localized Analysis of Shape Variability
Intuitive, Localized Analysis of Shape Variability Paul Yushkevich, Stephen M. Pizer, Sarang Joshi, and J. S. Marron Medical Image Display and Analysis Group, University of North Carolina at Chapel Hill.
More informationIn Between 3D Active Appearance Models and 3D Morphable Models
In Between 3D Active Appearance Models and 3D Morphable Models Jingu Heo and Marios Savvides Biometrics Lab, CyLab Carnegie Mellon University Pittsburgh, PA 15213 jheo@cmu.edu, msavvid@ri.cmu.edu Abstract
More informationCHAPTER 3 PRINCIPAL COMPONENT ANALYSIS AND FISHER LINEAR DISCRIMINANT ANALYSIS
38 CHAPTER 3 PRINCIPAL COMPONENT ANALYSIS AND FISHER LINEAR DISCRIMINANT ANALYSIS 3.1 PRINCIPAL COMPONENT ANALYSIS (PCA) 3.1.1 Introduction In the previous chapter, a brief literature review on conventional
More informationEar Recognition based on 2D Images
Ear Recognition based on 2D Images Li Yuan, Zhi-chun Mu Abstract-Research of ear recognition and its application is a new subject in the field of biometrics authentication. Ear normalization and alignment
More informationBetter than best: matching score based face registration
Better than best: based face registration Luuk Spreeuwers University of Twente Fac. EEMCS, Signals and Systems Group Hogekamp Building, 7522 NB Enschede The Netherlands l.j.spreeuwers@ewi.utwente.nl Bas
More informationThe Quotient Image: Class Based Recognition and Synthesis Under Varying Illumination Conditions
The Quotient Image: Class Based Recognition and Synthesis Under Varying Illumination Conditions Tammy Riklin-Raviv and Amnon Shashua Institute of Computer Science, The Hebrew University, Jerusalem 91904,
More informationFACE RECOGNITION USING INDEPENDENT COMPONENT
Chapter 5 FACE RECOGNITION USING INDEPENDENT COMPONENT ANALYSIS OF GABORJET (GABORJET-ICA) 5.1 INTRODUCTION PCA is probably the most widely used subspace projection technique for face recognition. A major
More informationSIFT - scale-invariant feature transform Konrad Schindler
SIFT - scale-invariant feature transform Konrad Schindler Institute of Geodesy and Photogrammetry Invariant interest points Goal match points between images with very different scale, orientation, projective
More informationImage Enhancement Techniques for Fingerprint Identification
March 2013 1 Image Enhancement Techniques for Fingerprint Identification Pankaj Deshmukh, Siraj Pathan, Riyaz Pathan Abstract The aim of this paper is to propose a new method in fingerprint enhancement
More informationSegmentation and Tracking of Partial Planar Templates
Segmentation and Tracking of Partial Planar Templates Abdelsalam Masoud William Hoff Colorado School of Mines Colorado School of Mines Golden, CO 800 Golden, CO 800 amasoud@mines.edu whoff@mines.edu Abstract
More informationAutomatized & Interactive. Muscle tissues characterization using. Na MRI
Automatized & Interactive Human Skeletal Muscle Segmentation Muscle tissues characterization using 23 Na MRI Noura Azzabou 30 April 2013 What is muscle segmentation? Axial slice of the thigh of a healthy
More informationAdding Curvature to Minimum Description Length Shape Models
Adding Curvature to Minimum Description Length Shape Models Hans Henrik Thodberg and Hildur Olafsdottir Informatics & Mathematical Modelling Technical University of Denmark 2800 Lyngby, Denmark hht@imm.dtu.dk
More informationA Model Based Approach for Expressions Invariant Face Recognition
A Model Based Approach for Expressions Invariant Face Recognition Zahid Riaz, Christoph Mayer, Matthias Wimmer, Michael Beetz, and Bernd Radig Department of Informatics, Technische Universität München
More informationModel Based Perspective Inversion
Model Based Perspective Inversion A. D. Worrall, K. D. Baker & G. D. Sullivan Intelligent Systems Group, Department of Computer Science, University of Reading, RG6 2AX, UK. Anthony.Worrall@reading.ac.uk
More informationA Statistical Consistency Check for the Space Carving Algorithm.
A Statistical Consistency Check for the Space Carving Algorithm. A. Broadhurst and R. Cipolla Dept. of Engineering, Univ. of Cambridge, Cambridge, CB2 1PZ aeb29 cipolla @eng.cam.ac.uk Abstract This paper
More informationGlobal fitting of a facial model to facial features for model based video coding
Global fitting of a facial model to facial features for model based video coding P M Hillman J M Hannah P M Grant University of Edinburgh School of Engineering and Electronics Sanderson Building, King
More informationPerformance Evaluation Metrics and Statistics for Positional Tracker Evaluation
Performance Evaluation Metrics and Statistics for Positional Tracker Evaluation Chris J. Needham and Roger D. Boyle School of Computing, The University of Leeds, Leeds, LS2 9JT, UK {chrisn,roger}@comp.leeds.ac.uk
More informationA Hierarchical Statistical Framework for the Segmentation of Deformable Objects in Image Sequences Charles Kervrann and Fabrice Heitz IRISA / INRIA -
A hierarchical statistical framework for the segmentation of deformable objects in image sequences Charles Kervrann and Fabrice Heitz IRISA/INRIA, Campus Universitaire de Beaulieu, 35042 Rennes Cedex,
More informationLecture 8 Object Descriptors
Lecture 8 Object Descriptors Azadeh Fakhrzadeh Centre for Image Analysis Swedish University of Agricultural Sciences Uppsala University 2 Reading instructions Chapter 11.1 11.4 in G-W Azadeh Fakhrzadeh
More informationAutomatic Modelling Image Represented Objects Using a Statistic Based Approach
Automatic Modelling Image Represented Objects Using a Statistic Based Approach Maria João M. Vasconcelos 1, João Manuel R. S. Tavares 1,2 1 FEUP Faculdade de Engenharia da Universidade do Porto 2 LOME
More informationDigital Image Processing Chapter 11: Image Description and Representation
Digital Image Processing Chapter 11: Image Description and Representation Image Representation and Description? Objective: To represent and describe information embedded in an image in other forms that
More information