A Dynamic Curvature Based Approach for Facial Activity Analysis in 3D Space

Shaun Canavan, Yi Sun, Xing Zhang, and Lijun Yin
Department of Computer Science, State University of New York at Binghamton

Abstract

This paper presents a novel dynamic curvature based approach (dynamic shape-index based approach) for 3D face analysis. This method is inspired by the idea of 2D dynamic texture and 3D surface descriptors. The dynamic texture (DT) based approaches [30][31][32] encode and model the local texture features in the temporal axis, and have achieved great success in applications of 2D facial expression recognition. In this paper, we propose a so-called Dynamic Curvature (DC) approach for 3D facial activity analysis. To do so, the 3D dynamic surface is described by its surface curvature-based shape-index information. The surface features are characterized in local regions along the temporal axis. A dynamic curvature descriptor is constructed from local regions as well as temporal domains. To locate the local regions, we also applied a 3D tracking model based method for detecting and tracking 3D features across 3D dynamic sequences. Our method is validated through our experiment on 3D facial activity analysis for distinguishing neutral vs. non-neutral expressions, prototypic expressions, and their intensities.

Keywords: dynamic curvature, face analysis, 3D facial expressions, dynamic texture.

1. Introduction

Facial activity analysis using 3D videos has become an intensified research topic in recent years [14][27][28][29]. 3D representation of real life objects allows for a more realistic behavior analysis and understanding. However, it is difficult to process the data in a 3D dynamic space. The major challenges lie in the difficulties of 3D data registration, 3D feature extraction, and 3D data description. In this paper, we investigate approaches for effective 3D feature representations in order to characterize the dynamic geometric features across time for facial activity analysis. Dynamic Texture (DT) is an effective method for appearance-based facial analysis from consecutive video frames [30].
Some existing approaches to represent and extract dynamic textures were based on optical flow [34], motion history images [33], volume local binary patterns [32], and free form deformation [31]. Dynamic texture based methods have been successfully used for applications in facial expression recognition [32][33][34]. However, they are essentially 2D-based approaches with limitations under various imaging conditions (e.g., illuminations, poses, etc.). Motivated by the dynamic texture approaches from 2D videos, we propose a new approach to describe the 3D facial activity in 3D videos, namely dynamic curvature in a 3D dynamic space for facial activity analysis. We segment the 3D facial meshes into several isolated local regions based on facial actions. Then, the histograms of shape-index from curvatures across multi-frame geometric surfaces are concatenated to form a unique descriptor - dynamic curvature - for 3D facial behavior representation. Such a descriptor, which represents the temporal dynamics of the facial surface, will be input to a classifier (e.g., SVM) for further classification of facial activities (e.g., expressions, identities, etc.). In order to segment the facial regions, it is critical to detect and track facial features across 3D geometric sequences. While research in 2D modality based tracking has produced a number of successful and widely used algorithms [10][35][36][9][11][4], research on 3D modality based analysis still faces the challenges of geometric landmark detection, mesh registration, motion tracking, and data representation. Traditionally, feature detection in 3D geometric space was performed by registration or 2D-to-3D mapping methods on static models [5][6][1][12][2][13][7][8]. In this paper, we apply a tracking model constructed from a temporal 3D point distribution for this task. We will evaluate the performance through an application for facial activity classification: neutral vs. non-neutral; six prototypic expressions; and the status of expression activity in low intensity vs. in high intensity.
The rest of the paper is organized as follows: Section 2 provides a brief description of our tracking model. Section 3 describes dynamic curvature based 3D feature
representation. Section 4 reports experiments and evaluations on the feature point detection and dynamic curvature classification for facial activity recognition. Finally, discussion and conclusion are given in Section 5.

2. 3D Shape Tracking Model

3D range data exhibits shapes of facial surfaces explicitly. This shape representation provides a direct match with the 3D active shape model due to its inherent and explicit shape representation in 3D space. We present a 3D shape tracking model to describe the shape variation across the 3D sequences. To construct a shape model, we apply a similar representation to the point distribution model to describe the 3D shape, in which a parameterized model S is constructed from 83 landmark points on each model frame. An example of landmark points is shown in Figure 1. Such a set of feature points (shape vector) is aligned by a Procrustes analysis method [9]. Principal component analysis (PCA) is then performed on the aligned feature vectors to estimate the different variations of all the training shape data. To do so, each shape's deviation from the mean and the covariance matrix are calculated, resulting in the modes of variation, V, of the training shapes along the principal axes. Given V and a vector of weights, w, that controls the shape, we can approximate any shape from the training data by:

S = s̄ + Vw    (1)

where s̄ is the mean shape. The vector of weights, w, allows us to generate new samples by varying its parameters within certain limits. When approximating a new shape S, the point distribution model is constrained by both the variations in shape and the shapes of neighbor frames. Figure 1 shows an example of the shape model and the tracked 83 feature points. The detailed algorithm is described in [37].

Figure 1: Example of tracked 83 feature points on a surprise expression sequence.

3. Dynamic Curvature Based Approach

Given the detected facial features and the resulting local regions, the shape (curvature) change along the 3D model sequences can be observed in individual local regions.
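As a rough illustration of the shape model in Section 2, the following sketch (on hypothetical toy data; the actual tracker uses 83 landmarks per frame and the tracking algorithm of [37], which is not reproduced here) builds the modes of variation V by PCA over aligned shape vectors and generates a new shape via Eq. (1):

```python
import numpy as np

def build_pdm(shapes, n_modes):
    """Build a point-distribution model from aligned training shapes.

    shapes: (num_shapes, 3 * num_landmarks) array of flattened,
            Procrustes-aligned 3D landmark sets.
    Returns the mean shape s_bar and the top n_modes variation modes V.
    """
    mean_shape = shapes.mean(axis=0)
    deviations = shapes - mean_shape
    # Eigenvectors of the covariance matrix give the modes of variation.
    cov = np.cov(deviations, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]       # largest variance first
    V = eigvecs[:, order[:n_modes]]         # (3*num_landmarks, n_modes)
    return mean_shape, V

def approximate_shape(mean_shape, V, w):
    """Eq. (1): S = s_bar + V w, for a weight vector w."""
    return mean_shape + V @ w

# Toy usage with random "aligned" shapes (5 landmarks -> 15 dims).
rng = np.random.default_rng(0)
training = rng.normal(size=(20, 15))
s_bar, V = build_pdm(training, n_modes=3)
new_shape = approximate_shape(s_bar, V, w=np.array([0.5, -0.2, 0.1]))
print(new_shape.shape)  # (15,)
```

Setting w = 0 reproduces the mean shape; in practice each weight is kept within limits (commonly a few standard deviations of its mode) so that generated shapes stay plausible.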
Inspired by the recent work on facial analysis from static curvature based approaches [2] and dynamic texture based approaches [31][32], we propose a so-called Dynamic Curvature based descriptor for facial activity classification. The visual texture of an object is the reflection of its physical surface and lighting reflectance. The physical surface of an object can be characterized by its surface descriptors, e.g., primitive curvature type, shape-index, normal, etc. Given this observation, we extend the concept of Dynamic Texture in 2D space to the concept of Dynamic Curvature in 3D space (Dynamic Shape-Index). Unlike dynamic texture based approaches, which require building a rotation/scale invariant vector for feature representation, we use 3D shape descriptors (e.g., primitive curvature types, shape index) as our feature representation. Curvature is a good representation of local surface geometric characteristics. It is invariant to rigid transformations like shift or rotation. Facial surface change reflects facial expression change. By encoding the surface changes of local facial regions using the dynamic curvature representation, we are able to capture the temporal dynamics of the facial surface for expression classification. After the model regions have been localized, the regional shape is described and quantified by the curvature based shape-index. The dynamic curvature descriptor is then generated for classification.

3.1. Shape description and quantization

Shape index is a quantitative measure of the shape of a surface at a point [15][16]. It gives a numerical value to a shape, thus making it possible to mathematically compare shapes and categorize them. Shape index is defined as follows:

S = (2/π) · arctan((k2 + k1) / (k2 − k1))    (2)

where k1 and k2 are the principal (minimum and maximum) curvatures of the surface, with k2 ≥ k1. With this definition, all shapes can be mapped onto the range [-1.0, 1.0]. Every distinct surface shape corresponds to a unique shape index value. The shape index is computed for each point on the model.
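For illustration, Eq. (2) can be computed directly from the two principal curvatures. This is only a sketch of the formula itself, not the paper's cubic-polynomial fitting pipeline that estimates k1 and k2 from the Weingarten matrix; the eps guard for the umbilic case (k1 = k2) is an added assumption:

```python
import numpy as np

def shape_index(k1, k2, eps=1e-12):
    """Eq. (2): S = (2/pi) * arctan((k2 + k1) / (k2 - k1)), with k2 >= k1.

    Returns a value in [-1, 1]. For umbilic points (k1 == k2) the ratio
    diverges; a small eps in the denominator pushes S toward +/-1.
    """
    # Enforce the convention k2 >= k1 regardless of argument order.
    k1, k2 = np.minimum(k1, k2), np.maximum(k1, k2)
    return (2.0 / np.pi) * np.arctan((k2 + k1) / (k2 - k1 + eps))

# Spherical cap (both curvatures positive and equal) maps near +1,
# a cup near -1, and a symmetric saddle (k1 = -k2) to 0.
print(shape_index(1.0, 1.0), shape_index(-1.0, -1.0), shape_index(-1.0, 1.0))
```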
We use a cubic polynomial fitting approach to compute the eigenvalues of the Weingarten matrix [15], resulting in the minimum and maximum curvatures (k1, k2). The shape index scale is normalized to [0, 1], and encoded as a continuous range of grey-level values between 1 and 255. To quantify the curvature based measurement for an efficient description of a model, we transform the shape index scale to a set of nine quantization values from concave to convex, namely (1) Cup (0); (2) Trough (0.125); (3) Rut Saddle (0.25); (4) Rut (0.375); (5) Saddle (0.5); (6) Saddle Ridge (0.625); (7) Ridge (0.75); (8) Dome (0.875); and (9) Cap (1), as shown in Figure 2.

Figure 2: Shape index quantization to nine values: Cup (0), Trough (0.125), Rut Saddle (0.25), Rut (0.375), Saddle (0.5), Saddle Ridge (0.625), Ridge (0.75), Dome (0.875), and Cap (1).

3.2. Dynamic Curvature Based Descriptor

Up to this stage, each vertex on the 3D face model has been assigned a curvature-based label (i.e., quantized shape index) based on the above shape analysis. Since each facial model is segmented into 8 sub-regions (e.g., eyes, nose, mouth, cheek, etc., as shown in Figure 3) from the set of tracked feature points, we are able to get the curvature distribution of each sub-region and combine them into a vector. To do so, we construct the following histograms to form a dynamic curvature descriptor:

(1) Regional Histogram of Intra-frame: Given k facial frames and n regions for each individual frame, the histogram of shape-index of each region i of individual frame j is derived to form a histogram vector h_i^j, where i = 1, ..., n and j = 1, ..., k:

h_i^j = [c1/c, c2/c, ..., c9/c]    (3)

where c is the total number of vertices of local region i in a single frame j, and c1, ..., c9 are the numbers of vertices with shape-index scales 1, ..., 9 in that region, respectively.

(2) Regional Histogram of Inter-frame: In each region i, the statistics of shape-index are counted over all k frames as a whole to form a second histogram vector h̄_i^k:

h̄_i^k = [c̄1/c̄, c̄2/c̄, ..., c̄9/c̄]    (4)

where c̄ is the total number of vertices of local region i across all k frames, and c̄1, ..., c̄9 are the numbers of vertices with shape-index scales 1, ..., 9 in that region over all k frames, respectively.

(3) Local Temporal Histogram: For each sub-region i, we concatenate the intra-frame histograms h_i^j across the k frames along the temporal axis together with the inter-frame histogram h̄_i^k to formulate a local temporal histogram vector:

H_i^k = [h_i^1, h_i^2, ..., h_i^k, h̄_i^k]    (5)

(4) Global Temporal Histogram - Dynamic Curvature Descriptor: For the facial model across k frames, we combine the local temporal histograms of all n regions to generate a global descriptor (the so-called dynamic curvature descriptor), which will be used for subsequent classification:

H_D^k = [H_1^k, H_2^k, ..., H_n^k]    (6)

where n is the number of local regions and k is the number of frames (n = 8 and k = 3 in this implementation).

Figure 3: Illustration of the Dynamic Curvature descriptor based on eight local regions.

3.3. Classification

After the dynamic curvature descriptor is created for the 3D video sequences, we apply LDA for dimension reduction, and then use Support Vector Machine (SVM) classifiers to learn predictive power. Traditional SVM is used for binary classification. How to effectively extend it to the multi-class classification problem is still an on-going research issue. One efficient way is to construct a multi-class classifier by combining several binary classifiers. The one-against-all SVM is constructed for each class by discriminating that class against the remaining M−1 classes, so the number of SVMs used in this approach is M. A test pattern x is classified using the winner-takes-all decision strategy, i.e., the class with the maximum value of the discriminant function f(x) is the class that x belongs to.
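To make the descriptor construction of Section 3.2 (Eqs. (3)-(6)) concrete, the following sketch assumes each vertex already carries a quantized shape-index label in {1, ..., 9} and a region/frame assignment; the list-of-lists data layout here is a hypothetical choice, not the paper's implementation:

```python
import numpy as np

N_BINS = 9  # nine quantized shape-index values (Cup ... Cap)

def region_hist(labels):
    """Normalized 9-bin histogram of quantized labels (Eqs. (3)/(4))."""
    counts = np.bincount(labels, minlength=N_BINS + 1)[1:]  # labels in 1..9
    return counts / max(labels.size, 1)

def dynamic_curvature_descriptor(labels_by_region_frame):
    """Eqs. (3)-(6): build the global dynamic curvature descriptor.

    labels_by_region_frame: list over n regions, each a list over k frames
    of 1-D integer arrays holding the per-vertex labels in {1, ..., 9}.
    """
    global_parts = []
    for frames in labels_by_region_frame:
        intra = [region_hist(f) for f in frames]        # Eq. (3), per frame
        inter = region_hist(np.concatenate(frames))     # Eq. (4), all frames
        local = np.concatenate(intra + [inter])         # Eq. (5)
        global_parts.append(local)
    return np.concatenate(global_parts)                 # Eq. (6)

# Toy usage: n = 2 regions, k = 3 frames, random labels per vertex.
rng = np.random.default_rng(1)
data = [[rng.integers(1, 10, size=50) for _ in range(3)] for _ in range(2)]
descriptor = dynamic_curvature_descriptor(data)
print(descriptor.shape)  # n * (k + 1) * 9 = 2 * 4 * 9 = 72 entries
```

With the paper's setting of n = 8 regions and k = 3 frames, the descriptor would have 8 × (3 + 1) × 9 = 288 entries before LDA reduction.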
Alternatively, the one-against-one SVM method, also known as the one-versus-one method, constructs an SVM for every pair of classes by training it to discriminate the two classes. Thus, the number of SVMs used in this approach is M(M−1)/2. A max-wins voting strategy is used to determine the class that a test sample belongs to; that is, the class with the maximum number of votes for the test sample is assigned to the sample. There are other existing multi-class SVM algorithms, e.g., the directed acyclic graph SVM (DAGSVM) [17][18], Weston's multi-class SVM [19], and Crammer's multi-class SVM [20]. However, considering the algorithm complexity and classification performance, we chose the one-against-all SVM for the classification task.

4. Experiments and Evaluation

4.1. Database

A public database, BU-4DFE [3], is used for our test. This is a 3D dynamic face model database, which contains 3D video sequences of six prototypic expressions of subjects. Each clip has neutral expressions and posed non-neutral expressions.

4.2. Facial Activity Classification

Inspired by the 2D dynamic texture based approach, which is capable of distinguishing different expressions, we extend the concept to a dynamic curvature based approach for handling 3D dynamic range model videos. One of the advantages is that the curvature based descriptor encodes the local surface shape information explicitly, thus being relatively robust to noise and pose changes. To verify such a new descriptor, we performed experiments on facial activity at three levels. First, we distinguish the facial activity between expressive faces (with non-neutral expressions) and non-expressive faces (with neutral appearances). Second, given the expressive face category, we apply the SVM (one-against-all) to classify the six prototypic expressions. Third, we further identify the intensity of each prototypic expression: either low intensity or high intensity. We used 60 subjects from BU-4DFE for our experiment. The experiment is subject-independent: we randomly choose 50 subjects for training and 10 subjects for testing.
The tests are based on a tenfold cross-validation approach, by which they are executed 10 times with different partitions to achieve a stable generalization recognition rate. The classifier used for all three-level experiments is the two-class SVM. The following are the results for the three-level facial activity classification.

First Level: Neutral vs. Non-Neutral. The confusion matrix is listed below in Table 1. The average recognition rate for separating neutral from non-neutral expressions is as high as 94.7%.

Table 1: Recognition rate for neutral/non-neutral expression
True\Estimate   Neutral   Non-Neutral
Neutral         95.1%     4.9%
Non-Neutral     5.7%      94.3%

Second Level: Six prototypic expressions. From the non-neutral group of video segments, we further classify six prototypic expressions: anger, disgust, sadness, happiness, fear, and surprise. The confusion matrix for distinguishing the six universal expressions is listed in Table 2. The average recognition rate is 84.8%.

Table 2: Recognition rate for six universal expressions (%)
True\Estimate  Anger  Disgust  Fear   Happy  Sad   Surprise
Anger          83.6   5.5      4.9    0      3.8   2.2
Disgust        5.1    83.2     5.8    0      3.3   2.6
Fear           1.7    3.2      81.3   7.5    4.2   2.1
Happy          1.1    2.1      0      92.1   0     4.7
Sad            4.2    8.6      9.2    0      78    0
Surprise       1.1    1.9      3.6    3.9    0     89.5

Third Level: Low Intensity vs. High Intensity. For each recognized expression, the corresponding 3D video segments are further classified by the binary SVM to separate the degree of the expression: low intensity or high intensity. Below are the summary of the average rates (Table 3) and the individual confusion matrices (Table 4).

Table 3: Average separation rate of low/high intensities
Angry   Disgust  Fear    Happy   Sad     Surprise
80.6%   83.4%    79.1%   91.2%   78.4%   90.7%

Table 4: Confusion matrix of individual expressions for intensity (low/high) separation
Expression  True\Estimate  Low     High
Angry       Low            81.8%   18.2%
            High           20.6%   79.4%
Disgust     Low            81%     19%
            High           14.2%   85.8%
Fear        Low            80.1%   19.9%
            High           21.9%   78.1%
Happy       Low            86.1%   13.9%
            High           3.7%    96.3%
Sad         Low            79.4%   20.6%
            High           23.6%   77.4%
Surprise    Low            85.5%   14.5%
            High           4.1%    95.9%

As observed from the above results, the expression intensities of happiness and surprise are relatively easier to separate than the others due to their physically large movements of the mouth and eyes, while sadness, fear, and anger involve relatively small movements of these areas.

4.3. Comparison

We also conducted experiments with both our dynamic curvature based approach and other methods for recognizing expressions with high and low intensities, respectively. We chose recent and classic work for comparison, including 3D dynamic HMM [13][23], 3D dynamic Motion Units [23], 3D static surface primitive feature distribution [2], 2D dynamic motion units [22], 2D dynamic texture [32], and 2D static Gabor wavelets [21]. As shown in Table 5, the dynamic curvature based approach outperforms the other approaches in both the low intensity and high intensity cases. Its performance is close to the 3D dynamic HMM based approach, where spatial-temporal features were described in the HMM framework.

Table 5: Recognition rates for low intensity (LI) expressions and high intensity (HI) expressions using different approaches, respectively.
Methods                               Low (LI)  High (HI)
3D dynamic curvature (our approach)   75.1%     86.3%
3D dynamic (HMM) [13][23]             72.4%     83.7%
3D dynamic (MU based) [23]            57.3%     72.1%
3D static (PSFD) [2]                  52.8%     71.7%
2D dynamic (MU based) [22]            56.6%     69.2%
2D dynamic (DT based) [32]            70.8%     81.5%
2D static (Gabor) [21]                50.4%     68.6%

5. Conclusion and Future Work

In this paper, we presented a new 3D feature representation using a so-called dynamic curvature based approach for facial activity analysis. The experiments have shown the feasibility of such a new descriptor for 3D facial activity analysis. We have evaluated and validated its utility for dynamic curvature based expression classification in terms of neutral vs. non-neutral expressions, the various prototypic expressions, and their low/high intensities.
In future work, we plan to develop a more robust method for estimating the direction of motion for the landmarks, including 3D edge information based on differences between vertex normal values. We will also validate our method through the application of 3D facial action unit detection and segmentation of dynamic expression sequences. The proposed 3D Dynamic Curvature based approach is in principle applicable (or extendible) to any other objects with 3D/4D mesh representations. Our future work will also include the evaluation of 3D feature detection and the Dynamic Curvature descriptor on spontaneous expression data and other databases, such as [24][25][26].

6. Acknowledgement

This material is based upon work supported in part by the NSF (IIS-1051103, IIS-0541044).

7. References

[1] F. Steinke, B. Schölkopf, and V. Blanz, Learning dense 3D correspondence, Proc. 20th Annual Conf. on Neural Information Processing Systems, 2006.
[2] J. Wang, L. Yin, X. Wei, and Y. Sun, 3D facial expression recognition based on primitive surface feature distribution, CVPR 2006.
[3] L. Yin, X. Chen, Y. Sun, T. Worm, and M. Reale, A high-resolution 3D dynamic facial expression database, IEEE Intl. Conference on Automatic Face and Gesture Recognition, 2008.
[4] T. Ojala, M. Pietikäinen, and T. Mäenpää, Multiresolution gray-scale and rotation invariant texture classification with local binary patterns, IEEE Trans. Pattern Analysis and Machine Intelligence, 24(7):971-987, 2002.
[5] P. Besl and N. McKay, A method for registration of 3D shapes, IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239-256, Feb. 1992.
[6] P. Dalal, B. C. Munsell, S. Wang, J. Tang, and K. Oliver, A fast 3D correspondence method for statistical shape modeling, CVPR 2007.
[7] P. Nair and A. Cavallaro, 3-D face detection, landmark localization, and registration using a point distribution model, IEEE Trans. Multimedia, 11(4):611-623, 2009.
[8] P. Perakis, G. Passalis, T. Theoharis, G. Toderici, and I. A. Kakadiaris, Partial matching of interpose 3D facial data for face recognition, Proc. 3rd IEEE BTAS, pp. 439-466.
[9] T.
Cootes, C. Taylor, D. Cooper, and J. Graham, Active shape models - their training and application, Computer Vision and Image Understanding, 61:18-23, 1995.
[10] T. F. Cootes, G. J. Edwards, and C. J. Taylor, "Active appearance models", IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 23, no. 6, pp. 681-685, June 2001.
[11] V. Blanz and T. Vetter, A morphable model for the synthesis of 3D faces, Computer Graphics Proc. SIGGRAPH, 1999.
[12] X. Lu and A. K. Jain, Automatic feature extraction for multiview 3D face recognition, Proc. IEEE Conf. on Automatic Face and Gesture Recognition, Southampton, UK, 2006, pp. 585-590.
[13] Y. Sun, X. Chen, M. Rosato, and L. Yin, "Tracking vertex flow and model adaptation for 3D spatio-temporal face analysis", IEEE Transactions on Systems, Man, and Cybernetics - Part A, 40(3):461-474, May 2010.
[14] Z. Zeng, M. Pantic, G. Roisman, and T. Huang, A survey of affect recognition methods: audio, visual, and spontaneous expressions, IEEE Trans. on PAMI, 31(1):39-58, 2009.
[15] C. Dorai and A. Jain, COSMOS - a representation scheme for 3D free-form objects, IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, no. 10, 1997.
[16] J. Koenderink and A. van Doorn, Surface shape and curvature scales, Image and Vision Computing, vol. 10, no. 8, Oct. 1992, pp. 557-564.
[17] U. H.-G. Kreßel, Pairwise classification and support vector machines, in B. Schölkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in Kernel Methods: Support Vector Learning, pages 255-268, The MIT Press, Cambridge, MA, 1999.
[18] J. C. Platt, N. Cristianini, and J. Shawe-Taylor, Large margin DAGs for multiclass classification, in S. A. Solla, T. K. Leen, and K.-R. Müller, editors, Advances in Neural Information Processing Systems 12, pages 547-553, The MIT Press, Cambridge, MA, 2000.
[19] J. Weston and C. Watkins, Multi-class support vector machines, Technical Report CSD-TR-98-04, Department of Computer Science, Royal Holloway, University of London, Egham, TW20 0EX, UK, 1998.
[20] K. Crammer and Y. Singer, On the algorithmic implementation of multi-class SVMs, Journal of Machine Learning Research, 2:265-292, 2001.
[21] M. Lyons, J. Budynek, and S. Akamatsu, Automatic classification of single facial images, IEEE Trans.
on PAMI, 21:1357-1362, 1999.
[22] I. Cohen, N. Sebe, A. Garg, L. Chen, and T. Huang, Facial expression recognition from video sequences: temporal and static modeling, Journal of CVIU, 91(1), 2003.
[23] Y. Sun and L. Yin, Facial expression recognition based on 3D dynamic range model sequences, 10th European Conference on Computer Vision (ECCV08), Marseille, France, October 2008.
[24] D. Cosker, E. Krumhuber, and A. Hilton, A FACS valid 3D dynamic action unit database with applications to 3D dynamic morphable facial modeling, IEEE ICCV'11, 2011, pp. 2296-2303.
[25] G. Stratou, A. Ghosh, P. Debevec, and L.-P. Morency, Effect of illumination on automatic expression recognition: a novel 3D relightable facial database, 9th International Conference on Automatic Face and Gesture Recognition (FGR 2011), Santa Barbara, California, 2011.
[26] A. Savran, N. Alyuz, H. Dibeklioglu, O. Celiktutan, B. Gokberk, B. Sankur, and L. Akarun, Bosphorus database for 3D face analysis, Proc. First COST 2101 Workshop on Biometrics and Identity Management, Roskilde University, Denmark, 2008, pp. 47-56.
[27] G. Sandbach, S. Zafeiriou, M. Pantic, and D. Rueckert, A dynamic approach to the recognition of 3D facial expressions and their temporal models, IEEE International Conference on Automatic Face and Gesture Recognition (FGR), 2011.
[28] T. Fang, X. Zhao, O. Ocegueda, S. K. Shah, and I. A. Kakadiaris, 3D facial expression recognition: a perspective on promises and challenges, IEEE International Conference on Automatic Face and Gesture Recognition (FGR), 2011.
[29] V. Le, H. Tang, and T. Huang, Expression recognition from 3D dynamic faces using robust spatio-temporal shape features, IEEE International Conference on Automatic Face and Gesture Recognition (FGR), 2011.
[30] G. Doretto, A. Chiuso, Y. Wu, and S. Soatto, Dynamic textures, International Journal of Computer Vision, 51(2):91-109, 2003.
[31] S. Koelstra, M. Pantic, and I. Patras, A dynamic texture-based approach to recognition of facial actions and their temporal models,
IEEE Trans. on PAMI, 32(11):1940-1954, 2010.
[32] G. Zhao and M. Pietikäinen, Dynamic texture recognition using local binary patterns with an application to facial expressions, IEEE Trans. on PAMI, 29(6):915-928, 2007.
[33] M. Valstar, M. Pantic, and I. Patras, Motion history for facial action detection in video, Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, pages 635-640, 2004.
[34] D. Chetverikov and R. Péteri, A brief survey of dynamic texture description and recognition, 4th Conference on Computer Recognition Systems, pages 17-26, 2005.
[35] S. Lucey, I. Matthews, C. Hu, Z. Ambadar, F. De la Torre Frade, and J. Cohn, AAM derived face representations for robust facial action recognition, IEEE Inter. Conf. on Automatic Face and Gesture Recognition, 2006.
[36] K. Ramnath, S. Koterba, J. Xiao, C. Hu, I. Matthews, S. Baker, J. Cohn, and T. Kanade, Multi-view AAM fitting and construction, International Journal of Computer Vision, 2007.
[37] S. Canavan and L. Yin, 3D feature tracking using a deformable shape model, Technical Report, Binghamton University, Feb. 2012.