Recognition of Facial Expressions Based on Salient Geometric Features and Support Vector Machines


The final publication is available at Springer.

Deepak Ghimire 1, Joonwhoan Lee 2,*, Ze-Nian Li 3, Sunghwan Jeong 1

1 Korea Electronics Technology Institute, Jeonju-si, Jeollabuk-do, Rep. of Korea; E-Mails: (deepak, shjeong)@keti.re.kr
2 Division of Computer Engineering, Chonbuk National University, Jeonju-si, Jeollabuk-do, Rep. of Korea; E-Mail: chlee@jbnu.ac.kr
3 School of Computing Science, Simon Fraser University, Burnaby, B.C., Canada; E-Mail: li@cs.sfu.ca

* Author to whom correspondence should be addressed; E-Mail: chlee@jbnu.ac.kr

Abstract: Facial expressions convey nonverbal cues which play an important role in interpersonal relations, and they are widely used in the behavioral interpretation of emotions, in cognitive science, and in the study of social interactions. In this paper we analyze different ways of representing geometric features and present a fully automatic facial expression recognition (FER) system based on salient geometric features. In a geometric feature-based FER approach, the first important step is to initialize and track a dense set of facial points as the expression evolves over time in consecutive frames. In the proposed system, facial points are initialized using the elastic bunch graph matching (EBGM) algorithm and tracking is performed with the Kanade-Lucas-Tomasi (KLT) tracker. We extract geometric features from the points, lines, and triangles composed of the tracked facial points. The most discriminative line and triangle features are selected using feature-selective multi-class AdaBoost with the help of extreme learning machine (ELM) classification. Finally, the geometric features for FER are extracted from the boosted lines and triangles composed of facial points. The recognition accuracies obtained with point, line, and triangle features are analyzed independently. The performance of the proposed FER system is evaluated on three different data sets, namely the CK+, MMI, and MUG facial expression data sets.

Keywords: facial points, geometric features, AdaBoost, extreme learning machine, support vector machines, facial expression recognition

1. Introduction

The tracking and recognition of facial activities from still images or video sequences has attracted great attention in the computer vision field. Among these tasks, the recognition of facial expressions has been an active research topic for the last decade. Facial expressions are among the most universal forms of body language. A facial expression arises from one or more motions or positions of the muscles beneath the skin of the face, and these movements convey the emotional state of an individual to observers. Psychological research has shown that facial expressions are the most expressive way in which humans display emotions [1]. In general, researchers divide facial expressions into six basic categories: anger, disgust, fear, happiness, sadness, and surprise, which are also called the primary emotions [2]. Beyond these are advanced emotions such as frustration and confusion, sentiments (positive, negative, and neutral), composites of two or more emotions, facial action units, and so on.

In the digital age, it is no secret that social relationships are changing. With all the advantages our digital devices have brought us, they are also affecting our ability to empathize with others. Recent research shows that, due to excessive use of digital devices, young people are losing their ability to read other people's emotions or feelings [3]. It is therefore important to recognize emotions via facial expressions accurately and in real time. Facial emotion recognition also has applications in human-computer interaction, clinical studies, advertising, action recognition for computer games, and more.

An automatic FER system generally consists of three steps [4]: (a) accurate localization of the face in an image or video, (b) facial feature extraction and representation, and (c) recognition of the facial expression through feature classification. In this paper we focus on the study of salient geometric feature extraction for recognizing the six basic prototypical facial expressions. Fig. 1 shows the overall block diagram of the proposed FER system. First, face detection, feature point initialization, and tracking are performed. The Viola and Jones Haar-like feature based AdaBoost scheme [5] is used for face and eye detection, whereas the EBGM algorithm [6] and the KLT tracker [7] are used for feature point initialization and for tracking in consecutive video frames, respectively. A face graph normalization scheme is proposed to bring all face graphs into a standard shape before feature selection and extraction. Three different kinds of geometric features are extracted: 1) single facial point coordinate displacement features, 2) line features formed by considering two points at a time, and 3) triangle features formed by considering three facial points at a time. Prominent lines and triangles are selected using multi-class AdaBoost before feature extraction; details of this procedure are discussed in Section 3. Finally, facial expressions are recognized using SVMs learned independently on the point, line, and triangle based geometric features. We analyze the different types of geometric feature extraction and present the recognition results on different data sets.

The main contributions of this paper are summarized as follows:
1. We propose a fully automatic sequence-based FER system using salient geometric feature representations.
2. We study facial geometric features in three different forms (point, line, and triangle); their representation power for discriminating the basic facial expressions is compared and validated on three publicly available FER data sets.

3. We show that the triangle based representation outperforms both the line and the point based representations, and that the line based representation outperforms the point based representation. Our study therefore shows that not only the movement of facial features over time but also the inter-relation between facial feature movements within a face is important in discriminating facial expressions.
4. We conduct extensive FER experiments on three widely used facial expression data sets to demonstrate the efficiency of the proposed method. Experimental results show that our method is superior to most state-of-the-art FER systems.

The rest of this paper is organized as follows. A brief review of work in the field of FER is given in Section 2. Face detection, facial point initialization, tracking, and normalization of the face graph, as well as the different types of geometric feature extraction, are described in Section 3. Section 4 describes the analysis and selection of the different geometric features from the tracking results of the dense set of facial points. The experimental setup and data set descriptions are given in Section 5. Experimental results on different publicly available benchmark facial expression data sets are presented in Section 6. Finally, the conclusion of the proposed FER system is given in Section 7.

Fig. 1. Architecture of the proposed facial expression recognition system.

2. Related Work

Several researchers have presented reviews on FER; early reviews can be found in [8, 9, 10], whereas recent reviews can be found in [11, 12].

The first review was published in 1994 by Samal and Iyengar [8], followed by [9] and [10] in 2000 and 2003, respectively. In [11], a survey of affect recognition methods covering audio, visual, and spontaneous expressions is given. It covers the discussion of emotion perception from a psychological perspective, an examination of available approaches for machine understanding of human affective behavior, a discussion of the collection and availability of emotion training data sets, and an outline of the scientific and engineering challenges in advancing human affect sensing technology. More recently, in 2012, a meta-review of the FER and analysis challenge was presented by Valstar et al. [12], in which the focus is on clarifying how far the field has come, identifying new goals, and providing baseline results for facial emotion recognition and analysis.

The approaches reported for FER can be classified into two main categories: a) template-based and b) feature-based. The template-based methods use 2-D or 3-D facial models as templates for expression information extraction. The feature-based methods use appearance-based or geometry-based features. Geometry-based features describe the shape of the face and its components, such as the mouth or the eyebrows, whereas appearance-based features describe the texture of the face caused by the expression. Among the appearance-based features, the local binary pattern (LBP) is widely used for recognizing facial expressions [13-17]. Similarly, local Gabor binary patterns [16], histograms of oriented gradients [18], Gabor wavelet representations [17], the scale invariant feature transform (SIFT) [19], non-negative matrix factorization (NMF) based texture features [20, 21], linear discriminant analysis (LDA) [22], independent component analysis (ICA) [22], etc., are also widely used appearance-based features for the recognition of facial expressions.

Most geometric feature-based approaches use the active appearance model (AAM), or a variation of it, to track a dense set of facial points [23, 24]. Alternatively, the EBGM algorithm, the KLT tracker, and similar methods are also used for facial key point detection and tracking [25, 26]. The locations of these facial landmarks are then used in different ways to extract facial features describing the shape of the face or the movement of facial key points as the expression evolves. Kotsia et al. [26] used the geometric displacement of certain selected Candide nodes, defined as the difference of the node coordinates between the first frame and the frame of greatest facial expression intensity, as geometric features for the recognition of the six basic facial expressions. Sung and Kim [27] introduced the Stereo AAM, which improves the fitting and tracking of standard AAMs by using multiple cameras to model the 3-D shape and rigid motion parameters. The active shape model (ASM) is used in [28] for modeling and tracking facial key points, and the facial expressions are recognized on a low-dimensional expression manifold. Pose-invariant FER based on a set of characteristic facial points extracted using AAMs is presented by Rudovic and Pantic [29]; a coupled scale Gaussian process regression (CSGPR) model is used for head-pose normalization. Ghimire and Lee [25] used the tracking results of 52 facial key points, modeled in the form of point and line features, for the recognition of facial expressions; the key geometric features were selected based on AdaBoost and the dynamic time warping (DTW) algorithm. Recently, in [30] and [31], the authors also utilized geometric features for the recognition of facial expressions. In [30], facial activities are characterized at three levels: at the bottom level, facial feature points are tracked using ASM; at the middle level, facial action units are defined; and finally, facial expressions are represented based on the detected action units.
Saeed et al. [31] use only eight facial key points to model the geometric structure of the face from a single face image for the recognition of facial expressions.

Several classifiers have also been investigated for building the recognition module for facial expressions, so FER techniques can also be categorized according to their recognition modules. The most common recognition modules are support vector machines (SVMs), hidden Markov models (HMMs), Gaussian mixture models (GMMs), and dynamic Bayesian networks (DBNs). Among them, [13, 16, 17, 24, 25, 26, 29, 31] use SVMs, HMMs are used in [32, 37, 38], GMMs are utilized in [35, 36], and [30, 33] use DBNs. Recently, sparse representation classification (SRC), which is a very successful face recognition technique [34], has also been used for FER [17]. In an SVM, class probabilities are calculated using an N-fold cross-validation technique; in other words, there is no direct probability estimation in an SVM. Therefore, in order to recognize facial expressions from video with an SVM, the temporal information should be embedded in the feature extraction process. GMMs are sensitive to noise and cannot model fast variations across consecutive frames. HMMs are mostly used to handle sequential data when frame-level features are used, which is their advantage over SVM- and GMM-like classifiers.

3. Geometry-based Facial Feature Extraction

Facial feature extraction attempts to find the most appropriate representation of the face image for recognition. In a geometric feature-based system, the facial points in a single image or in an image sequence are used in different ways to form a feature vector for the recognition of facial expressions. For example, the distances between feature points and the relative sizes of the major face components can be computed to form a feature vector. The feature points can also form a geometric graph representation of the face. Geometric features have their own advantages and disadvantages. The difficulty in a geometric feature based approach is to initialize and track the facial feature points accurately and in real time; any error in feature point initialization and tracking accumulates in the geometric feature extraction process. Image resolution, head pose, eyeglasses, the presence of a beard, etc. can also affect the feature point initialization and tracking process. However, once the feature points are initialized and tracked accurately, the geometric features extracted from the tracking results are robust to variations in scale, size, head orientation, face texture due to age variation, and so on.

In this section we present the method for facial feature point initialization and tracking. The different geometric feature extraction techniques, as well as the feature selection technique used to find the most discriminative geometric features for the recognition of facial expressions, are then described. The geometric features are extracted based on the points, lines, and triangles composed of facial key points in the video sequence.

3.1. Facial Feature Point Tracking and Graph Normalization

In the proposed method, the facial points are initialized and tracked automatically. Feature point initialization is performed using the EBGM algorithm, and tracking in consecutive frames is performed using the KLT tracker. Finally, the face graph is normalized in such a way that, for each facial expression sequence, the vertices of the initial face graph start from the same positions and evolve according to the movements of the facial feature points as a particular facial emotion evolves over time. For facial feature point initialization we use the EBGM implementation by Colorado State University (CSU), provided as a baseline algorithm for the comparison of face recognition algorithms [39]. Facial feature point localization in a novel image has two steps.

First, the location of a new feature point is estimated based on the known locations of the other feature points in the image; second, the estimate is refined by comparing the Gabor jet extracted from that image at the approximate location with the jets extracted from the same positions in the model images. To start the feature point localization process, the approximate locations of the two eyes are detected using the Haar-like feature based object detection algorithm [5]. Once the facial feature points are automatically initialized using the EBGM algorithm, we use a pyramidal variant of the well-known KLT tracker to track the 52 facial feature points in consecutive frames. The KLT algorithm tracks a set of feature points across the video frames; here it tracks the facial feature points in an image sequence containing the formation of a dynamic human facial expression from the neutral state to the fully expressive one. KLT tracking is faster than the elastic bunch graph, Gabor filter based tracking algorithm used in [25]. Fig. 2 shows the result of facial feature point tracking using the KLT tracker.

Fig. 2. An example of facial feature point tracking in a happy facial expression sequence using the KLT tracker.

Face graph normalization brings each facial point to a uniform coordinate position in the first frame of the video shot, and as the expression evolves the landmarks are displaced accordingly. Let (x_l^i, y_l^i) denote the position of the i-th feature point in the l-th frame of a facial expression sequence in the database. The tracking result of a single landmark is denoted by S^i and is defined by Eq. (1):

S^i = {(x_0^i, y_0^i), (x_1^i, y_1^i), ..., (x_N^i, y_N^i)}    (1)

where N is the number of frames in the expression sequence. An average feature point position corresponding to each feature point is computed by averaging the feature point positions in the neutral face images, i.e., in the first frames of the video shots. Let (μ_{x_0^i}, μ_{y_0^i}) denote the average position of the i-th key point in the first frame of the expression sequences.
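To make the tracking step concrete, the following is a minimal sketch rather than the authors' implementation: it assumes the 52 initial point coordinates are already available from the EBGM step (which is not reproduced here) and uses OpenCV's pyramidal KLT routine cv2.calcOpticalFlowPyrLK with illustrative window and pyramid settings.

```python
import cv2
import numpy as np

def track_facial_points(frames, init_points):
    """Track facial key points through a gray-scale frame sequence with a
    pyramidal KLT tracker. `frames` is a list of uint8 gray images and
    `init_points` a (K, 2) float32 array of point coordinates in frames[0]
    (in the paper these come from EBGM initialization)."""
    lk_params = dict(winSize=(21, 21), maxLevel=3,
                     criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    prev = frames[0]
    pts = init_points.reshape(-1, 1, 2).astype(np.float32)
    trajectory = [pts.reshape(-1, 2).copy()]
    for frame in frames[1:]:
        pts, status, _ = cv2.calcOpticalFlowPyrLK(prev, frame, pts, None, **lk_params)
        trajectory.append(pts.reshape(-1, 2).copy())
        prev = frame
    # trajectory[l][i] is (x, y) of key point i in frame l, i.e. S^i in Eq. (1)
    return np.stack(trajectory)  # shape (num_frames, K, 2)
```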

Let (δ_{x_0^i}, δ_{y_0^i}) denote the displacement of the i-th key point in the first frame of the expression sequence with respect to the average key point position:

(δ_{x_0^i}, δ_{y_0^i}) = (μ_{x_0^i} - x_0^i, μ_{y_0^i} - y_0^i)    (2)

This key point displacement is then added to the key point positions in every frame of the expression sequence. The transformed key point tracking result is denoted by S̄^i and is defined as:

S̄^i = {(x_0^i + δ_{x_0^i}, y_0^i + δ_{y_0^i}), (x_1^i + δ_{x_0^i}, y_1^i + δ_{y_0^i}), ..., (x_N^i + δ_{x_0^i}, y_N^i + δ_{y_0^i})}    (3)

Fig. 3 shows the result of facial feature tracking and the corresponding result after graph normalization. Note that the graph is also scaled in order to make its size uniform, and that the lines connecting pairs of feature points are drawn only to give a face-like appearance.

Fig. 3. Example of facial feature point tracking and the corresponding result after normalization for two surprise facial expression sequences from the MMI database.

3.2. Point Based Geometric Features

Let (x̄_l^i, ȳ_l^i) be the normalized coordinate of the i-th key point in the face graph, and let us rewrite Eq. (3) in the following form:

S̄^i = {(x̄_0^i, ȳ_0^i), (x̄_1^i, ȳ_1^i), ..., (x̄_N^i, ȳ_N^i)}    (4)

The number of frames can differ between video shots of facial expressions. To make the feature extraction and feature selection process easy, the feature point tracking result is resized to a fixed length using linear interpolation. In our experiments each sequence is resampled to N = 10 frames.
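A compact sketch of the normalization and resampling steps above (Eqs. (2)-(3) and the linear-interpolation resampling) is given below. The uniform re-scaling of the graph mentioned in the text is omitted, and the mean neutral-frame positions are assumed to have been computed beforehand over the training sequences.

```python
import numpy as np

def normalize_and_resample(traj, mean_first_frame, n_out=10):
    """traj: (N, K, 2) tracked positions; mean_first_frame: (K, 2) average
    neutral-frame positions. Returns the shifted trajectory resampled to
    n_out frames with linear interpolation."""
    delta = mean_first_frame - traj[0]        # Eq. (2): per-point offset in frame 0
    shifted = traj + delta[None, :, :]        # Eq. (3): same offset added to every frame
    n_in = shifted.shape[0]
    t_in = np.linspace(0.0, 1.0, n_in)
    t_out = np.linspace(0.0, 1.0, n_out)
    resampled = np.empty((n_out, shifted.shape[1], 2))
    for k in range(shifted.shape[1]):         # interpolate each coordinate track
        for c in range(2):
            resampled[:, k, c] = np.interp(t_out, t_in, shifted[:, k, c])
    return resampled
```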

The feature point displacement in each frame with respect to the first frame is then calculated. Let (Δx_l^i, Δy_l^i) denote the difference between the i-th landmark in the l-th frame and the i-th landmark in the first frame of the facial expression sequence:

(Δx_l^i, Δy_l^i) = (x̄_l^i - x̄_0^i, ȳ_l^i - ȳ_0^i)    (5)

Eq. (6) collects all the displacements of the i-th feature point over the sequence:

ΔS^i = {(Δx_1^i, Δy_1^i), (Δx_2^i, Δy_2^i), ..., (Δx_N^i, Δy_N^i)}    (6)

3.3. Line Based Geometric Features

The geometric feature in the form of Eq. (6) considers only the tracking result of an individual feature point. However, the movements of the key points as a particular facial expression evolves are not independent, i.e., there is a definite relationship between the movements of the facial key points. In order to capture this information, a pair of feature points is considered at a time and features are extracted as components of a line. The Euclidean distance and the base angle of the line connecting a pair of facial key points within a frame are calculated as the line based geometric features. Let (d_l, θ_l)_{i,j} denote the Euclidean distance and the angle between the i-th and j-th key points in the l-th frame of the facial expression sequence:

d_{l,(i,j)} = sqrt((x̄_{l,i} - x̄_{l,j})^2 + (ȳ_{l,i} - ȳ_{l,j})^2)    (7)

θ_{l,(i,j)} = arctan((ȳ_{l,i} - ȳ_{l,j}) / (x̄_{l,i} - x̄_{l,j}))    (8)

The calculated sequence of distances and angles is denoted by L_{i,j} and defined by Eq. (9):

L_{i,j} = {(d_0, θ_0)_{i,j}, (d_1, θ_1)_{i,j}, ..., (d_N, θ_N)_{i,j}}    (9)

The distance and angle between a pair of landmarks are then referenced to the corresponding distance and angle in the first frame of the video shot. Let (Δd_l, Δθ_l)_{i,j} denote the change in distance and angle between the i-th and j-th key points in the l-th frame with respect to the first frame:

(Δd_l, Δθ_l)_{i,j} = (d_l - d_0, θ_l - θ_0)_{i,j}    (10)

Finally, the line based geometric feature extracted from the image sequence is defined as:

ΔL_{i,j} = {(Δd_1, Δθ_1)_{i,j}, (Δd_2, Δθ_2)_{i,j}, ..., (Δd_N, Δθ_N)_{i,j}}    (11)
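The line components of Eqs. (7)-(11) can be computed directly from the normalized trajectory. The sketch below uses arctan2 rather than the plain arctangent of Eq. (8) so that the angle stays well defined when the two points share an x coordinate; this is a deviation from the formula as written.

```python
import numpy as np

def line_features(traj, i, j):
    """Distance/angle sequence for the line joining key points i and j
    (Eqs. (7)-(9)) and its change w.r.t. the first frame (Eqs. (10)-(11)).
    `traj` is the normalized (N, K, 2) trajectory."""
    dx = traj[:, i, 0] - traj[:, j, 0]
    dy = traj[:, i, 1] - traj[:, j, 1]
    d = np.sqrt(dx ** 2 + dy ** 2)            # Eq. (7): Euclidean distance per frame
    theta = np.arctan2(dy, dx)                # Eq. (8), using arctan2 instead of arctan
    delta_d = d[1:] - d[0]                    # Eq. (10): change w.r.t. first frame
    delta_theta = theta[1:] - theta[0]
    return np.stack([delta_d, delta_theta], axis=1)   # Eq. (11): shape (N-1, 2)
```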

3.4. Triangle Based Geometric Features

Here, three facial landmarks are considered at a time and features are extracted in the form of triangle components. The information about the movements of facial key points, and the relationships between those movements as a facial expression evolves over time, can be captured better by considering three facial key points at a time than by considering two. The triangle components in the l-th frame are subtracted from the triangle components in the first frame of the video sequence, as shown in Fig. 4.

Fig. 4. Difference in the components of two triangles used as features. The vertices of each triangle correspond to the facial key points in two frames of the video sequence.

Let (a_l, b_l, α_l, β_l)_{i,j,m} denote the two side lengths, the included angle, and the base angle of the triangle composed of the i-th, j-th, and m-th key points of the face graph in the l-th frame of the facial expression sequence. The calculated sequence of triangle components, as shown in Fig. 4, is denoted by T_{i,j,m} and defined as in Eq. (12):

T_{i,j,m} = {(a_0, b_0, α_0, β_0)_{i,j,m}, (a_1, b_1, α_1, β_1)_{i,j,m}, ..., (a_N, b_N, α_N, β_N)_{i,j,m}}    (12)

The triangle components in each frame of the sequence are then referenced to the corresponding components in the first frame of the video sequence. Let (Δa_l, Δb_l, Δα_l, Δβ_l)_{i,j,m} denote the difference between the components of the triangle in the l-th frame and the components of the corresponding triangle in the first frame:

(Δa_l, Δb_l, Δα_l, Δβ_l)_{i,j,m} = (a_l - a_0, b_l - b_0, α_l - α_0, β_l - β_0)_{i,j,m}    (13)

Finally, the triangle based geometric feature extracted from the image sequence is defined as:

ΔT_{i,j,m} = [(Δa_1, Δb_1, Δα_1, Δβ_1)_{i,j,m}, (Δa_2, Δb_2, Δα_2, Δβ_2)_{i,j,m}, ..., (Δa_N, Δb_N, Δα_N, Δβ_N)_{i,j,m}]    (14)

If there are N frames in the sequence, the feature vector is composed of (N - 1) × 4 components, i.e., if N = 11, the feature dimension for a sequence extracted from a single triangle is (11 - 1) × 4 = 40.
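A sketch of the per-triangle computation of Eqs. (12)-(14) follows. The exact component definitions are given by Fig. 4, which is not reproduced here, so treating vertex i as the apex of the included angle and side i-j as the base is an assumption of this sketch rather than a detail confirmed by the text.

```python
import numpy as np

def triangle_features(traj, i, j, m):
    """Per-frame triangle components (two side lengths from vertex i, the
    included angle at i, and the base angle of side i-j with the x-axis,
    cf. Fig. 4), and their change w.r.t. the first frame (Eqs. (12)-(14))."""
    p_i, p_j, p_m = traj[:, i], traj[:, j], traj[:, m]     # (N, 2) each
    v1, v2 = p_j - p_i, p_m - p_i
    a = np.linalg.norm(v1, axis=1)                          # side i-j
    b = np.linalg.norm(v2, axis=1)                          # side i-m
    cos_alpha = np.sum(v1 * v2, axis=1) / np.clip(a * b, 1e-8, None)
    alpha = np.arccos(np.clip(cos_alpha, -1.0, 1.0))        # included angle at vertex i
    beta = np.arctan2(v1[:, 1], v1[:, 0])                   # base angle of side i-j
    comps = np.stack([a, b, alpha, beta], axis=1)           # Eq. (12): (N, 4)
    return comps[1:] - comps[0]                             # Eqs. (13)-(14): (N-1, 4)
```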

3.5. Feature Selection using Multi-class AdaBoost with ELM

The geometric features are extracted in the form of line and triangle components. In total there are 52 facial key points. By the combination principle, with 52 facial points there are C(52, 2) = 52!/(2!(52 - 2)!) = 1326 unique lines and C(52, 3) = 52!/(3!(52 - 3)!) = 22,100 unique triangles. If we used all of them to extract features for classification, the feature dimension would be very large. In the feature extraction process, each point, line, and triangle in the face graph is represented by Eq. (6), Eq. (11), and Eq. (14), respectively. We refer to these representations as feature vectors, because in this paper feature selection means the selection of the lines or triangles whose components provide discriminative features for the recognition of facial expressions. Among the large number of feature vectors, only a small subset provides discriminative information for recognizing facial expressions. Our goal is to find this subset of lines and triangles using a feature selection scheme. Here we use a feature-selective AdaBoost algorithm in combination with ELM.

3.5.1. Extreme Learning Machine

Gradient based learning algorithms are very slow and may easily converge to local minima; they also require many iterative learning steps in order to obtain good learning performance. ELM, a fast learning algorithm for single-hidden-layer feed-forward neural networks (SLFNs) proposed by Huang et al. [40], avoids gradient-based learning by analytically calculating the optimal output weights of the SLFN. First, the weights between the input layer and the hidden layer are randomly selected, and then the optimal values of the weights between the hidden layer and the output layer are determined by solving a linear matrix equation. In summary, the ELM algorithm can be written as follows:

Algorithm 1. Summary of the extreme learning machine (ELM) algorithm.
Given a training set {(x_i, t_i) | x_i ∈ R^n, t_i ∈ R^m, i = 1, ..., N}, a hidden node output function g(w, b, x), and the number of hidden nodes L:
a. Randomly assign the hidden node parameters (w_i, b_i), i = 1, ..., L.
b. Calculate the hidden layer output matrix H.
c. Calculate the output weights β: β = H†T, where H† is the Moore-Penrose generalized inverse of the hidden layer output matrix H.

The multi-class AdaBoost algorithm (Algorithm 2) is used to select the salient lines and triangles, with ELM serving as the weak classifier inside AdaBoost. ELM itself is not a weak classifier, but in the proposed system it is treated as a weak classifier with respect to the features, i.e., each ELM is trained using the features extracted from a single line or a single triangle. The reason for selecting ELM as the weak classifier is that it is a very fast learning algorithm and can be trained almost in real time. In the proposed feature selection scheme we have to train 1326 ELMs for line feature selection and 22,100 ELMs for triangle feature selection.
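The following is a minimal NumPy sketch of Algorithm 1. The sigmoid hidden activation, the hidden-layer size, and the standard-normal weight initialization are illustrative choices rather than the paper's settings, and the targets T are assumed to be one-hot encoded class labels.

```python
import numpy as np

class ELM:
    """Minimal single-hidden-layer ELM (Algorithm 1): random input weights,
    sigmoid hidden layer, output weights via the Moore-Penrose pseudo-inverse."""
    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))   # g(w, b, x)

    def fit(self, X, T):
        # Step (a): randomly assign hidden node parameters (w_i, b_i)
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = self._hidden(X)                                    # Step (b): hidden output matrix
        self.beta = np.linalg.pinv(H) @ T                      # Step (c): beta = H^+ T
        return self

    def predict(self, X):
        return np.argmax(self._hidden(X) @ self.beta, axis=1)  # class = argmax of output layer
```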

3.5.2. Feature Selective Multi-class AdaBoost

The AdaBoost learning algorithm proposed by Freund and Schapire [41], in its original form, is used to boost the classification performance of a simple learning algorithm. In our system, a variant of multi-class AdaBoost proposed by Zhu et al. [42] is used to select the lines or triangles from which features will be extracted for FER.

Algorithm 2. Multi-class AdaBoost learning algorithm. M hypotheses are constructed, each using a single feature vector; the final hypothesis is a weighted linear combination of the M hypotheses.
1. Initialize the observation weights w_i = 1/n, i = 1, 2, ..., n.
2. For m = 1 to M:
   a. Normalize the weights: w_i ← w_i / Σ_{j=1}^{n} w_j.
   b. Select the best weak classifier with respect to the weighted error:
      err^(m) = min_f Σ_{i=1}^{n} w_i · I(c_i ≠ T(x_i, f)).
   c. Define T^(m)(x) = T(x, f_m), where f_m is the minimizer of err^(m).
   d. Compute
      α^(m) = log((1 - err^(m)) / err^(m)) + log(K - 1).    (15)
   e. Update the weights: w_i ← w_i · exp(α^(m) · I(c_i ≠ T^(m)(x_i))), i = 1, ..., n.
3. The final strong classifier is: C(x) = arg max_k Σ_{m=1}^{M} α^(m) · I(T^(m)(x) = k).

Algorithm 2 is the variant of multi-class AdaBoost proposed in [42], where it is referred to as SAMME (Stagewise Additive Modeling using a Multi-class Exponential loss function). The weak classifier T(x, f) in our system is an ELM network trained using the features extracted from a single line or triangle. Note that line selection and triangle selection are performed independently, but the selection procedure is the same in both cases. The multi-class AdaBoost algorithm in Algorithm 2 is similar to binary AdaBoost, with the major difference being Eq. (15): for α^(m) to be positive we only need (1 - err^(m)) > 1/K, where K is the number of classes, i.e., the accuracy of each weak classifier only has to be better than random guessing rather than better than 1/2.

Feature extraction based on the tracking result of an individual landmark is a simple process; it gives the maximum displacement of the feature point in the four directions as an expression evolves over time. Feature selection is therefore applied only to the line and triangle based feature extraction processes. Before creating the feature vector for expression recognition, the feature selection process selects those lines and triangles which carry most of the information for discriminating the six basic facial expressions. Fig. 5 and Fig. 6 show the most discriminative line and triangle features in the three data sets (line and triangle selection are performed independently). Mostly, the selected lines and triangles are composed of landmarks from the eyebrow, mouth, and nose areas; in most cases, at least one of the vertices of a selected triangle lies in the eyebrow, mouth, or nose region.
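A sketch of the selection loop of Algorithm 2 is shown below. The candidate pool, the weak-learner training function, and the number of rounds are placeholders; in the paper each weak learner is an ELM trained on the feature matrix of one candidate line or triangle, and the selected candidates supply the final feature vector.

```python
import numpy as np

def samme_boost(candidate_feats, labels, n_rounds, n_classes, train_weak):
    """candidate_feats: list of per-candidate feature matrices (one per line or
    triangle); labels: integer class array of length n; train_weak(X, y, w)
    returns a fitted classifier with a .predict method (an ELM in the paper),
    trained with sample weights w (e.g. via weighted resampling). Returns the
    indices of the selected candidates and their weights alpha."""
    labels = np.asarray(labels)
    n = len(labels)
    w = np.full(n, 1.0 / n)                                   # step 1
    selected, alphas = [], []
    for _ in range(n_rounds):
        w = w / w.sum()                                       # step 2a
        best = None
        for idx, X in enumerate(candidate_feats):             # step 2b: best weak classifier
            clf = train_weak(X, labels, w)
            miss = clf.predict(X) != labels
            err = np.sum(w * miss)
            if best is None or err < best[1]:
                best = (idx, err, miss)
        idx, err, miss = best
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = np.log((1 - err) / err) + np.log(n_classes - 1)   # Eq. (15)
        w = w * np.exp(alpha * miss)                          # step 2e
        selected.append(idx)
        alphas.append(alpha)
    return selected, alphas
```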

Fig. 5. Set of lines selected using multi-class AdaBoost with ELM on the three data sets (left to right: CK+, MMI, and MUG).

Fig. 6. First 10, 20, and 30 triangular features selected using multi-class AdaBoost with ELM. First row: CK+ data set, second row: MMI data set, and third row: MUG data set.

5. Experimental Setup and Data Set Description

In order to assess the reliability of the proposed FER approach, its performance is evaluated on three different databases: the extended Cohn-Kanade (CK+) facial expression data set [43], the M&M Initiative (MMI) data set [44], and the Multimedia Understanding Group (MUG) data set [45]. These data sets consist of facial expression image sequences or videos which start from a neutral frame and evolve to the peak facial expression intensity.

The most common approach for testing the generalization performance of a classifier is K-fold cross-validation. Ten-fold cross-validation was used in order to make maximum use of the available data and to produce averaged classification accuracy results; the reported classification accuracy is the average accuracy across all ten trials.

To give a better picture of the recognition accuracy for each expression type, confusion matrices are also provided. The diagonal entries of a confusion matrix are the rates of facial expressions that are correctly classified, while the off-diagonal entries correspond to misclassification rates.

Fig. 7. Example facial expression sequences from the three data sets.

The SVM is a well-known classifier, noted for its generalization capability; SVM classifiers maximize the hyperplane margin between classes. In our experiments we use a publicly available implementation of the SVM, libsvm [46], with the radial basis function (RBF) kernel. Optimal parameter selection is performed using the grid search strategy [47].
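The classification protocol just described (an RBF-kernel SVM with grid-searched parameters, evaluated by ten-fold cross-validation) can be sketched as follows. scikit-learn's SVC, which wraps libsvm, is used here as a convenient stand-in; the grid values and the added standardization step are common illustrative choices, not the paper's settings.

```python
from sklearn.model_selection import GridSearchCV, cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def evaluate_features(X, y):
    """Grid-search (C, gamma) for an RBF-kernel SVM, then report the average
    accuracy over ten stratified cross-validation folds."""
    param_grid = {"svc__C": [1, 10, 100, 1000],
                  "svc__gamma": [1e-4, 1e-3, 1e-2, 1e-1]}     # example grid only
    pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    search = GridSearchCV(pipe, param_grid, cv=5)
    search.fit(X, y)
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    scores = cross_val_score(search.best_estimator_, X, y, cv=cv)
    return scores.mean()
```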

A brief introduction to the three data sets used in this paper is given below.

Extended Cohn-Kanade (CK+) data set: The extended Cohn-Kanade (CK+) data set [43] was used for FER with the six basic facial expression classes (anger, disgust, fear, happiness, sadness, and surprise). This database consists of 593 sequences from 123 subjects. The image sequences vary in duration (from 7 to 60 frames) and run from the onset (which is also the neutral face) to the peak formation of the facial expression. Image sequences from neutral to target display were digitized into fixed-size pixel arrays. Only 327 of the 593 sequences have a given emotional class label, because these are the only ones that fit the prototypic definition. For the evaluation of the proposed FER system, 315 sequences are selected from the database. Fig. 7 (first row) shows an example of a facial expression sequence from the CK+ data set.

M&M Initiative (MMI) data set: The MMI face data set [44] contains more than 1500 samples of both static images and image sequences of faces in frontal and profile view, displaying various facial expressions of emotion, single AU activations, and multiple AU activations. It contains not only posed but also spontaneous expressions of facial behavior. There are approximately 30 profile-view and 750 dual-view facial expression video sequences. All video sequences were recorded at a rate of 24 frames per second using a standard PAL camera. The data set includes 19 different faces of students and research staff members of both sexes (44% female), ranging in age from 19 to 62 and having either a European, Asian, or South American ethnic background. A total of 203 facial expression video sequences are chosen for the evaluation of the proposed FER system. Fig. 7 (second row) shows an example of a facial expression sequence from the MMI database.

Multimedia Understanding Group (MUG) data set: Image sequences in the MUG data set [45] begin and end at the neutral state and follow the onset, apex, offset temporal pattern. For each of the six basic expressions, a few image sequences of various lengths are recorded; each image sequence contains 50 to 160 images. Prior to the recordings, a short tutorial about the basic emotions was given to the subjects. The recordings of 77 subjects are available to researchers. The database includes 86 subjects of Caucasian origin between 20 and 35 years of age; there are 35 females and 51 males, with or without a beard. The recorded sequences consist of images saved in high-quality lossy JPEG format, with a size ranging from 240 to 340 KB per image. Image sequences of 52 subjects and the corresponding annotations are publicly available via the internet. In the proposed system, 325 sequences are selected for the experiments. Fig. 7 (last row) shows an example of a facial expression sequence from the MUG database.

Table 1 lists the number of facial expression image/video sequences for each expression from each data set used in the experiments with the proposed FER system.

Table 1. Number of facial expression image/video sequences per expression (anger, disgust, fear, happiness, sadness, surprise, and total) in the CK+, MMI, and MUG data sets.

6. Experimental Results and Discussion

6.1. Facial Expression Recognition using Point Based Features

In this paper, three different types of facial geometric features are used individually for the recognition of facial expressions. As explained in Section 3.2, the point based feature refers to the geometric feature describing an individual facial feature point's displacement in the four possible directions. The features used for SVM classification, computed from the i-th facial key point of a facial expression sequence, are:

Δx_{i,max} = max(Δx_{1,i}, Δx_{2,i}, ..., Δx_{N,i}),  Δx_{i,min} = min(Δx_{1,i}, Δx_{2,i}, ..., Δx_{N,i}),
Δy_{i,max} = max(Δy_{1,i}, Δy_{2,i}, ..., Δy_{N,i}),  Δy_{i,min} = min(Δy_{1,i}, Δy_{2,i}, ..., Δy_{N,i}).

A total of 52 facial key points are tracked, so the dimensionality of the point based feature vector is 52 × 4 = 208.
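A sketch of the point based feature vector described above: for each tracked key point, the maximum and minimum x and y displacements with respect to the first frame are collected, giving four values per point (208 dimensions for 52 points).

```python
import numpy as np

def point_feature_vector(traj):
    """traj: normalized (N, K, 2) trajectory. Returns a (4*K,) vector holding
    the max/min x and y displacements of every key point w.r.t. frame 0."""
    disp = traj[1:] - traj[0]                            # Eqs. (5)-(6): (N-1, K, 2)
    feats = np.concatenate([disp[:, :, 0].max(axis=0),   # max dx per point
                            disp[:, :, 0].min(axis=0),   # min dx per point
                            disp[:, :, 1].max(axis=0),   # max dy per point
                            disp[:, :, 1].min(axis=0)])  # min dy per point
    return feats
```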

The average recognition accuracy using the point based features with ten-fold cross-validation is 96.37%, 67.64%, and 91.41% on the CK+, MMI, and MUG facial expression data sets, respectively. Tables 2, 3, and 4 show the corresponding confusion matrices (rows labeled with the point based feature representation) along with those for the line and triangle based features.

Table 2. Confusion matrix for FER in percentages using the SVM classifier with the different types of geometric feature representation on the CK+ data set (rows: each true expression under the point, line, and triangle representations; columns: the six predicted expressions).

Table 3. Confusion matrix for FER in percentages using the SVM classifier with the different types of geometric feature representation on the MMI data set (rows and columns as in Table 2).

Table 4. Confusion matrix for FER in percentages using the SVM classifier with the different types of geometric feature representation on the MUG data set (rows and columns as in Table 2).

6.2. Facial Expression Recognition using Boosted Line Based Features

As explained in Section 3.3, a line is created by connecting two facial key points, and with 52 facial key points 1326 unique lines are possible. However, the features from only a subset of those lines are sufficient to learn the basic facial expressions, so the AdaBoost algorithm is used to select the discriminating lines from which the features for SVM classification are extracted. The magnitudes of the changes in length and base angle with respect to the neutral frame, taken from the line based representation of Eq. (11), are used as the features:

Δd_{(i,j),max} = max(|Δd_{1,(i,j)}|, |Δd_{2,(i,j)}|, ..., |Δd_{N,(i,j)}|),
Δθ_{(i,j),max} = max(|Δθ_{1,(i,j)}|, |Δθ_{2,(i,j)}|, ..., |Δθ_{N,(i,j)}|).

The average recognition accuracy using the line based features with ten-fold cross-validation is 96.58%, 74.31%, and 94.13% on the CK+, MMI, and MUG data sets, respectively, an improvement over the point based features. Tables 2, 3, and 4 show the corresponding confusion matrices (rows labeled with the line based feature representation) along with those for the point and triangle based features.
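A sketch of the boosted line feature vector follows: for each AdaBoost-selected pair of key points, the maximum absolute change in length and in base angle over the sequence is kept, reusing the line_features sketch from Section 3.3. The list of selected pairs is assumed to come from the AdaBoost selection step.

```python
import numpy as np

def line_feature_vector(traj, selected_pairs):
    """Two values (max |delta d|, max |delta theta|) per AdaBoost-selected
    pair (i, j), concatenated into one per-sequence vector."""
    feats = []
    for i, j in selected_pairs:
        lf = line_features(traj, i, j)        # (N-1, 2), from the earlier sketch
        feats.extend(np.abs(lf).max(axis=0))  # max |delta d|, max |delta theta|
    return np.asarray(feats)
```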

6.3. Facial Expression Recognition using Boosted Triangle Based Features

The overall procedure for extracting the triangular features is explained in Section 3.4. Multi-class AdaBoost with ELM is used to select the most discriminative features in the form of triangles composed of facial landmarks. A triangle in the proposed system, formed from facial key points, is represented by four components: two side lengths, an included angle, and the base angle of one of the triangle sides with the x-axis (see Fig. 4). As shown in Fig. 4, the features for SVM classification are extracted by subtracting triangle components between frames: the maximum changes in magnitude of the four components of the triangle over the sequence, with respect to the triangle components in the first frame, are extracted. Each triangle therefore contributes four features, but some triangles in the AdaBoost-selected set may share a common edge, so the total feature dimension is always less than or equal to four times the number of selected triangles. As the number of triangles in the set increases, the classification accuracy also increases. Fig. 8 plots the recognition accuracy against the number of triangular features on the MUG facial expression data set.

Fig. 8. Recognition accuracy for different numbers of boosted triangular features on the MUG data set.

Tables 2, 3, and 4 show the confusion matrices for FER using the features extracted from 160, 84, and 98 AdaBoost-selected triangles on the CK+, MMI, and MUG facial expression data sets, respectively (rows labeled with the triangle based feature representation), along with those for the point and line based features. The dimensionality of the feature vector using 160, 84, and 98 triangles on the CK+, MMI, and MUG data sets is 370, 317, and 330, respectively, and the average recognition accuracies are 97.80%, 77.22%, and 95.50%, respectively.

We also performed the experiment with a reduced number of key points. As shown in Fig. 9, tracking results of 25 and 34 key points are also used to select the triangular features. The first set of 25 key points is the same set used in [39] for the comparison of face recognition algorithms. The second set of 34 key points is obtained by adding further key points to the 25-point set, especially in the mouth and eyebrow regions. Finally, the 52 key points are the set used in [25], which is an extension of the 34-point set.

Fig. 9. Three different sets of facial landmarks used for the evaluation of the proposed triangular feature based FER system.
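A sketch of the boosted triangle feature vector, reusing the triangle_features sketch from Section 3.4: for each AdaBoost-selected triple of key points, the maximum absolute change of each of the four triangle components is kept. The deduplication of components shared between selected triangles, mentioned above, is omitted here.

```python
import numpy as np

def triangle_feature_vector(traj, selected_triangles):
    """Four values (max absolute change of a, b, alpha, beta) per
    AdaBoost-selected triple (i, j, m), concatenated into one vector."""
    feats = []
    for i, j, m in selected_triangles:
        tf = triangle_features(traj, i, j, m)   # (N-1, 4), from the earlier sketch
        feats.extend(np.abs(tf).max(axis=0))
    return np.asarray(feats)
```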

Fig. 10 compares the FER accuracy on the three data sets using triangle based features computed from the different numbers of tracked facial key points. On the CK+ data set, recognition accuracies of 97.80%, 97.29%, and 93.37% are obtained; on the MMI data set, 77.22%, 71.11%, and 68.61%; and on the MUG data set, 95.50%, 93.92%, and 93.00%, using features from the tracking results of 52, 34, and 25 facial landmarks, respectively. Thus, even when a small number of facial landmarks is used, good recognition accuracy can be obtained with the proposed triangle based geometric features.

Fig. 10. Comparison of FER accuracy on the three data sets with triangular features extracted from the different sets of landmark tracking results.

6.4. Comparison of Point, Line and Triangle Feature Based Facial Expression Recognition

The feature extracted from the tracking result of a single landmark is very simple: it gives the maximum displacement of the feature point in the four directions as the expression evolves over time. The second type of geometric feature is extracted from the line connecting two facial landmarks. Finally, the third type of geometric feature is extracted in the form of components of triangles composed of facial key points. Fig. 11 compares the FER performance using the three kinds of geometric features on the three data sets. The average classification accuracy using the point, line, and triangle features is 96.37%, 96.58%, and 97.80% on the CK+ data set, 67.64%, 74.31%, and 77.22% on the MMI data set, and 91.41%, 94.13%, and 95.50% on the MUG data set, respectively. The features extracted in the form of line components give better results than the point based features, and the features extracted in the form of triangle components give better results than the line based features. This confirms that, while a facial expression evolves, the movements of the facial key points are not independent, i.e., there is a definite relationship between the movements of the facial key points.

Fig. 11. Comparison of point, line and triangle feature based FER on the three data sets.

The performance on the MMI data set is low compared to the CK+ and MUG data sets; MMI is the most difficult of the three. Even though the line based features give better results than the point based features, and the triangle based features are superior to the line based features, the point, line, and triangle features produce comparable results on the CK+ and MUG data sets. In conclusion, the best result on all three data sets is obtained using the geometric features extracted from the triangles composed of facial landmarks.

6.5. Generalization Validation across Different Databases

The generalization performance of an FER system is best evaluated using cross-data-set evaluation. Most researchers use the same data set for both training and validation. It is obvious that a higher recognition rate can be achieved when evaluating on a single data set, because facial expressions recorded in the same lab environment share many similarities, for example in lighting, background, the way of expressing the emotion, image quality, and image resolution. If we take two different data sets, there will be much more dissimilarity. An FER system is only a good one if it produces good results when evaluated on a testing data set different from the training data set. Table 5 shows the confusion matrix for FER using the triangle based geometric features when the CK+ data set is used for training and the MMI data set for testing; the average recognition accuracy in this case is only 64.89%. Table 6 shows the confusion matrix for FER using the MUG data set for training and the CK+ data set for testing; in this case the average recognition accuracy is 81.74%. Table 7 summarizes the cross-data-set evaluation results for the three data sets, reporting only the average recognition accuracies. From these tables it can be seen that when the MMI data set is used for either training or testing, the recognition accuracy is low, whereas when the CK+ and MUG data sets are used for cross-data-set evaluation the recognition performance is above 80%. This again shows that MMI is the most difficult of the three data sets used in this paper. The cross-data-set evaluation is performed only for the triangle based features, because this feature set gives the best recognition accuracy compared to the point and line based features.
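A sketch of the cross-data-set protocol described above: train the RBF-kernel SVM on the triangle features of one data set and evaluate it on another. The classifier parameters shown are placeholders, not the grid-searched values used in the paper.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, confusion_matrix

def cross_dataset_eval(X_train, y_train, X_test, y_test):
    """Train on one data set (e.g. CK+ triangle features) and evaluate on
    another (e.g. MMI); returns accuracy and the confusion matrix."""
    model = make_pipeline(StandardScaler(),
                          SVC(kernel="rbf", C=10, gamma=1e-3))  # placeholder parameters
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    return accuracy_score(y_test, pred), confusion_matrix(y_test, pred)
```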

Table 5. Confusion matrix for FER in percentages using the SVM with boosted triangle based features, with the CK+ data set used for training and the MMI data set for testing.

Table 6. Confusion matrix for FER in percentages using the SVM with boosted triangle based features, with the MUG data set used for training and the CK+ data set for testing.

Table 7. Cross-data-set evaluation performance showing the average recognition accuracies (rows: training data set; columns: testing data set; same-data-set entries are excluded).

6.6. Comparison with State-of-the-art Methods

Even though the experimental setups are not exactly the same, the overall recognition accuracies of some recent FER methods from the literature are compared with the accuracy obtained using the proposed system. The geometric feature based FER system in which features in the form of triangles are selected and recognition is performed using SVM classification gives the best recognition accuracies of 97.8% on the CK+ data set, 77.22% on the MMI data set, and 95.5% on the MUG data set. In the literature so far, the system in [26] has shown superior performance, achieving a 99.7% recognition rate on the CK+ data set using key point displacement features. However, in their method the facial key point initialization is a manual process, and the number of key points is also larger than the number used in the proposed method, whereas the proposed system is fully automatic. Similarly, in [48], a 97.16% recognition rate was achieved by extracting the most discriminative facial key points for each facial expression. Recently, on the CK+ data set, [31] achieved 83.01% recognition accuracy with geometric features extracted from only 8 facial key points in a single highly expressive facial expression frame. In [14], using 96 image sequences from the MMI data set with LBP features, an average recognition accuracy of 86.9% was achieved. Recently, Albert et al. [32] achieved 71.83% recognition accuracy on the MMI data set using attention-theory based automatic sampling and optical flow as a temporal feature. In the proposed system, 203 image sequences are used from the MMI data set, some of which are not acted facial expressions, i.e., they are naturally expressed facial expressions, which adds to the difficulty of recognizing facial expressions with high accuracy.

Rahulamathavan et al. [49] achieved 95.24% overall recognition accuracy on the MUG facial expression data set by performing FER in the encrypted domain using local Fisher discriminant analysis. Recently, in [50], 92.76% recognition accuracy was obtained on the MUG data set using a leave-one-subject-out validation strategy, with results also reported on the CK+ data set; the manifold structure is learned from the coordinates of the facial key point tracking results, which can be decomposed into a small number of linear subspaces of very low dimension. Table 8 summarizes the comparison of FER performance with different methods in the literature. One advantage of the proposed geometric feature based FER system is the relatively low dimension of the feature vector compared to the feature dimensions of the state-of-the-art methods. The recognition accuracy on the CK+ and MUG data sets is comparable to, and in some cases even better than, the best recognition accuracy in the literature; on the MMI data set the best accuracy in the literature is obtained using texture features rather than geometric features.

Table 8. Comparison of FER performance with different methods in the literature (reference, method, data set(s)):
[26] Semi-automatic, facial key point displacement features, SVM classifier - CK+
[48] Most discriminative facial key points for each facial expression - CK+
[31] Geometric features from 8 facial key points, SVM classifier - CK+
[14] Boosted LBP features, SVM classifier - MMI
[32] Attention-theory based automatic sampling and optical flow as temporal features - MMI
[49] Local Fisher discriminant analysis in the encrypted domain - MUG
[50] Manifold structure learning using coordinates of facial key point tracking results - CK+, MUG
[21] Graph-preserving sparse NMF - CK+
[38] Enhanced independent component analysis, FLDA - CK+
[30] Geometric features, dynamic Bayesian network - CK+
Ours: Fully automatic, triangle based geometric feature representation, salient feature selection, SVM classifier - CK+, MMI, MUG

7. Conclusion

The aim of this paper is to present a new framework for FER in frontal image sequences based on geometric features extracted from the tracking results of facial key points. Different techniques for extracting geometric features from facial expression image sequences are presented, and the facial expressions are recognized using the most discriminative geometric features selected with a feature-selective AdaBoost algorithm. Point, line, and triangle based features are presented; the point based features can be used directly, whereas the line and triangle based features are used only after the feature selection process. The performance of the proposed geometric feature based FER system is evaluated on three different data sets, namely CK+, MMI, and MUG. The line based features give better results than the point based features, and the triangle based features give better results than the line based features. Therefore, the recognition accuracy obtained using features that consider more key points at a time is better than that obtained using features that consider a single key point at a time.

The most desirable feature would therefore be the time-varying graph itself. However, the graph cannot be used directly, so efficient features must be derived from it which do not reduce the information contained in the graph. The recognition accuracy on the CK+ and MUG data sets is above 95%, whereas on the MMI data set the recognition accuracy is only 77.22%; the MMI data set is relatively difficult compared to CK+ and MUG because it includes some spontaneous facial expressions. The generalization capability of the proposed FER system is demonstrated using cross-data-set evaluation: more than 80% recognition accuracy is obtained when the CK+ and MUG data sets are used for cross-data-set training and testing. Compared with the state-of-the-art methods, the performance of the proposed system is comparable to, and in most cases even better than, the results reported in the literature.

References

[1] A. Mehrabian, "Communication without words," Psychology Today, vol. 2.
[2] P. Ekman, "Strong evidence of universals in facial expressions: A reply to Russell's mistaken critique," Psychological Bulletin, vol. 115.
[3] Y. T. Uhls, M. Michikyan, J. Morris, D. Garcia, G. W. Small, E. Zgourou, and P. M. Greenfield, "Five days at outdoor education camp without screens improves preteen skills with nonverbal emotion cues," Computers in Human Behavior, vol. 39.
[4] Y.-L. Tian, T. Kanade, and J. F. Cohn, Handbook of Face Recognition, Springer: Berlin, Germany.
[5] P. Viola and M. J. Jones, "Robust real-time face detection," International Journal of Computer Vision, vol. 57.
[6] L. Wiskott, J. M. Fellous, and N. Krüger, "Face recognition by elastic bunch graph matching," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19.
[7] J. Y. Bouguet, "Pyramidal implementation of the Lucas-Kanade feature tracker," Technical Report, Intel Corporation, Microprocessor Research Labs.
[8] A. Samal and P. A. Iyengar, "Automatic recognition of human faces and facial expressions: a survey," Pattern Recognition, vol. 25.
[9] M. Pantic and L. J. M. Rothkrantz, "Automatic analysis of facial expressions: the state of the art," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22.
[10] B. Fasel and J. Luettin, "Automatic facial expression analysis: a survey," Pattern Recognition, vol. 36.
[11] Z. Zeng, M. Pantic, G. I. Roisman, and T. S. Huang, "A survey of affect recognition methods: audio, visual, and spontaneous expressions," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31.
[12] M. F. Valstar, M. Mehu, B. Jiang, M. Pantic, and K. Scherer, "Meta-analysis of the first facial expression recognition challenge," IEEE Transactions on Systems, Man, and Cybernetics-Part B: Cybernetics, vol. 42.
[13] G. Zhao and M. Pietikäinen, "Dynamic texture recognition using local binary patterns with an application to facial expressions," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, 2007.


An AAM-based Face Shape Classification Method Used for Facial Expression Recognition Internatonal Journal of Research n Engneerng and Technology (IJRET) Vol. 2, No. 4, 23 ISSN 2277 4378 An AAM-based Face Shape Classfcaton Method Used for Facal Expresson Recognton Lunng. L, Jaehyun So,

More information

Shape Representation Robust to the Sketching Order Using Distance Map and Direction Histogram

Shape Representation Robust to the Sketching Order Using Distance Map and Direction Histogram Shape Representaton Robust to the Sketchng Order Usng Dstance Map and Drecton Hstogram Department of Computer Scence Yonse Unversty Kwon Yun CONTENTS Revew Topc Proposed Method System Overvew Sketch Normalzaton

More information

User Authentication Based On Behavioral Mouse Dynamics Biometrics

User Authentication Based On Behavioral Mouse Dynamics Biometrics User Authentcaton Based On Behavoral Mouse Dynamcs Bometrcs Chee-Hyung Yoon Danel Donghyun Km Department of Computer Scence Department of Computer Scence Stanford Unversty Stanford Unversty Stanford, CA

More information

A Fast Visual Tracking Algorithm Based on Circle Pixels Matching

A Fast Visual Tracking Algorithm Based on Circle Pixels Matching A Fast Vsual Trackng Algorthm Based on Crcle Pxels Matchng Zhqang Hou hou_zhq@sohu.com Chongzhao Han czhan@mal.xjtu.edu.cn Ln Zheng Abstract: A fast vsual trackng algorthm based on crcle pxels matchng

More information

Backpropagation: In Search of Performance Parameters

Backpropagation: In Search of Performance Parameters Bacpropagaton: In Search of Performance Parameters ANIL KUMAR ENUMULAPALLY, LINGGUO BU, and KHOSROW KAIKHAH, Ph.D. Computer Scence Department Texas State Unversty-San Marcos San Marcos, TX-78666 USA ae049@txstate.edu,

More information

A Robust Method for Estimating the Fundamental Matrix

A Robust Method for Estimating the Fundamental Matrix Proc. VIIth Dgtal Image Computng: Technques and Applcatons, Sun C., Talbot H., Ourseln S. and Adraansen T. (Eds.), 0- Dec. 003, Sydney A Robust Method for Estmatng the Fundamental Matrx C.L. Feng and Y.S.

More information

Data Mining: Model Evaluation

Data Mining: Model Evaluation Data Mnng: Model Evaluaton Aprl 16, 2013 1 Issues: Evaluatng Classfcaton Methods Accurac classfer accurac: predctng class label predctor accurac: guessng value of predcted attrbutes Speed tme to construct

More information

Video-Based Facial Expression Recognition Using Local Directional Binary Pattern

Video-Based Facial Expression Recognition Using Local Directional Binary Pattern Vdeo-Based Facal Expresson Recognton Usng Local Drectonal Bnary Pattern Sahar Hooshmand, Al Jamal Avlaq, Amr Hossen Rezae Electrcal Engneerng Dept., AmrKabr Unvarsty of Technology Tehran, Iran Abstract

More information

Face Detection with Deep Learning

Face Detection with Deep Learning Face Detecton wth Deep Learnng Yu Shen Yus122@ucsd.edu A13227146 Kuan-We Chen kuc010@ucsd.edu A99045121 Yzhou Hao y3hao@ucsd.edu A98017773 Mn Hsuan Wu mhwu@ucsd.edu A92424998 Abstract The project here

More information

Fast Feature Value Searching for Face Detection

Fast Feature Value Searching for Face Detection Vol., No. 2 Computer and Informaton Scence Fast Feature Value Searchng for Face Detecton Yunyang Yan Department of Computer Engneerng Huayn Insttute of Technology Hua an 22300, Chna E-mal: areyyyke@63.com

More information

Skew Angle Estimation and Correction of Hand Written, Textual and Large areas of Non-Textual Document Images: A Novel Approach

Skew Angle Estimation and Correction of Hand Written, Textual and Large areas of Non-Textual Document Images: A Novel Approach Angle Estmaton and Correcton of Hand Wrtten, Textual and Large areas of Non-Textual Document Images: A Novel Approach D.R.Ramesh Babu Pyush M Kumat Mahesh D Dhannawat PES Insttute of Technology Research

More information

TN348: Openlab Module - Colocalization

TN348: Openlab Module - Colocalization TN348: Openlab Module - Colocalzaton Topc The Colocalzaton module provdes the faclty to vsualze and quantfy colocalzaton between pars of mages. The Colocalzaton wndow contans a prevew of the two mages

More information

Dynamic wetting property investigation of AFM tips in micro/nanoscale

Dynamic wetting property investigation of AFM tips in micro/nanoscale Dynamc wettng property nvestgaton of AFM tps n mcro/nanoscale The wettng propertes of AFM probe tps are of concern n AFM tp related force measurement, fabrcaton, and manpulaton technques, such as dp-pen

More information

A New Feature of Uniformity of Image Texture Directions Coinciding with the Human Eyes Perception 1

A New Feature of Uniformity of Image Texture Directions Coinciding with the Human Eyes Perception 1 A New Feature of Unformty of Image Texture Drectons Concdng wth the Human Eyes Percepton Xng-Jan He, De-Shuang Huang, Yue Zhang, Tat-Mng Lo 2, and Mchael R. Lyu 3 Intellgent Computng Lab, Insttute of Intellgent

More information

Human Face Recognition Using Generalized. Kernel Fisher Discriminant

Human Face Recognition Using Generalized. Kernel Fisher Discriminant Human Face Recognton Usng Generalzed Kernel Fsher Dscrmnant ng-yu Sun,2 De-Shuang Huang Ln Guo. Insttute of Intellgent Machnes, Chnese Academy of Scences, P.O.ox 30, Hefe, Anhu, Chna. 2. Department of

More information

Scale Selective Extended Local Binary Pattern For Texture Classification

Scale Selective Extended Local Binary Pattern For Texture Classification Scale Selectve Extended Local Bnary Pattern For Texture Classfcaton Yutng Hu, Zhlng Long, and Ghassan AlRegb Multmeda & Sensors Lab (MSL) Georga Insttute of Technology 03/09/017 Outlne Texture Representaton

More information

Support Vector Machines

Support Vector Machines /9/207 MIST.6060 Busness Intellgence and Data Mnng What are Support Vector Machnes? Support Vector Machnes Support Vector Machnes (SVMs) are supervsed learnng technques that analyze data and recognze patterns.

More information

The Research of Support Vector Machine in Agricultural Data Classification

The Research of Support Vector Machine in Agricultural Data Classification The Research of Support Vector Machne n Agrcultural Data Classfcaton Le Sh, Qguo Duan, Xnmng Ma, Me Weng College of Informaton and Management Scence, HeNan Agrcultural Unversty, Zhengzhou 45000 Chna Zhengzhou

More information

R s s f. m y s. SPH3UW Unit 7.3 Spherical Concave Mirrors Page 1 of 12. Notes

R s s f. m y s. SPH3UW Unit 7.3 Spherical Concave Mirrors Page 1 of 12. Notes SPH3UW Unt 7.3 Sphercal Concave Mrrors Page 1 of 1 Notes Physcs Tool box Concave Mrror If the reflectng surface takes place on the nner surface of the sphercal shape so that the centre of the mrror bulges

More information

Fitting & Matching. Lecture 4 Prof. Bregler. Slides from: S. Lazebnik, S. Seitz, M. Pollefeys, A. Effros.

Fitting & Matching. Lecture 4 Prof. Bregler. Slides from: S. Lazebnik, S. Seitz, M. Pollefeys, A. Effros. Fttng & Matchng Lecture 4 Prof. Bregler Sldes from: S. Lazebnk, S. Setz, M. Pollefeys, A. Effros. How do we buld panorama? We need to match (algn) mages Matchng wth Features Detect feature ponts n both

More information

Classification of Face Images Based on Gender using Dimensionality Reduction Techniques and SVM

Classification of Face Images Based on Gender using Dimensionality Reduction Techniques and SVM Classfcaton of Face Images Based on Gender usng Dmensonalty Reducton Technques and SVM Fahm Mannan 260 266 294 School of Computer Scence McGll Unversty Abstract Ths report presents gender classfcaton based

More information

MULTISPECTRAL IMAGES CLASSIFICATION BASED ON KLT AND ATR AUTOMATIC TARGET RECOGNITION

MULTISPECTRAL IMAGES CLASSIFICATION BASED ON KLT AND ATR AUTOMATIC TARGET RECOGNITION MULTISPECTRAL IMAGES CLASSIFICATION BASED ON KLT AND ATR AUTOMATIC TARGET RECOGNITION Paulo Quntlano 1 & Antono Santa-Rosa 1 Federal Polce Department, Brasla, Brazl. E-mals: quntlano.pqs@dpf.gov.br and

More information

Multi-View Face Alignment Using 3D Shape Model for View Estimation

Multi-View Face Alignment Using 3D Shape Model for View Estimation Mult-Vew Face Algnment Usng 3D Shape Model for Vew Estmaton Yanchao Su 1, Hazhou A 1, Shhong Lao 1 Computer Scence and Technology Department, Tsnghua Unversty Core Technology Center, Omron Corporaton ahz@mal.tsnghua.edu.cn

More information

A Fast Content-Based Multimedia Retrieval Technique Using Compressed Data

A Fast Content-Based Multimedia Retrieval Technique Using Compressed Data A Fast Content-Based Multmeda Retreval Technque Usng Compressed Data Borko Furht and Pornvt Saksobhavvat NSF Multmeda Laboratory Florda Atlantc Unversty, Boca Raton, Florda 3343 ABSTRACT In ths paper,

More information

Facial Expression Recognition Using Sparse Representation

Facial Expression Recognition Using Sparse Representation Facal Expresson Recognton Usng Sparse Representaton SHIQING ZHANG, XIAOMING ZHAO, BICHENG LEI School of Physcs and Electronc Engneerng azhou Unversty azhou 38000 CHINA tzczsq@63.com, lebcheng@63.com Department

More information

Machine Learning 9. week

Machine Learning 9. week Machne Learnng 9. week Mappng Concept Radal Bass Functons (RBF) RBF Networks 1 Mappng It s probably the best scenaro for the classfcaton of two dataset s to separate them lnearly. As you see n the below

More information

A Modified Median Filter for the Removal of Impulse Noise Based on the Support Vector Machines

A Modified Median Filter for the Removal of Impulse Noise Based on the Support Vector Machines A Modfed Medan Flter for the Removal of Impulse Nose Based on the Support Vector Machnes H. GOMEZ-MORENO, S. MALDONADO-BASCON, F. LOPEZ-FERRERAS, M. UTRILLA- MANSO AND P. GIL-JIMENEZ Departamento de Teoría

More information

Discriminative classifiers for object classification. Last time

Discriminative classifiers for object classification. Last time Dscrmnatve classfers for object classfcaton Thursday, Nov 12 Krsten Grauman UT Austn Last tme Supervsed classfcaton Loss and rsk, kbayes rule Skn color detecton example Sldng ndo detecton Classfers, boostng

More information

Gender Classification using Interlaced Derivative Patterns

Gender Classification using Interlaced Derivative Patterns Gender Classfcaton usng Interlaced Dervatve Patterns Author Shobernejad, Ameneh, Gao, Yongsheng Publshed 2 Conference Ttle Proceedngs of the 2th Internatonal Conference on Pattern Recognton (ICPR 2) DOI

More information

Support Vector Machines

Support Vector Machines Support Vector Machnes Decson surface s a hyperplane (lne n 2D) n feature space (smlar to the Perceptron) Arguably, the most mportant recent dscovery n machne learnng In a nutshell: map the data to a predetermned

More information

Term Weighting Classification System Using the Chi-square Statistic for the Classification Subtask at NTCIR-6 Patent Retrieval Task

Term Weighting Classification System Using the Chi-square Statistic for the Classification Subtask at NTCIR-6 Patent Retrieval Task Proceedngs of NTCIR-6 Workshop Meetng, May 15-18, 2007, Tokyo, Japan Term Weghtng Classfcaton System Usng the Ch-square Statstc for the Classfcaton Subtask at NTCIR-6 Patent Retreval Task Kotaro Hashmoto

More information

Computer Animation and Visualisation. Lecture 4. Rigging / Skinning

Computer Animation and Visualisation. Lecture 4. Rigging / Skinning Computer Anmaton and Vsualsaton Lecture 4. Rggng / Sknnng Taku Komura Overvew Sknnng / Rggng Background knowledge Lnear Blendng How to decde weghts? Example-based Method Anatomcal models Sknnng Assume

More information

3D Virtual Eyeglass Frames Modeling from Multiple Camera Image Data Based on the GFFD Deformation Method

3D Virtual Eyeglass Frames Modeling from Multiple Camera Image Data Based on the GFFD Deformation Method NICOGRAPH Internatonal 2012, pp. 114-119 3D Vrtual Eyeglass Frames Modelng from Multple Camera Image Data Based on the GFFD Deformaton Method Norak Tamura, Somsangouane Sngthemphone and Katsuhro Ktama

More information

PCA Based Gait Segmentation

PCA Based Gait Segmentation Honggu L, Cupng Sh & Xngguo L PCA Based Gat Segmentaton PCA Based Gat Segmentaton Honggu L, Cupng Sh, and Xngguo L 2 Electronc Department, Physcs College, Yangzhou Unversty, 225002 Yangzhou, Chna 2 Department

More information

Simulation: Solving Dynamic Models ABE 5646 Week 11 Chapter 2, Spring 2010

Simulation: Solving Dynamic Models ABE 5646 Week 11 Chapter 2, Spring 2010 Smulaton: Solvng Dynamc Models ABE 5646 Week Chapter 2, Sprng 200 Week Descrpton Readng Materal Mar 5- Mar 9 Evaluatng [Crop] Models Comparng a model wth data - Graphcal, errors - Measures of agreement

More information

A Background Subtraction for a Vision-based User Interface *

A Background Subtraction for a Vision-based User Interface * A Background Subtracton for a Vson-based User Interface * Dongpyo Hong and Woontack Woo KJIST U-VR Lab. {dhon wwoo}@kjst.ac.kr Abstract In ths paper, we propose a robust and effcent background subtracton

More information

Combination of Local Multiple Patterns and Exponential Discriminant Analysis for Facial Recognition

Combination of Local Multiple Patterns and Exponential Discriminant Analysis for Facial Recognition Sensors & ransducers 203 by IFSA http://.sensorsportal.com Combnaton of Local Multple Patterns and Exponental Dscrmnant Analyss for Facal Recognton, 2 Lfang Zhou, 2 Bn Fang, 3 Wesheng L, 3 Ldou Wang College

More information

Efficient Segmentation and Classification of Remote Sensing Image Using Local Self Similarity

Efficient Segmentation and Classification of Remote Sensing Image Using Local Self Similarity ISSN(Onlne): 2320-9801 ISSN (Prnt): 2320-9798 Internatonal Journal of Innovatve Research n Computer and Communcaton Engneerng (An ISO 3297: 2007 Certfed Organzaton) Vol.2, Specal Issue 1, March 2014 Proceedngs

More information

Large-scale Web Video Event Classification by use of Fisher Vectors

Large-scale Web Video Event Classification by use of Fisher Vectors Large-scale Web Vdeo Event Classfcaton by use of Fsher Vectors Chen Sun and Ram Nevata Unversty of Southern Calforna, Insttute for Robotcs and Intellgent Systems Los Angeles, CA 90089, USA {chensun nevata}@usc.org

More information

Outline. Type of Machine Learning. Examples of Application. Unsupervised Learning

Outline. Type of Machine Learning. Examples of Application. Unsupervised Learning Outlne Artfcal Intellgence and ts applcatons Lecture 8 Unsupervsed Learnng Professor Danel Yeung danyeung@eee.org Dr. Patrck Chan patrckchan@eee.org South Chna Unversty of Technology, Chna Introducton

More information

WIRELESS CAPSULE ENDOSCOPY IMAGE CLASSIFICATION BASED ON VECTOR SPARSE CODING.

WIRELESS CAPSULE ENDOSCOPY IMAGE CLASSIFICATION BASED ON VECTOR SPARSE CODING. WIRELESS CAPSULE ENDOSCOPY IMAGE CLASSIFICATION BASED ON VECTOR SPARSE CODING Tao Ma 1, Yuexan Zou 1 *, Zhqang Xang 1, Le L 1 and Y L 1 ADSPLAB/ELIP, School of ECE, Pekng Unversty, Shenzhen 518055, Chna

More information

Fingerprint matching based on weighting method and SVM

Fingerprint matching based on weighting method and SVM Fngerprnt matchng based on weghtng method and SVM Ja Ja, Lanhong Ca, Pnyan Lu, Xuhu Lu Key Laboratory of Pervasve Computng (Tsnghua Unversty), Mnstry of Educaton Bejng 100084, P.R.Chna {jaja}@mals.tsnghua.edu.cn

More information

High-Boost Mesh Filtering for 3-D Shape Enhancement

High-Boost Mesh Filtering for 3-D Shape Enhancement Hgh-Boost Mesh Flterng for 3-D Shape Enhancement Hrokazu Yagou Λ Alexander Belyaev y Damng We z Λ y z ; ; Shape Modelng Laboratory, Unversty of Azu, Azu-Wakamatsu 965-8580 Japan y Computer Graphcs Group,

More information

Smoothing Spline ANOVA for variable screening

Smoothing Spline ANOVA for variable screening Smoothng Splne ANOVA for varable screenng a useful tool for metamodels tranng and mult-objectve optmzaton L. Rcco, E. Rgon, A. Turco Outlne RSM Introducton Possble couplng Test case MOO MOO wth Game Theory

More information

IMAGE MATCHING WITH SIFT FEATURES A PROBABILISTIC APPROACH

IMAGE MATCHING WITH SIFT FEATURES A PROBABILISTIC APPROACH IMAGE MATCHING WITH SIFT FEATURES A PROBABILISTIC APPROACH Jyot Joglekar a, *, Shrsh S. Gedam b a CSRE, IIT Bombay, Doctoral Student, Mumba, Inda jyotj@tb.ac.n b Centre of Studes n Resources Engneerng,

More information

12/2/2009. Announcements. Parametric / Non-parametric. Case-Based Reasoning. Nearest-Neighbor on Images. Nearest-Neighbor Classification

12/2/2009. Announcements. Parametric / Non-parametric. Case-Based Reasoning. Nearest-Neighbor on Images. Nearest-Neighbor Classification Introducton to Artfcal Intellgence V22.0472-001 Fall 2009 Lecture 24: Nearest-Neghbors & Support Vector Machnes Rob Fergus Dept of Computer Scence, Courant Insttute, NYU Sldes from Danel Yeung, John DeNero

More information

Comparing Image Representations for Training a Convolutional Neural Network to Classify Gender

Comparing Image Representations for Training a Convolutional Neural Network to Classify Gender 2013 Frst Internatonal Conference on Artfcal Intellgence, Modellng & Smulaton Comparng Image Representatons for Tranng a Convolutonal Neural Network to Classfy Gender Choon-Boon Ng, Yong-Haur Tay, Bok-Mn

More information

Facial Expression Recognition Based on Local Binary Patterns and Local Fisher Discriminant Analysis

Facial Expression Recognition Based on Local Binary Patterns and Local Fisher Discriminant Analysis WSEAS RANSACIONS on SIGNAL PROCESSING Shqng Zhang, Xaomng Zhao, Bcheng Le Facal Expresson Recognton Based on Local Bnary Patterns and Local Fsher Dscrmnant Analyss SHIQING ZHANG, XIAOMING ZHAO, BICHENG

More information

Determining the Optimal Bandwidth Based on Multi-criterion Fusion

Determining the Optimal Bandwidth Based on Multi-criterion Fusion Proceedngs of 01 4th Internatonal Conference on Machne Learnng and Computng IPCSIT vol. 5 (01) (01) IACSIT Press, Sngapore Determnng the Optmal Bandwdth Based on Mult-crteron Fuson Ha-L Lang 1+, Xan-Mn

More information

An Efficient Illumination Normalization Method with Fuzzy LDA Feature Extractor for Face Recognition

An Efficient Illumination Normalization Method with Fuzzy LDA Feature Extractor for Face Recognition www.mer.com Vol.2, Issue.1, pp-060-065 ISS: 2249-6645 An Effcent Illumnaton ormalzaton Meod w Fuzzy LDA Feature Extractor for Face Recognton Behzad Bozorgtabar 1, Hamed Azam 2 (Department of Electrcal

More information

MOTION BLUR ESTIMATION AT CORNERS

MOTION BLUR ESTIMATION AT CORNERS Gacomo Boracch and Vncenzo Caglot Dpartmento d Elettronca e Informazone, Poltecnco d Mlano, Va Ponzo, 34/5-20133 MILANO boracch@elet.polm.t, caglot@elet.polm.t Keywords: Abstract: Pont Spread Functon Parameter

More information

Lobachevsky State University of Nizhni Novgorod. Polyhedron. Quick Start Guide

Lobachevsky State University of Nizhni Novgorod. Polyhedron. Quick Start Guide Lobachevsky State Unversty of Nzhn Novgorod Polyhedron Quck Start Gude Nzhn Novgorod 2016 Contents Specfcaton of Polyhedron software... 3 Theoretcal background... 4 1. Interface of Polyhedron... 6 1.1.

More information

Modular PCA Face Recognition Based on Weighted Average

Modular PCA Face Recognition Based on Weighted Average odern Appled Scence odular PCA Face Recognton Based on Weghted Average Chengmao Han (Correspondng author) Department of athematcs, Lny Normal Unversty Lny 76005, Chna E-mal: hanchengmao@163.com Abstract

More information

NON-FRONTAL VIEW FACIAL EXPRESSION RECOGNITION BASED ON ERGODIC HIDDEN MARKOV MODEL SUPERVECTORS. Hao Tang, Mark Hasegawa-Johnson, Thomas Huang

NON-FRONTAL VIEW FACIAL EXPRESSION RECOGNITION BASED ON ERGODIC HIDDEN MARKOV MODEL SUPERVECTORS. Hao Tang, Mark Hasegawa-Johnson, Thomas Huang NON-FRONTAL VIEW FACIAL EXPRESSION RECOGNITION BASED ON ERGODIC HIDDEN MARKOV MODEL SUPERVECTORS Hao Tang, Mark Hasegawa-Johnson, Thomas Huang Department of Electrcal and Computer Engneerng Unversty of

More information

Corner-Based Image Alignment using Pyramid Structure with Gradient Vector Similarity

Corner-Based Image Alignment using Pyramid Structure with Gradient Vector Similarity Journal of Sgnal and Informaton Processng, 013, 4, 114-119 do:10.436/jsp.013.43b00 Publshed Onlne August 013 (http://www.scrp.org/journal/jsp) Corner-Based Image Algnment usng Pyramd Structure wth Gradent

More information

On Modeling Variations For Face Authentication

On Modeling Variations For Face Authentication On Modelng Varatons For Face Authentcaton Xaomng Lu Tsuhan Chen B.V.K. Vjaya Kumar Department of Electrcal and Computer Engneerng, Carnege Mellon Unversty Abstract In ths paper, we present a scheme for

More information

What is Object Detection? Face Detection using AdaBoost. Detection as Classification. Principle of Boosting (Schapire 90)

What is Object Detection? Face Detection using AdaBoost. Detection as Classification. Principle of Boosting (Schapire 90) CIS 5543 Coputer Vson Object Detecton What s Object Detecton? Locate an object n an nput age Habn Lng Extensons Vola & Jones, 2004 Dalal & Trggs, 2005 one or ultple objects Object segentaton Object detecton

More information

The Research of the Facial Expression Recognition Method for Human-Computer Interaction Based on the Gabor Features of the Key Regions

The Research of the Facial Expression Recognition Method for Human-Computer Interaction Based on the Gabor Features of the Key Regions Sensors & Transducers, Vol. 77, Issue 8, August 04, pp. 56-6 Sensors & Transducers 04 by IFSA Publshng, S. L. http://www.sensorsportal.com The Research of the Facal Expresson Recognton Method for Human-Computer

More information

BOOSTING CLASSIFICATION ACCURACY WITH SAMPLES CHOSEN FROM A VALIDATION SET

BOOSTING CLASSIFICATION ACCURACY WITH SAMPLES CHOSEN FROM A VALIDATION SET 1 BOOSTING CLASSIFICATION ACCURACY WITH SAMPLES CHOSEN FROM A VALIDATION SET TZU-CHENG CHUANG School of Electrcal and Computer Engneerng, Purdue Unversty, West Lafayette, Indana 47907 SAUL B. GELFAND School

More information

MOTION PANORAMA CONSTRUCTION FROM STREAMING VIDEO FOR POWER- CONSTRAINED MOBILE MULTIMEDIA ENVIRONMENTS XUNYU PAN

MOTION PANORAMA CONSTRUCTION FROM STREAMING VIDEO FOR POWER- CONSTRAINED MOBILE MULTIMEDIA ENVIRONMENTS XUNYU PAN MOTION PANORAMA CONSTRUCTION FROM STREAMING VIDEO FOR POWER- CONSTRAINED MOBILE MULTIMEDIA ENVIRONMENTS by XUNYU PAN (Under the Drecton of Suchendra M. Bhandarkar) ABSTRACT In modern tmes, more and more

More information

UB at GeoCLEF Department of Geography Abstract

UB at GeoCLEF Department of Geography   Abstract UB at GeoCLEF 2006 Mguel E. Ruz (1), Stuart Shapro (2), June Abbas (1), Slva B. Southwck (1) and Davd Mark (3) State Unversty of New York at Buffalo (1) Department of Lbrary and Informaton Studes (2) Department

More information

Robust Shot Boundary Detection from Video Using Dynamic Texture

Robust Shot Boundary Detection from Video Using Dynamic Texture Sensors & Transducers 204 by IFSA Publshng, S. L. http://www.sensorsportal.com Robust Shot Boundary Detecton from Vdeo Usng Dynamc Teture, 3 Peng Tale, 2 Zhang Wenjun School of Communcaton & Informaton

More information

Subspace clustering. Clustering. Fundamental to all clustering techniques is the choice of distance measure between data points;

Subspace clustering. Clustering. Fundamental to all clustering techniques is the choice of distance measure between data points; Subspace clusterng Clusterng Fundamental to all clusterng technques s the choce of dstance measure between data ponts; D q ( ) ( ) 2 x x = x x, j k = 1 k jk Squared Eucldean dstance Assumpton: All features

More information

X- Chart Using ANOM Approach

X- Chart Using ANOM Approach ISSN 1684-8403 Journal of Statstcs Volume 17, 010, pp. 3-3 Abstract X- Chart Usng ANOM Approach Gullapall Chakravarth 1 and Chaluvad Venkateswara Rao Control lmts for ndvdual measurements (X) chart are

More information

RECOGNIZING GENDER THROUGH FACIAL IMAGE USING SUPPORT VECTOR MACHINE

RECOGNIZING GENDER THROUGH FACIAL IMAGE USING SUPPORT VECTOR MACHINE Journal of Theoretcal and Appled Informaton Technology 30 th June 06. Vol.88. No.3 005-06 JATIT & LLS. All rghts reserved. ISSN: 99-8645 www.jatt.org E-ISSN: 87-395 RECOGNIZING GENDER THROUGH FACIAL IMAGE

More information

Collaboratively Regularized Nearest Points for Set Based Recognition

Collaboratively Regularized Nearest Points for Set Based Recognition Academc Center for Computng and Meda Studes, Kyoto Unversty Collaboratvely Regularzed Nearest Ponts for Set Based Recognton Yang Wu, Mchhko Mnoh, Masayuk Mukunok Kyoto Unversty 9/1/013 BMVC 013 @ Brstol,

More information

A PATTERN RECOGNITION APPROACH TO IMAGE SEGMENTATION

A PATTERN RECOGNITION APPROACH TO IMAGE SEGMENTATION 1 THE PUBLISHING HOUSE PROCEEDINGS OF THE ROMANIAN ACADEMY, Seres A, OF THE ROMANIAN ACADEMY Volume 4, Number 2/2003, pp.000-000 A PATTERN RECOGNITION APPROACH TO IMAGE SEGMENTATION Tudor BARBU Insttute

More information

Comparison Study of Textural Descriptors for Training Neural Network Classifiers

Comparison Study of Textural Descriptors for Training Neural Network Classifiers Comparson Study of Textural Descrptors for Tranng Neural Network Classfers G.D. MAGOULAS (1) S.A. KARKANIS (1) D.A. KARRAS () and M.N. VRAHATIS (3) (1) Department of Informatcs Unversty of Athens GR-157.84

More information