Pattern Recognition Letters
Pattern Recognition Letters 29 (2008)

Contents lists available at ScienceDirect

3D face recognition by constructing deformation invariant image

Li Li a,*, Chenghua Xu b, Wei Tang a, Cheng Zhong c

a College of Information and Electrical Engineering, China Agricultural University, P.O. Box 62, Beijing, PR China
b Nufront Software Corporation, Beijing, PR China
c National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, PR China

Article history: Received 17 June 2007; received in revised form 16 March 2008; available online 10 April 2008. Communicated by N. Pears.

Keywords: 3D face recognition; Facial expression; Isometric transformation; Geodesic distance; Biometrics

Abstract

Based on the observation that facial surfaces across different expressions can be modeled as similar isometric transformations, in this paper a novel deformation invariant image for robust 3D face recognition is proposed. First, we obtain the depth image and the intensity image from the original 3D facial data. Then, geodesic level curves are generated by constructing a radial geodesic distance image from the depth image. Finally, the deformation invariant image is constructed by evenly sampling points from the selected geodesic level curves in the intensity image. Our experiments are based on the 3D CASIA Face Database, which includes 123 individuals with complex expressions. Experimental results show that our proposed method substantially improves the recognition performance under various facial expressions. © 2008 Elsevier B.V. All rights reserved.

1. Introduction

Face recognition is one of the most active research issues in the fields of pattern recognition and multimedia. The research significantly contributes to the improvement of computer intelligence. Over the past several decades, most work has focused on 2D images (Zhao et al., 2003).
Due to the complexity of face recognition, it is still hard to develop a robust automatic face recognition system. The difficulties mainly include the complex variations of pose, expression, illumination and ageing. According to the evaluation of commercially available and mature prototype face recognition systems provided by the face recognition vendor tests (FRVT) (Phillips et al., 2007), recognition performance under unconstrained conditions is not satisfactory. In this paper, we attempt to realize 3D face recognition robust to expression variations.

In fact, the human face contains not only 2D texture information but also 3D shape information, and recognition using 2D images alone loses some of it. An alternative idea is to represent the face or head as a realistic 3D model, which contains not only texture and shape information, but also structural information for simulating facial expressions. Furthermore, some techniques in computer graphics can be explored to simulate facial variations, such as expressions, ageing and hair style, which provide an ideal way to identify a changing individual.

With the rapid development of 3D acquisition equipment, face recognition based on 3D information is attracting more and more attention (Bowyer et al., 2006). In 3D face recognition, depth information and surface features are used to characterize an individual. This is a promising way to understand human facial features in 3D space and to improve the performance of current face recognition systems. Moreover, some current 3D sensors can simultaneously obtain texture and depth information, enabling multi-modal recognition (Wang et al., 2002; Chang et al., 2005a; Tsalakanidou et al., 2005). This makes 3D face recognition a promising solution for overcoming the difficulties of 2D face recognition.

* Corresponding author. E-mail address: lilixch@yahoo.com.cn (L. Li). Partially supported by research funds from the NSFC and the National 973 Program (Grant No. 2004CB318100).
Variations in illumination, expression and pose are the main factors influencing face recognition performance. For 3D face recognition, illumination variations do not influence recognition performance much. This is not strange, since the original facial data are usually captured by a laser scanner, which is robust to illumination variations. Pose variations affect recognition performance only slightly, because registration methods can accurately align the face data, thus reducing their influence. Expression variations, however, greatly affect recognition performance: they distort the facial surface and change the facial texture. Moreover, expression variations also increase the registration error. Expression variation is thus one of the most difficult factors in 3D face recognition.
There also exist some attempts to overcome expression variation in the 3D face recognition field. The facial surface varies differently during expression changes: some regions deform largely and others little. Chang et al. (2005b, 2006) divided the whole facial surface into sub-regions; the rigid regions around the nose area were matched and combined to perform recognition. Their method encountered the problem that it is hard to efficiently determine the rigid regions across expression changes.

Building a deformable 3D face model is another way to simulate facial expression. Lu and Jain (2006) extracted the deformation from a controlled group of face data. The extracted deformation was then transferred to and synthesized for the neutral models in the gallery, yielding deformed templates with expressions; recognition was performed by comparing the tested face data with the deformed templates. Passalis et al. (2005) and Kakadiaris et al. (2007) used an annotated face model to fit the changed facial surface, and then obtained the deformation image from the fitted model. A multi-stage alignment algorithm and advanced wavelet analysis resulted in robust performance. In these studies, the main problem has been how to build a parameterized 3D model from optical or range images, which is not well solved.

Facial expression deforms the facial surface in a certain way, which can be exploited in face recognition. Bronstein et al. (2003, 2007) represented a facial surface based on geometric invariants to isometric deformations and realized multi-modal recognition by integrating flattened textures and canonical images, which was robust to some expression variations. Mpiperis et al. (2007) proposed a geodesic polar representation of the facial surface. This representation tried to describe the invariant properties for face recognition under isometric deformations of the facial surface.
Face matching was performed with surface attributes defined on the geodesic plane. Also based on the assumption of isometric transformation, Pears and Heseltine (2006) proposed a novel representation called isoradius contours for 3D face registration and matching. In their algorithm, an isoradius contour was extracted on the 3D facial surface at a known fixed Euclidean distance from a predefined reference point (the tip of the nose). Based on these contours, they achieved promising 3D face registration results.

Empirical observations (Pears and Heseltine, 2006; Mpiperis et al., 2007; Bronstein et al., 2003, 2007) show that facial expressions can be considered as isometric transformations, which do not stretch the surface and preserve the surface metric. In this paper, we propose a new representation, the deformation invariant image, based on the radial geodesic distance, to realize face recognition robust to expressions. First, we obtain the depth image and the intensity image from the original 3D facial data. Then, geodesic level curves are generated by constructing a radial geodesic distance image from the depth image. Finally, the deformation invariant image is constructed by evenly sampling points from the selected geodesic level curves in the intensity image. Familiar classification methods can then be used to build the recognition system.

Different from the previous work (Pears and Heseltine, 2006; Mpiperis et al., 2007; Bronstein et al., 2003, 2007), our major contribution in this paper is the radial geodesic distance, a distance measure similar to the geodesic distance. Based on it, a novel approach using deformation invariant images (DII) is proposed for extracting features robust across expression changes. It provides an effective way to combine the shape and the texture information in the 3D face. The most important advantages of this method are its practicability and computational efficiency.
Our experimental results have also demonstrated the excellent performance of this approach in 3D face recognition.

The remaining part of this paper is organized as follows. Section 2 briefly summarizes our preprocessing. Section 3 describes how to calculate the radial geodesic distance and how to construct the deformation invariant image. In Section 4, the method for dimension reduction and similarity measurement is introduced. Section 5 reports the experimental results and makes some comparisons with existing methods. Finally, Section 6 presents the conclusions of our study.

2. Preprocessing

The 3D data in our study consist of a range image with texture information, as shown in Fig. 6. In this section, we preprocess these original 3D data to obtain the normalized depth and intensity images. The flowchart of preprocessing is shown in Fig. 1.

2.1. Registration

First, we detect the nose tip in the range image. Different from 2D color or intensity images, the nose is the most distinct feature in a facial range image. The method by Xu et al. (2006b) is used to localize the nose tip. This algorithm utilizes two local surface properties, i.e. a local surface feature and a local statistical feature. It is fully automated, robust to noisy and incomplete input data, immune to rotation and translation, and suitable for different resolutions. In existing work (Chang et al., 2005a), other feature points, such as eye corners, are also considered. There are two disadvantages to this scheme. One is that one or more feature points are invisible in some images with large pose variations. The other is that the regions of the eyes and mouth usually contain outliers and holes, making it hard to robustly detect the feature points. For these two reasons, the feature points are usually marked manually in the existing work. In our study, only the nose tip, rather than other feature points, is used. Alignment is performed according to the method by Xu et al. (2004).
A frontal 3D image with neutral expression is selected as the fixed model, and all the other 3D images are rotated and translated to align with it. Based on the detected nose tip, all the range images are translated and coarsely aligned together. Then, the classic iterative closest point (ICP) algorithm (Besl and Mckay, 1992) is used to further register them. The facial image usually contains local deformation, such as expression variations; strictly, the transformation among different scans is non-rigid. During registration, we therefore only consider the rectangular region from the nose tip to the eyes in the vertical orientation and between the two outer corners of the eyes in the horizontal orientation. This region changes little across expression variations, so the registration avoids the undesirable influences of the mouth and jaw.

2.2. Depth and intensity images

Depth and intensity images are obtained from the registered 3D data. The 3D data being processed have the same physical size as the real subjects. We use a rectangle of fixed size (in mm) centered at the detected nose tip to crop the original 3D data.

Fig. 1. Flowchart of preprocessing.
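The two-stage registration described in Section 2.1 (coarse alignment via the detected nose tip, then ICP refinement) can be sketched as below. This is a minimal illustration under our own assumptions, not the authors' implementation: `icp_align` is a hypothetical helper, it assumes two roughly pre-aligned N x 3 point clouds, and it uses brute-force nearest neighbors for clarity.

```python
import numpy as np

def icp_align(src, dst, iters=20):
    """Rigidly align src (Nx3) to dst (Mx3); returns (R, t, aligned_src)."""
    src = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # 1. Closest-point correspondences (brute force for clarity).
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[np.argmin(d2, axis=1)]
        # 2. Best rigid transform via SVD (orthogonal Procrustes).
        mu_s, mu_d = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = mu_d - R @ mu_s
        src = src @ R.T + t
        # Accumulate the total rigid transform.
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, src
```

In practice the paper restricts the matched region to the rectangle between the nose tip and the eyes, which would correspond to masking `src` before calling such a routine.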
The cropped region is then converted to a depth block and an intensity block of fixed pixel resolution. The depth and intensity blocks are processed by the following operations. Laser scanners usually produce holes in certain areas, such as the eyes and nostrils; these holes in the depth and intensity blocks are filled by interpolation from neighboring pixels. White noise in the blocks is removed by smoothing filters. Additionally, histogram equalization is applied to the intensity images to reduce the influence of illumination variation. Finally, an elliptic mask is applied to the depth image (see Fig. 2b) and the intensity image (see Fig. 2c), respectively.

Laser scanners usually capture the color image and the 3D data at the same time, so the above intensity images could also be obtained from the 2D color images corresponding to the 3D data. Although the holes and noise in the 3D data could then be avoided, there are two disadvantages. The first is that it is difficult to rectify pose variations out of the image plane. The second is the additional computational cost of detecting feature points (e.g. the eyes) to align the 2D images. Thus, in our work, the intensity images are generated from the aligned 3D data instead of from the original 2D color images.

3. Expression invariant images

On the facial surface, the geodesic distance between two points is approximately invariant across expression variations. Facial deformation can be described by an isometric model, which has been shown to be effective in the previous work by Bronstein et al. (2007). According to this assumption, we propose the deformation invariant image to realize expression-invariant recognition. The basic idea of the deformation invariant image is to keep the corresponding pixels in different images describing the same fixed position on the real face under changing expressions, as illustrated in Fig. 3.
In this figure, facial surfaces without and with expression are each depicted by a single curve, as shown in Fig. 3a and d, respectively. O and P are two points on the facial surface. The horizontal distance between them changes from d1 to d2 across the expression, while the geodesic distance between them, g, is invariant. Traditional images describe the face according to the horizontal distances d1 and d2, as shown in Fig. 3b and e; thus, the corresponding pixels in different images represent different regions of the face. Our proposed image describes the face according to the geodesic distance, as shown in Fig. 3c and f; the corresponding pixels in the images describe the same region of the face across expression changes. To construct the DII, solutions to the following two problems are proposed: how to calculate the geodesic distance quickly, and how to construct the DII. They are described below.

Fig. 3. Face description without and with expression. (a) Face shape without expression; (b) traditional image without expression; (c) deformation invariant image without expression; (d) face shape with expression; (e) traditional image with expression; (f) deformation invariant image with expression.

3.1. Radial geodesic distance

As described above, empirical observations show that facial expressions can be considered as isometric transformations, which neither stretch nor tear the surface. The length between two points on the surface is therefore preserved as an invariant. In the 3D facial data, the tip of the nose can be detected reliably. Therefore, we regard this point as the reference point and calculate the geodesic distance from it to all other points. Since all the points are centered around the nose tip and the geodesic distance is measured in the radial direction, it is called the radial geodesic distance. How to compute this kind of distance is described next. The surface curve between two given points can be described as follows:

I(t) = (x(t), y(t), z(t))    (1)
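For a curve sampled at discrete points, the geodesic length defined by the integral in Eq. (2) reduces to a sum of chord lengths between consecutive samples. A small illustrative sketch (the function name is ours):

```python
import numpy as np

def curve_length(points):
    """Approximate geodesic length of a sampled curve.

    points: Nx3 array of (x, y, z) samples along the curve;
    the length is the sum of Euclidean distances between
    consecutive samples (chord lengths).
    """
    return float(np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1)))
```

As the sampling density increases, the chord-length sum converges to the integral value, e.g. a finely sampled unit circle yields a length close to 2*pi.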
Fig. 2. Face normalization. (a) Original 3D data; (b) depth image; (c) intensity image.

where x(t) and y(t) refer to the position in the X-Y plane and z(t) refers to the corresponding depth value. The geodesic length of this curve can be computed as follows:

d = ∫_a^b sqrt(x_t^2 + y_t^2 + z_t^2) dt    (2)

where the subscripts denote derivatives, e.g. x_t = dx/dt. Since the facial surface has been normalized into the depth image following Section 2, it is described by discrete image pixels. The radial geodesic distance can therefore be approximated by the sum of the lengths of line segments embedded in the facial surface. In detail, given two points in the depth image, (i_b, j_b) and (i_e, j_e), the radial geodesic distance from (i_b, j_b) to (i_e, j_e) can be calculated as follows:
Step 1: Calculate the line function L in the X-Y plane determined by the two points (i_b, j_b) and (i_e, j_e); set sum = 0.
Step 2: For i = i_b + 1 to i_e:
(1) Find the two pixels (i, x) and (i-1, y) in the ith and (i-1)th rows, respectively, that have the minimum Euclidean distance to the line L.
(2) sum = sum + sqrt(a + a(x - y)^2 + (1 - a)(z(i, x) - z(i-1, y))^2)
(3) i = i + 1; go to Step 2.

Here a is a weight used to balance the depth value against the pixel coordinates. If the depth value is normalized into the range between 0 and 1, the recommended value for a is 1/w, where w is the width of the image. Different from the original geodesic distance (Bronstein et al., 2007), which usually requires solving the Eikonal equation (Tsai et al., 2003), the radial geodesic distance only sums the lengths of line segments; its computational cost is therefore very low.

3.2. Deformation invariant image

In this section, we describe the DII based on the radial geodesic distance. As shown in Section 2, the nose tip has been located in the normalized depth image, so the radial geodesic distance between the nose tip and any other pixel can be computed. The computation proceeds outward along an emissive (radial) pattern, as shown in Fig. 4. Thus, a geodesic distance image is obtained, as shown in Fig. 5b, where darker intensities mean smaller distances. Further, we can obtain the geodesic level curves, each of which consists of pixels with the same radial geodesic distance to the nose tip. Two neighboring level curves may differ by a uniform increment of geodesic distance or by a logarithmic one; in our study, the uniform increment is used.
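Our reading of Steps 1 and 2 of Section 3.1 can be sketched as follows. The function and variable names are ours; `z` is assumed to be a depth image normalized to [0, 1], and the nearest pixel to the line L in each row is taken to be the rounded column of L at that row.

```python
import numpy as np

def radial_geodesic_distance(z, start, end, alpha=None):
    """Sum of weighted segment lengths along the line from start to end.

    z: 2D depth image with values in [0, 1];
    start, end: (row, col) pixel coordinates;
    alpha: weight balancing coordinates vs. depth (default 1/width).
    """
    (ib, jb), (ie, je) = start, end
    if alpha is None:
        alpha = 1.0 / z.shape[1]
    if ie == ib:  # degenerate case: both points in the same row
        return 0.0
    # Column of the straight line L through start/end at row i.
    col = lambda i: jb + (je - jb) * (i - ib) / (ie - ib)
    step = 1 if ie > ib else -1
    total = 0.0
    for i in range(ib + step, ie + step, step):
        x = int(round(col(i)))         # nearest pixel to L in row i
        y = int(round(col(i - step)))  # nearest pixel to L in row i-1
        dz = z[i, x] - z[i - step, y]
        total += np.sqrt(alpha + alpha * (x - y) ** 2 + (1 - alpha) * dz ** 2)
    return float(total)
```

On a flat surface the distance reduces to the number of row steps times sqrt(alpha), and any depth variation along the path strictly increases it, consistent with an embedded path length.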
Fig. 5c shows the geodesic level curves. Since the elliptical mask is used, some level curves end along the elliptical edge. We apply the level curves to the intensity image that corresponds to the depth image, as shown in Fig. 5e. The level curves determine the sampling positions in the intensity image. The sampled pixels form a new image representation, which is called the deformation invariant image. It is noted that different depth images of the same person have different geodesic level curves due to expression variations. Different level curves determine different sampling positions in the intensity images. Under the assumption of isometric deformation, these sampling positions compensate for the deformation caused by expressions in the intensity images.

Fig. 4. Emissive shape for computing geodesic distance.

Fig. 5. Deformation invariant image. (a) Depth image; (b) geodesic distance image; (c) geodesic level curves; (d) intensity image; (e) sampling positions in the intensity image using geodesic level curves.

In a classic recognition system, the image is usually represented by one vector. Here, the deformation invariant image is also converted into one vector. Different vectors from different images should have the same dimensionality and corresponding components. To meet these requirements, a sampling rule is fixed for all images: the intensity image is sampled along each curve at fixed angular intervals. Different level curves have different numbers of sampling positions and sampling angle steps, which are typically computed as follows:

num_i = 2^(i+1)    (3)
step_i = 360 / num_i    (4)
where i indexes the ith level curve from the nose tip, and num_i and step_i are the number of sampling positions and the sampling angle step of the ith level curve, respectively. When sampling each level curve, we begin from the vertical direction and pick up the pixel values at the sampling positions determined by the sampling angle, proceeding clockwise. The sampled values are saved into one vector in turn. The obtained vector, called the deformation invariant feature vector, is used for recognition.

The whole process is depicted in Fig. 5. First, the geodesic distance image is obtained from the depth image, and the geodesic level curves are computed. The intensity image is then sampled using the level curves, and the deformation invariant image is constructed. Finally, the deformation invariant image is converted into one vector for recognition. In effect, the same position in different deformation invariant images corresponds to intensity pixels that have the same radial geodesic distance to the nose tip. This representation is invariant to facial surface deformation and is expected to be robust to expression variations.

4. Classifier construction

The original 3D data are represented with the DII, which is further described by one deformation invariant feature vector, according to the methods in Sections 2 and 3. To reduce the computational cost and improve the recognition performance, linear discriminant analysis (LDA) (Belhumeur et al., 1997) is employed to obtain a lower-dimensional vector. LDA minimizes the within-class distance while maximizing the between-class distance; its excellent performance in recognition has been proved in the literature. The optimal discriminant vectors constructing the LDA subspace are computed by solving the following criterion in the standard LDA algorithm (Belhumeur et al., 1997):

W* = argmax_W J(W) = |W^T S_B W| / |W^T S_W W|    (5)

where S_B and S_W are the between-class and within-class scatter matrices, respectively.
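The sampling rule of Eqs. (3) and (4) can be sketched as follows. This is our own minimal reading, with hypothetical names: for each level curve, the code marches outward along each ray from the nose tip until the radial geodesic distance image `G` reaches the level value, then records the intensity there.

```python
import numpy as np

def sample_dii(G, I, nose, n_curves, delta):
    """Sample the intensity image I along geodesic level curves of G.

    G: radial geodesic distance image (distance to the nose tip);
    I: intensity image of the same shape;
    nose: (row, col) of the nose tip; delta: distance between level curves.
    Returns the deformation invariant feature vector.
    """
    ci, cj = nose
    feats = []
    for i in range(n_curves):
        level = (i + 1) * delta
        num_i = 2 ** (i + 1)        # Eq. (3)
        step_i = 360.0 / num_i      # Eq. (4)
        for k in range(num_i):
            ang = np.deg2rad(k * step_i)
            # Clockwise from vertical: direction (-cos, sin) in (row, col).
            di, dj = -np.cos(ang), np.sin(ang)
            r = 0.0
            while True:             # march outward until G reaches the level
                r += 0.5
                pi = int(round(ci + r * di))
                pj = int(round(cj + r * dj))
                if not (0 <= pi < G.shape[0] and 0 <= pj < G.shape[1]):
                    feats.append(0.0)   # level curve leaves the mask/image
                    break
                if G[pi, pj] >= level:
                    feats.append(float(I[pi, pj]))
                    break
    return np.asarray(feats)
```

With n_curves level curves the vector length is 2^(n_curves+1) - 2, the same for every image, which satisfies the fixed-dimensionality requirement above.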
Following the solution in Belhumeur et al. (1997), we obtain N optimal discriminant vectors using the training set of the CASIA 3D Face Database (see Section 5.1). In our training set, there are 23 persons. The effective number of discriminant vectors is the number of persons minus one, i.e. N = 22. Thus, each deformation invariant feature vector is transformed into a 22-dimensional vector by these optimal discriminant vectors. The similarity between two lower-dimensional feature vectors is measured with the Mahalanobis cosine distance (Yambor et al., 2000), which is the cosine distance between two feature vectors after they have been normalized by the variance estimates. It has been shown that the Mahalanobis cosine distance metric outperforms other metrics, such as the Euclidean distance and the L1 norm (Chang et al., 2005a). It is noted that our focus is to validate the separability of the proposed features, so only simple classifiers are used; more sophisticated classifiers could be used to improve the recognition accuracy.

5. Experiments

Our proposed DII is evaluated in terms of its robustness under different expressions and its efficiency for 3D face recognition.

5.1. Database

A large 3D face database, the 3D CASIA face database (Xu et al., 2006a), is used to test the proposed algorithm. The 3D images were collected indoors during August and September 2004 using a non-contact 3D digitizer, the Minolta VIVID 910. This database contains 123 subjects; each subject has not only separate variations of expression, pose and illumination, but also combined variations of expressions under different lighting and poses under different expressions. Some examples with expression variations are shown in Fig. 6. These variations provide a platform on which the performance of 3D face recognition can be investigated under different conditions. We use 1353 images from this database (11 images for each person) in our experiments.
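The matching stage of Section 4 (Fisher LDA projection followed by the Mahalanobis cosine distance) can be sketched as below. This is a minimal illustration with our own names, not the paper's code; the per-dimension standard deviations of the projected training data play the role of the variance estimates used for normalization.

```python
import numpy as np

def fit_lda(X, y, n_comp):
    """Fisher LDA: X is NxD, y holds class labels; returns D x n_comp W."""
    mu = X.mean(0)
    Sw = np.zeros((X.shape[1], X.shape[1]))
    Sb = np.zeros_like(Sw)
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(0)
        Sw += (Xc - mc).T @ (Xc - mc)                 # within-class scatter
        Sb += len(Xc) * np.outer(mc - mu, mc - mu)    # between-class scatter
    # Solve the generalized eigenproblem Sb w = lambda Sw w.
    evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-evals.real)[:n_comp]
    return evecs[:, order].real

def mahcosine(u, v, sigma):
    """Cosine distance after per-dimension normalization by std estimates."""
    u, v = u / sigma, v / sigma
    return 1.0 - float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
```

For C classes at most C - 1 discriminant directions are effective, matching the N = 22 dimensions reported for the 23-person training set.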
These images are divided into three subsets: the training set, the gallery set and the probe set. The training set contains 253 images, corresponding to the last 23 of the 123 subjects, with 11 images per person. The gallery set contains 100 images, the first image of each of the other 100 subjects (under the conditions of frontal view, office lighting and neutral expression). The other 1000 images from these 100 subjects are used as the probe set. The probe set is further divided into four subsets:

- EV probe set (200 images): closed-mouth expression variations, such as anger and eyes closed.
- EVO probe set (300 images): opened-mouth expression variations, such as smile, laugh and surprise.
- EVI probe set (200 images): closed-mouth expression variations under side lighting, such as anger and eyes closed.
- EVIO probe set (300 images): opened-mouth expression variations under side lighting, such as smile, laugh and surprise.

The EV and EVI probe sets include facial expressions with a closed mouth, and the EVO and EVIO probe sets include expressions with an opened mouth. They yield different recognition performances, as illustrated in the following experiments.

5.2. Identification performance under expressions

We perform experiments to test our method and compare its performance with other schemes.

Fig. 6. Expression variations in the 3D CASIA face database.
6 L. Li et al. / attern Reconition Letters 29 (2008) the raw depth imaes, the raw intensity imaes and the fusion of them. In Section 2, the depth imae and intensity imae have been obtained. Here, raw depth imaes and raw intensity imaes are used to characterize the people, respectively. LDA alorithm (Belhumur et al., 1997) is used for reducin dimensionality since it is widely applied in face reconition. The similarity measure adopts Mahalanobis cosine distance. The trainin set described in the last section is used to create the face subspace for reconition. For each testin pair of the probe set vs. the allery set, the rankone reconition accuracy is computed and the results are listed in Table 1. In another experiment, we use the weihted sum rule (Chan et al., 2005a) to fuse the scores from two above sinle classifiers, depth + LDA and intensity + LDA. The results are shown in the fourth column (named Fusion ) of Table 1. Here other fusion rules have also been used, such as max, min, product and mean, but they have worse performance than weihted sum rule. Our proposed scheme is also tested and the results are shown in the last column of Table 1. The cumulative match score (CMS curves) in the EV probe set is shown in Fi. 7. From these results, the followin useful conclusions can be drawn: Expression variations reatly influence the face reconition performance, which has also been observed in previous work. The fusion of depth and intensity features can improve the reconition performance to a certain extension. Our proposed scheme consistently outperforms the other schemes, such as only depth, intensity and the fusion of them, in four probe sets. Table 1 Rank-one reconition accuracy (%) in different probe sets robe set Depth Intensity Fusion Ours EV EVO EVI EVIO In the probe sets with close-mouth expression, our method has better performance than that in the probe sets with the openmouth expression. 
This can be explained by the fact that closed-mouth expressions can be well modeled as isometric transformations while opened-mouth expressions cannot. Illumination variations greatly influence the recognition accuracy when intensity information is used, as in traditional face recognition; however, even in this case, our proposed method still outperforms the fusion scheme.

5.3. Comparisons and discussion

The work most similar to the DII is that of Bronstein et al. (2007). They used the isometric deformation of the facial surface to obtain the canonical image, which is robust to some expression variations. The main difference from our work is that they embedded the facial surface into another space by the MDS technique and constructed spherical canonical images, with recognition completed using the canonical images. Their method (Bron07) has been implemented, and the results have been compared with ours. Table 2 shows the rank-one recognition accuracy on the different probe sets. It can be observed that their scheme provides a solution for removing the influence of the opening mouth. Unfortunately, their method requires the mouth contour, which is manually marked; in our implementation, this solution is not included. From the results in Table 2, it can be seen that our method performs similarly to theirs. However, their embedding process is iterative and has a large computational cost, while our method obtains the deformation invariant image from the radial geodesic distance level curves, which can be computed efficiently. On our PC with a 3.0 GHz CPU and 1.0 GB RAM, it takes about 6 s to process one 3D facial scan with Bronstein's method, and only 3 s with ours.

In our proposed scheme, several factors influence the recognition accuracy. The accuracy of the radial geodesic level curves affects the recognition performance greatly, and noise is one important factor that worsens the level set.
Although some methods have been adopted to smooth the images, we are still considering better ways. In some previous work (Mpiperis et al., 2007; Bronstein et al., 2007), the facial variation is considered as an isometric transformation, in which the geodesic distance between two points is preserved. In our work, we use a similar distance, the radial geodesic distance. However, it is not strictly preserved under isometric deformation of the surface. Our proposed method, the deformation invariant image, works well because the radial geodesic distance is robust to the specific isometric-like deformations of the face. In fact, both the radial geodesic distance and the original geodesic distance depend on the assumption of isometric transformation, which is a rather coarse approximation for the facial surface. Expression variations without an opening mouth can be well modeled as isometric transformations; variations with an opening mouth tear the surface and cannot be considered isometric deformations. This also explains why the rank-one recognition accuracy is lower in the EVO and EVIO probe sets than in the EV and EVI sets.

Table 2. Rank-one recognition accuracy (%) for comparison in different probe sets (columns: Probe sets, Bron07, Ours; rows: EV, EVO, EVI, EVIO).

Fig. 7. CMS curves on the EV probe set of the CASIA 3D face database.
Thus, a better transformation needs to be developed to more accurately approximate the expression variation; this issue will be studied in our future work.

6. Conclusions

In this paper, we have proposed the DII method for robust 3D face recognition under expression variations. The robustness of the DII rests on the observation that facial surfaces across different expressions can be modeled as similar isometric transformations. Experimental results illustrate the efficiency of our proposed DII, which gives substantially better recognition performance than depth-LDA, intensity-LDA and their fusion. In the future, facial surface deformation with opened mouths will be further studied to improve the recognition accuracy.

References

Besl, P.J., Mckay, N.D., 1992. A method for registration of 3-D shapes. IEEE Trans. PAMI 14 (2).
Belhumeur, P.N., Hespanha, J.P., Kriegman, D.J., 1997. Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection. IEEE Trans. PAMI 19 (7).
Bowyer, K., Chang, K., Flynn, P., 2006. A survey of approaches and challenges in 3D and multi-modal 3D+2D face recognition. Comput. Vis. Image Understand. 101 (1).
Bronstein, A.M., Bronstein, M.M., Kimmel, R., 2003. Expression-invariant 3D face recognition. In: Proc. Audio- and Video-Based Person Authentication (AVBPA '03), LNCS 2688.
Bronstein, A.M., Bronstein, M.M., Kimmel, R., 2007. Expression-invariant representations of faces. IEEE Trans. Image Process. 16 (1).
Chang, K.I., Bowyer, K.W., Flynn, P.J., 2005a. An evaluation of multi-modal 2D+3D biometrics. IEEE Trans. PAMI 27 (4).
Chang, K.I., Bowyer, K.W., Flynn, P.J., 2005b. ARMS: Adaptive rigid multi-region selection for handling expression variation in 3D face recognition. In: Proc. FRGC Workshop.
Chang, K.I., Bowyer, K.W., Flynn, P.J., 2006. Multiple nose region matching for 3D face recognition under varying facial expression. IEEE Trans.
AMI 28 (10), Kakadiaris, I.A., assalis, G., Toderici, G., Murtuza, N., Lu, Y., Karampatziakis, N., Theoharis, T., D face reconition in the presence of facial expressions: An annotated deformable model approach. IEEE Trans. AMI 29 (4), Lu, X., Jain, A.K., Deformation modelin for robust 3D face matchin. In: roc. CVR 06, pp Mpiperis, I., Malassiotis, S., Strintzis, M.G., D face reconition with the eodesic polar representation. IEEE Trans. Inform. Forensics Security 2 (3), assalis, G., Kakadiaris, I.A., Theoharis, T., Toderici, G., Murtuza, N., Evaluation of 3D face reconition in the presence of facial expressions: An annotated deformable model approach. In: roc. FRGC Workshop, pp ears, N., Heseltine, T., Isoradius contours: New representations and techniques for 3D face reistration and matchin. In: roc. 3rd International Symposium on 3D Data rocessin, Visualization, and Transmission (3DVT 06). hillips,.j., Scrus, W.T., O Toole, A.J., Flynn,.J., Bowyer, K.W., Schott, C.L., Sharpe, M., Report of FRVT 2006 and ICE 2006 Lare-Scale Results, Tech. Rep. NISTIR Tsai, Y.R., Chen, L.-T., Osher, S., Zhao, H.K., Fast sweepin alorithms for a class of Hamilton Jacobi equations. SIAM J. Numer. Anal. 41 (2), Tsalakanidou, F., Malassiotis, S., Strintzis, M.G., Face localization and authentication usin color and depth imaes. IEEE Trans. Imae rocess. 14 (2), Wan, Y., Chua, C., Ho, Y., Facial feature detection and face reconition from 2D and 3D imaes. attern Reconition Lett. 23, Xu, C., Wan, Y., Tan, T., Quan, L., D face reconition based on G-H shape variation. LNCS Spriner, pp Xu, C., Tan, T., Li, S., Wan, Y., Zhon, C., 2006a. Learnin effective intrinsic features to boost 3D-based face reconition. ECCV 06, pp Xu, C., Wan, Y., Tan, T., Quan, L., 2006b. A robust method for detectin nose on 3D point cloud. attern Reconition Lett. 27 (13), Yambor, W., Draper, B., Beveride, R., Analyzin CA-based face reconition alorithms: Eienvector selection and distance measures. In: roc. 
Second Workshop Empirical Evaluation in Computer Vision. Zhao, W., Chellappa, R., hillips,.j., Rosenfeld, A., Face reconition: A literature survey. ACM Comput. Surveys (CSUR) Archive 35 (4),
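The isometry assumption discussed above can be checked numerically: a pure bending preserves distances measured along the surface (geodesic distances), while straight-line Euclidean distances change. The following is a minimal illustrative sketch using a polyline curve bent onto a circular arc; it is an assumption-laden toy example, not the paper's implementation.

```python
import numpy as np

def arc_length(points):
    """Polyline approximation of geodesic distance along a sampled curve."""
    segs = np.diff(points, axis=0)
    return float(np.sqrt((segs ** 2).sum(axis=1)).sum())

t = np.linspace(0.0, np.pi, 2000)

# A flat profile curve of length pi.
flat = np.stack([t, np.zeros_like(t)], axis=1)

# The same curve bent isometrically onto a unit half-circle; the angle
# parameterization has unit speed, so arc length is preserved.
bent = np.stack([np.cos(t), np.sin(t)], axis=1)

geo_flat, geo_bent = arc_length(flat), arc_length(bent)
euc_flat = np.linalg.norm(flat[-1] - flat[0])  # endpoint distance ~3.1416
euc_bent = np.linalg.norm(bent[-1] - bent[0])  # endpoint distance = 2.0

print(abs(geo_flat - geo_bent) < 1e-4)  # geodesic distance preserved
print(abs(euc_flat - euc_bent) > 1.0)   # Euclidean distance is not
```

An open mouth, by contrast, tears the surface: points on opposite lips that were geodesic neighbors become far apart along the surface, so no bending-style invariance holds, which matches the lower accuracy reported on the open-mouth probe sets.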