Beyond parametric score normalisation in biometric verification systems
Published in IET Biometrics. Received on 6th October 2013. Revised on 20th February 2014. Accepted on 4th March 2014. Special Issue: Integration of Biometrics and Forensics.

Vitomir Štruc 1, Jerneja Žganec-Gros 2, Boštjan Vesnicer 2, Nikola Pavešić 1
1 Faculty of Electrical Engineering, University of Ljubljana, Tržaška 25, SI-1000 Ljubljana, Slovenia
2 Alpineon d.o.o., Ulica Iga Grudna 15, SI-1000 Ljubljana, Slovenia
E-mail: vitomir.struc@fe.uni-lj.si

Abstract: Similarity scores represent the basis for identity inference in biometric verification systems. However, because of the so-called mismatched conditions across enrollment and probe samples and identity-dependent factors, these scores typically exhibit statistical variations that affect the verification performance of biometric systems. To mitigate these variations, score-normalisation techniques, such as the z-norm, the t-norm or the zt-norm, are commonly adopted. In this study, the authors study the problem of score normalisation in the scope of biometric verification and introduce a new class of non-parametric normalisation techniques, which make no assumptions regarding the shape of the distribution from which the scores are drawn (as the parametric techniques do). Instead, they estimate the shape of the score distribution and use the estimate to map the initial distribution to a common (predefined) distribution. Based on the new class of normalisation techniques, they also develop a hybrid normalisation scheme that combines non-parametric and parametric techniques into two-step procedures.
They evaluate the performance of the non-parametric and hybrid techniques in face-verification experiments on the FRGCv2 and SCFace databases and show that the non-parametric techniques outperform their parametric counterparts and that the hybrid procedure is not only feasible, but also retains some desirable characteristics of both the non-parametric and the parametric techniques.

1 Introduction

Biometric verification systems typically rely on a matching module to measure the similarity (or dissimilarity) between the feature representation extracted from the live biometric sample (the probe sample) and the template of a given identity stored in the system's database. Here, the template can be a feature representation, a classifier with a corresponding discriminant function or some other type of mathematical model that can be probed for its similarity with the input biometric sample. The output of the matching module is a matching (similarity/dissimilarity) score, which is compared with a decision threshold to make a decision regarding the identity of the live biometric sample. However, because of changes in the conditions across different enrollment and probe samples, similarity scores typically exhibit statistical variations [1], which negatively affect the recognition performance and render approaches relying on a single, global decision threshold suboptimal. These statistical variations are, in addition to being dependent on varying external conditions, often also identity dependent, which is known in the literature as the Doddington Zoo effect [2]. To mitigate the problem of score variation, the computed matching score is commonly subjected to a score-normalisation technique, which in a sense calibrates the score and makes it comparable to a global decision threshold.
In the context of biometric verification, (parametric) normalisation techniques, which assume that the matching score is drawn from a Gaussian-shaped distribution and that adjusting this distribution to zero mean and unit variance successfully alleviates the score-variation problem, have emerged as the most popular. Since different strategies can be adopted to produce the required estimates of the first and second statistical moments of the Gaussian distribution, different normalisation techniques have been proposed by different researchers (e.g. [3-6]). In this paper, we build on our work from [7, 8] and introduce a new class of score-normalisation techniques, which make no assumption regarding the shape of the distribution from which the matching scores are drawn. We present a simple procedure based on the rank transform for mapping the initial score distribution to a common (predefined) one. Although the basic idea of adjusting the score distribution to a common one is still the same as in the case of parametric techniques, this class of so-called non-parametric score-normalisation techniques relaxes the assumptions pertaining to the shape of the score distribution and is, therefore, expected to have an advantage when it comes to the verification performance after normalisation. Note that a parametric technique exploiting a similar idea was already proposed in [9], where it was shown explicitly that relaxing the Gaussian assumption is indeed beneficial for the recognition performance. Although the assumption-free nature of the non-parametric normalisation techniques is expected to positively affect the verification performance of a given biometric system, it
comes at a price. As will be elaborated in the remainder of the paper, non-parametric techniques require a much higher number of score samples than their parametric counterparts for the normalisation procedure, which results in a significantly higher computational complexity. The reason for this is the fact that non-parametric normalisation techniques operate on the entire score distribution, which needs to be estimated reliably, whereas the parametric techniques rely only on the estimation of two parameters. These issues may limit the deployment of non-parametric normalisation techniques in verification systems in which the operational speed is crucial. We will show that for certain types of score-normalisation techniques it is possible to retain the performance gains due to the non-parametric normalisation, while preserving the (run-time) computational complexity of the parametric normalisation techniques. This, however, applies only to score-normalisation techniques that combine two normalisation techniques into a single two-step procedure. An example of such a two-step normalisation technique can be found in the popular zt-norm, which combines the so-called z- and t-norms [Details on the t-, z- and zt-norms are given in the remainder.] [3, 10, 11]. The zt-norm is considered to be amongst the most effective score-normalisation techniques and is regularly used in the fields of speaker and signature verification [3, 4]. Instead of performing the zt-norm in either a pure parametric or a pure non-parametric manner, in this paper, we propose to use a hybrid technique, where the z-norm step, which can be conducted off-line, is performed non-parametrically, and the t-norm step is performed parametrically.
Our experimental assessments, conducted on the Face Recognition Grand Challenge (FRGC) and SCFace databases, show that the non-parametric score-normalisation techniques indeed ensure a better verification performance than their parametric equivalents and that the proposed hybrid normalisation scheme still outperforms its parametric counterpart, while exhibiting the same computational complexity. The rest of the paper is structured as follows. In Section 2, we first describe the theoretical background required for understanding the problem of score normalisation and review the existing work in this field. In Section 3, we introduce the new class of non-parametric and hybrid score-normalisation techniques and elaborate on their characteristics in Section 4. We present experiments on the FRGCv2 and SCFace databases in Section 5 and conclude the paper with some final comments and directions for future work in Section 6.

2 Theoretical background

2.1 Problem formulation

Score normalisation represents a problem that arises in the context of biometric verification [Score normalisation is important in other areas (e.g. classifier fusion [12, 13]) as well, but in this paper, we focus on the problem of biometric verification.]. Before approaching score normalisation, it is, therefore, necessary to define the problem of biometric verification in a more formal manner. Let us assume that there are N identities enrolled in the given biometric system and that these identities are labelled with C_1, C_2, ..., C_N. Furthermore, let us assume that we are given an input feature vector x and a claimed identity C_i from the pool of the N enrolled identities. The goal of biometric verification is to assign the pair (C_i, x) to class w_1 (a genuine/client claim) if the claim of identity is accepted, and to class w_2 (a false/impostor claim) otherwise.
Typically, the acceptance of the claim is determined based on the return value [The matching or similarity score] of the scoring function s = \delta_i(x) associated with the identity C_i [14, 15], that is

(C_i, x) \in w_1 if \delta_i(x) \le \Delta, for i \in {1, 2, ..., N}, and (C_i, x) \in w_2 otherwise   (1)

where \Delta stands for the so-called decision threshold [Here, we assume that the scoring function \delta_i(.) returns small values for large similarities and large values for small similarities.]. There are usually at least two possibilities for how to compute the matching score for a given probe vector x and a claimed identity C_i, which depend on the way the identities are modelled in the biometric system. If the identities are modelled using a classifier, the matching score is produced based on a discriminant function associated with the classifier, where the return value of the discriminant function serves as an indicator of class membership. When the identities are not represented with a classifier, but with some other type of template, the matching score is typically generated by measuring the similarity of the probe feature vector and the enrolled template using some similarity measure. In the remainder of the paper, we will refer to any function returning a matching score, be it a discriminant function or a similarity measure, as a scoring function. From a generative point of view, verification can be seen as a process with two classes, w_1 and w_2, that generate observations s according to the probability density functions (PDFs) p(s|w_1) and p(s|w_2) [16]. Based on this formulation and a specific value for the decision threshold \Delta, it is possible to define the total Bayes error of the verification process as

ER = \int_{-\infty}^{\Delta} p(s|w_2) p(w_2) ds + \int_{\Delta}^{+\infty} p(s|w_1) p(w_1) ds   (2)

where p(w_2) and p(w_1) represent the prior probabilities of the two classes, w_2 and w_1, respectively.
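The decision rule in (1) can be sketched in a few lines of Python. This is a minimal illustration assuming a dissimilarity-based scoring function (per the footnote above), with a plain Euclidean distance standing in for \delta_i(.); the names (verify, template, probe) and values are ours, not the paper's.

```python
import numpy as np

# Minimal sketch of the decision rule in (1), assuming a dissimilarity-based
# scoring function (small score = high similarity). Names are illustrative.

def verify(score: float, threshold: float) -> str:
    """Assign the claim to w1 (genuine) if the dissimilarity score
    does not exceed the decision threshold, else to w2 (impostor)."""
    return "w1" if score <= threshold else "w2"

# Example scoring function: Euclidean distance between the probe feature
# vector x and the enrolled template of the claimed identity C_i.
template = np.array([0.20, 0.40, 0.10])
probe = np.array([0.25, 0.38, 0.12])
score = float(np.linalg.norm(probe - template))  # ~0.057
decision = verify(score, threshold=0.5)          # small distance: "w1"
```

A single, global threshold like the 0.5 used here is exactly what score normalisation tries to make meaningful across identities.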
To illustrate the importance of score normalisation, let us have a look at the simple toy example presented in Fig. 1. Here, the classes w_1 and w_2 rely on two base classifiers (identities) C_1 and C_2 to generate the scores s [5, 17]. The green plot represents the client-score distribution p(s|w_1), the black plot represents the impostor-score distribution p(s|w_2), the dotted red and blue plots represent the identity-specific client-score distributions p(s|w_1, C_1) and p(s|w_1, C_2), and the dotted magenta and cyan plots represent the identity-specific impostor-score distributions p(s|w_2, C_1) and p(s|w_2, C_2), respectively. In this example, the identity-specific class-conditional distributions form the basis for computing the impostor- and client-score distributions (i.e. the green plot represents a weighted sum of the red and blue plots and the black plot represents a weighted sum of the cyan and magenta plots). Assume for the moment that the criterion for determining the decision threshold was an equal contribution of both terms of (2) to the Bayes error. As we can see from Fig. 1, this criterion results in the threshold value of \Delta if only the class-conditional distributions p(s|w_1) and p(s|w_2) are
considered.

Fig. 1 Illustration of the role of score normalisation. The plots in the upper figure show the impostor- and client-score distributions, while the plots in the lower figure show identity-specific impostor- and client-score distributions. The distributions in the upper figure were created based on the distributions in the lower figure (best viewed in colour). For all plots equal priors are assumed. It should be noted that the decision thresholds for the EERs are set in such a way that the cumulative density functions of the presented densities take the same value, for example, A_1 = A_2.

However, if the identity-specific information is considered as well, the decision threshold for the identities C_1 and C_2 has to be set adaptively, at \Delta' and \Delta'' respectively, to ensure the optimal operation of the system with respect to the selected criterion (assuming equal priors for the two classes) [18-20]. Another possibility is to apply score-normalisation techniques to the computed scores s with the goal of aligning the identity-specific class-conditional [Here, the term class refers to either the impostor or the client class.] score distributions p(s|w_1, C_i) and/or p(s|w_2, C_i), for i \in {1, 2, ..., N}, and making the scores comparable to a single global decision threshold [3, 5, 10]. Note that the presented example applies only to target-centric score-normalisation techniques (such as the z-norm). However, a similar example could also be presented for the probe-centric techniques (such as the t-norm).

2.2 Score-normalisation background and related work

Score-normalisation techniques can be divided into two groups based on the nature of the scores that are used for the normalisation.
If the scores used by the normalisation technique are generated based on client-verification attempts, then the technique is said to be client-centric and, similarly, if the scores used by the normalisation technique are generated based on impostor-verification attempts, then the normalisation technique is said to be impostor-centric. As the data for producing client scores are usually not available (or extremely scarce), most of the existing techniques fall into the latter group, even though attempts have been made to develop client-centric score-normalisation techniques as well [20]. Among the group of impostor-centric score-normalisation techniques, the zero-normalisation or z-norm, the test-normalisation or t-norm [3] and combinations of the two (the zt-norm) are the most popular. Promising efforts on incorporating client scores into impostor-centric normalisation techniques (i.e. efforts towards client-impostor-centric methods) have also been proposed in the literature in the form of the F-norm [11], the EER-norm [4], the MS-LLR-norm [5] and other recently proposed techniques, e.g. [21, 22]. Formally, score-normalisation techniques try to define a mapping c [5]

c: s \mapsto s'   (3)

in such a way that the resulting normalised scores s' are well calibrated and, thus, comparable to a single global decision threshold. In the above equation, s denotes a raw score representing the output of the matching module of a biometric verification system and s' stands for the normalised version of the score. The total Bayes error of the verification process after normalisation can then be written as

ER' = \int_{-\infty}^{\Delta_c} p(s'|w_2) p(w_2) ds' + \int_{\Delta_c}^{+\infty} p(s'|w_1) p(w_1) ds'   (4)

where \Delta_c again defines a decision threshold determined based on the selected performance criterion. Obviously, the goal of score normalisation is to ensure that ER' < ER. Impostor-centric score-normalisation techniques typically define the mapping c based on the impostor-score distribution p(s|w_2) [7, 8].
The most popular techniques from this group are presented in the next section.

2.3 Parametric solutions

Parametric score-normalisation techniques commonly assume a certain shape for the class-conditional score distributions and then apply normalisation steps in accordance with the
assumptions made. The most popular impostor-centric score-normalisation techniques, such as the z- or t-norm, for example, assume that the impostor-score distribution p(s|w_2) (estimated based on the model of the claimed identity or the given probe vector) takes a Gaussian form, that is

p(s|w_2) = N(s; \mu, \sigma)   (5)

where \mu and \sigma denote the mean and standard deviation of the impostor-score distribution. Score normalisation is then conducted by shifting and scaling the assumed Gaussian distribution to zero mean and unit variance

c(s) = s' = (s - \mu) / \sigma   (6)

As we can see, the goal of these techniques is to normalise the impostor-score distributions for all the identities (in the case of the z-norm) or for each probe vector x (in the case of the t-norm) to N(s'; 0, 1) and to ensure that the scores from all the verification attempts are drawn from the same distribution. This in turn suggests that the scores are well calibrated and can be compared with a single global threshold. The z- and t-norm both use (6) to normalise the impostor-score distributions to N(s'; 0, 1), with the difference being that the z-norm generates the required scores by subjecting a number of probe samples (or z-impostors) to the scoring function \delta_i(.) associated with the target/claimed identity (i.e. C_i), while the t-norm generates its scores by subjecting the given probe sample to a number of scoring functions \delta_j(.) (or t-models) corresponding to some background identities C_1t, C_2t, ..., C_jt, ..., C_mt, where m denotes the total number of said identities (t-models). Based on the generated score sets, each type of score normalisation computes its own \mu and \sigma [7, 8].

3 Beyond parametric techniques

Parametric score-normalisation techniques operate by making certain assumptions regarding the shape of the relevant score distributions and by adjusting the parameters of the assumed distributions to normalise the scores.
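This parametric procedure can be sketched as follows. The sketch applies the shift-and-scale mapping of (6) to the two different score sets used by the z- and t-norm; the function names and the score values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hedged sketch of the parametric z- and t-norm from (5)-(6). Both apply
# s' = (s - mu) / sigma; they differ only in which impostor scores supply
# mu and sigma.

def param_norm(s, impostor_scores):
    """Shift and scale s using the mean/std of a set of impostor scores."""
    mu = np.mean(impostor_scores)
    sigma = np.std(impostor_scores)
    return (s - mu) / sigma

# z-norm: impostor scores come from scoring z-impostor probes against the
# scoring function delta_i of the claimed identity C_i (pre-computable).
z_impostor_scores = [0.9, 1.1, 1.0, 0.8, 1.2]
s_z = param_norm(0.4, z_impostor_scores)

# t-norm: impostor scores come from scoring the current probe against a
# set of background t-models delta_1t..delta_mt (computed on-line).
t_model_scores = [1.0, 0.95, 1.05, 1.1, 0.9]
s_t = param_norm(0.4, t_model_scores)
```

In both cases the raw score 0.4 lies well below the impostor bulk, so the normalised score is strongly negative, i.e. far on the genuine side of a zero-centred threshold under the dissimilarity convention used throughout the paper.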
In this section, we introduce a new class of non-parametric normalisation techniques that make no such assumptions and show how they can be combined with parametric techniques into hybrid normalisation approaches.

3.1 Non-parametric score normalisation

Conducting score normalisation in a non-parametric manner pursues the same goal as conducting the normalisation procedure in a parametric way, namely, making the normalised scores comparable to a single, global decision threshold. However, unlike parametric techniques, the non-parametric approaches do not make any assumptions regarding the shape of the impostor-score distribution p(s|w_2). To be able to relax the Gaussian assumption, the non-parametric normalisation techniques need to estimate the PDF of the impostor-score distribution p(s|w_2), which can be achieved by using kernel density estimation (KDE) [23]. Once the PDF is estimated, the impostor-score distribution can be normalised by mapping it to a predefined shape. This procedure forms the basis for the work presented in [7, 8], but requires an estimate of the open hyper-parameters of the KDE and is computationally relatively intense. In this paper, we introduce a slightly different approach, which follows the same basic steps, but relies on a rank transform, has no open hyper-parameters and is computationally simpler. Non-parametric score normalisation can be formally described as follows: let r be a random variable with the property [8, 24]

r = F(s) = \int_{q=-\infty}^{s} p_s(q) dq   (7)

where q is a dummy variable for the integration. Furthermore, let s' be another random variable with the property

r = G(s') = \int_{x=-\infty}^{s'} p_{s'}(x) dx   (8)

where x again denotes a dummy variable for the integration.
If we assume that the PDF p_s(q) represents our impostor-score distribution p(s|w_2) and that p_{s'}(x) represents the PDF of a predefined target distribution, then the class of non-parametric score-normalisation techniques can be described as follows

c(s) = G^{-1}(r) = G^{-1}(F(s)) = s'   (9)

where G(.) and F(.) denote the cumulative density functions (CDFs) of their corresponding arguments and G^{-1}(.) represents the inverse of the CDF [7]. Using the above expressions, it is possible to implement most of the existing parametric score-normalisation techniques in a non-parametric fashion. This includes the popular z- and t-norms [7, 8]. In practice, it is not necessary to estimate the PDF in (7) explicitly using the KDE. Instead, the left-hand side of the equation can be computed directly using the so-called rank transform [25, 26]. With the rank transform, the score samples that would otherwise serve to estimate the PDF in (7) are first put in ascending order. Each score in the ordered sequence is then assigned an index (or rank) that represents the position of the score in the ordered sequence [In other words, each score is assigned a number (i.e. the rank) that counts the number of scores smaller than the currently observed score in the available set of scores.]. Once the rank R of each score is determined, the left-hand side of (7) can be computed as

r = F(s) = R / n   (10)

where n denotes the number of elements in the set of scores and the rank R corresponds to the input score s. Note again that with the presented procedure we use the available score samples to estimate a CDF [i.e. F(s)] rather than a PDF. From this point of view, the proposed technique could also be interpreted as being related to the copula family of algorithms from the field of multivariate statistics [27]. To illustrate the idea of non-parametric score normalisation, an example of an impostor-score distribution was generated through preliminary experimentation on the FRGCv2 database.
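A minimal sketch of the rank-transform mapping in (9) and (10): the empirical CDF F is obtained from ranks and the score is then pushed through the inverse target CDF G^{-1}. We assume a standard-normal target distribution here purely for illustration (the paper leaves the target distribution open), and use the Python standard library's NormalDist for G^{-1}; all names are ours.

```python
import numpy as np
from statistics import NormalDist

# Sketch of the rank-transform normalisation in (9)-(10), assuming a
# standard-normal target distribution so that G^{-1} is its inverse CDF.

def rank_normalise(s, impostor_scores):
    """Map a raw score through the empirical impostor CDF F (via ranks)
    and then through the inverse target CDF G^{-1}."""
    ordered = np.sort(np.asarray(impostor_scores))
    n = len(ordered)
    # Rank R = number of impostor scores smaller than s.
    R = int(np.searchsorted(ordered, s, side="left"))
    # r = F(s) ~ R/n, kept away from 0 and 1 so G^{-1} stays finite.
    r = min(max(R / n, 1.0 / (n + 1)), n / (n + 1))
    return NormalDist().inv_cdf(r)

rng = np.random.default_rng(0)
impostors = rng.normal(1.0, 0.2, size=1001)   # synthetic impostor scores
low = rank_normalise(0.6, impostors)          # well below the impostor bulk
mid = rank_normalise(1.0, impostors)          # near the impostor median
```

Because the mapping only uses ranks, any monotone distortion of the raw scores leaves the normalised output unchanged, which is what frees the method from the Gaussian assumption.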
The distribution is shown in Fig. 2 (left). It is clear that the non-parametric z-norm (denoted as the nz-norm) maps the entire score distribution to a predefined shape, while the parametric z-norm only shifts and scales the score distribution, but leaves its shape unchanged.
Fig. 2 Illustration of the idea of non-parametric score normalisation. The original score distribution (left) is only scaled and shifted with the parametric normalisation technique (middle). With the non-parametric technique, the entire distribution is mapped to a predefined shape, in this case a log-normal shape (right).

Let us emphasise again that the proposed non-parametric score-normalisation techniques allow us to conduct score normalisation without any assumptions regarding the shape of the impostor-score distribution. However, there are still a number of issues that need to be kept in mind:

Computational complexity: non-parametric score-normalisation techniques require significantly more data for the normalisation procedure than their parametric counterparts and as such may result in an increased computational complexity of the verification system during run-time, when the score distribution needs to be computed. This is especially true for normalisation techniques involving scores generated with the probe vector (such as the t-norm), as these cannot be pre-computed off-line.

Target distribution: parametric score-normalisation techniques only shift and scale the score distributions in accordance with the assumptions they make; non-parametric techniques, on the other hand, require a target distribution for the mapping in (9). This target distribution is an open parameter of the non-parametric techniques that can affect their performance and needs to be selected empirically.

In the next section, we introduce hybrid score-normalisation techniques that try to mitigate the problem of computational complexity by combining non-parametric and parametric score-normalisation techniques into hybrid procedures. The target distribution, however, still remains an open parameter.

3.2 Hybrid score normalisation

We have pointed out before that popular score-normalisation techniques such as the z- and the t-norm rely on the same (Gaussian) assumption when normalising the scores, but differ in the way that the scores (which are used for estimating the parameters of the normalisation techniques) are created. Owing to this difference, score-normalisation techniques that operate on different scores are often combined into two-step normalisation techniques, where one technique is applied after the other. An example of such a two-step normalisation technique can be found in the zt-norm [3], where the t-norm is applied on z-norm normalised scores. However, there are also other examples, for example, [28]. Formally, two-step normalisation techniques can be defined based on the following mapping

s'' = c(s'), where s' = f(s)   (11)

where both c and f denote score-normalisation techniques, as defined in (3), s' denotes the normalised version of the input score s after the first normalisation step, and s'' stands for the final normalised score. Although parametric techniques are often combined into two-step procedures, the authors of [7] suggested combining non-parametric techniques. As we emphasised in the previous section, such an approach is computationally demanding and is, therefore, often not suitable for deployment in biometric verification systems. To overcome this shortcoming, parametric and non-parametric normalisation techniques can be combined into a hybrid (two-step) normalisation technique.
In the case of the zt-norm, this means that the first step of the normalisation technique, which relates to the score samples created based on the scoring function associated with the target/claimed identity, is conducted in a non-parametric manner, whereas the second step of the normalisation technique, which relates to the score population created with the probe biometric sample, is conducted in a parametric manner. Since the rank transform needed for the non-parametric z-norm step can be conducted off-line and the mapping in (9) can be implemented in the form of a look-up table, the first step of the hybrid normalisation does not induce any computational burden during run-time. The second step of the hybrid technique is identical to the parametric version of the zt-norm. Note that the proposed hybrid approach can be applied to any two-step normalisation technique where the first step involves scores computed with scoring functions associated with the target/claimed identities, and is not limited to the zt-norm studied in this paper.

4 Score normalisation revisited

To provide a better insight into the characteristics of the different score-normalisation techniques and emphasise the differences between them, conceptual diagrams of all the normalisation techniques are presented in Fig. 3. Here, no-norm represents the matching process without score
Fig. 3 Conceptual diagrams of the different score-normalisation techniques: (a) no-norm, (b) z-norm, (c) t-norm, (d) zt-norm, (e) nz-norm, (f) nt-norm, (g) nzt-norm, (h) hzt-norm. Here, C_i denotes the claimed identity, x_t denotes the probe feature vector, \delta_i stands for the scoring function associated with the ith identity, \delta_t stands for the scoring function of the given t-model, n represents the number of z-impostors used to create the sample scores for the z-norm variants, m represents the number of t-models used to create the sample scores for the t-norm variants and the abbreviation LUT stands for look-up table.

normalisation, the z-norm (Fig. 3b), t-norm (Fig. 3c) and zt-norm (Fig. 3d) represent the parametric techniques, the nz-norm (Fig. 3e), nt-norm (Fig. 3f) and nzt-norm (Fig. 3g) stand for the non-parametric versions of the techniques shown in Figs. 3b-d, and the hzt-norm (Fig. 3h) stands for the hybrid zt-norm. The same notation is also used in the remainder of this paper. Note that the score-normalisation techniques which operate on score distributions created by comparing the target template/model with a number of probe vectors (i.e. variants of the z-norm) [These techniques are often referred to as target-centric normalisation techniques.] can typically pre-compute the parameters required for the normalisation and, therefore, do not result in an additional computational burden. Normalisation techniques that operate on score distributions created by comparing the probe sample to a number of background identities (i.e. variants of the t-norm) [These techniques are often referred to as probe-centric normalisation techniques.], on the other hand, need to compute the score samples on-line and, therefore, require an additional computational effort. A summary of the storage requirements and the computational steps that need to be conducted on-line for the different normalisation techniques is presented in Table 1.
Table 1 Comparison of the different score-normalisation techniques

Normalisation technique | Steps needed | Need to store for each client | Need to compute on-line for each probe
z-norm | z-norm | two parameters: μ, σ | n/a
t-norm | t-norm | n/a | two parameters: μ, σ
nz-norm | nz-norm | look-up table | n/a
nt-norm | nt-norm | n/a | look-up table
zt-norm | z-norm, t-norm | two parameters: μ, σ | two parameters: μ, σ
nzt-norm | nz-norm, nt-norm | look-up table | look-up table
hzt-norm | nz-norm, t-norm | look-up table | two parameters: μ, σ

The table shows the steps needed for the given technique (not necessarily in the correct order) and the storage requirements per client/probe sample. Here, n/a stands for not applicable, since there is nothing to compute or store.
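To make the hzt-norm row of Table 1 concrete, the following sketch pre-computes the non-parametric z-step as a look-up table (LUT) off-line and applies the parametric t-step on-line. The LUT resolution, the standard-normal target distribution and all names are our assumptions, not the paper's implementation details.

```python
import numpy as np
from statistics import NormalDist

# Illustrative sketch of the hybrid zt-norm (hzt-norm): an off-line,
# non-parametric z-step stored as a LUT, followed by an on-line,
# parametric t-step.

def build_z_lut(z_impostor_scores, n_bins=64):
    """Off-line: tabulate the rank-based mapping G^{-1}(F(s)) on a grid
    so the run-time z-step reduces to a table look-up/interpolation."""
    ordered = np.sort(np.asarray(z_impostor_scores))
    n = len(ordered)
    grid = np.linspace(ordered[0], ordered[-1], n_bins)
    ranks = np.searchsorted(ordered, grid, side="left")
    r = np.clip(ranks / n, 1.0 / (n + 1), n / (n + 1))
    target = NormalDist()  # assumed standard-normal target distribution
    return grid, np.array([target.inv_cdf(v) for v in r])

def hzt_norm(s, lut, t_scores):
    """Run-time: non-parametric z-step via the LUT, then parametric
    t-step using the mean/std of the t-model scores (which in a full
    zt-norm pipeline would themselves be z-normalised first)."""
    grid, values = lut
    s_z = float(np.interp(s, grid, values))
    return (s_z - np.mean(t_scores)) / np.std(t_scores)

rng = np.random.default_rng(1)
lut = build_z_lut(rng.normal(1.0, 0.2, 500))   # pre-computed per identity
t_scores = rng.normal(0.0, 1.0, 50)            # t-model scores for a probe
s_low = hzt_norm(0.7, lut, t_scores)
s_high = hzt_norm(1.3, lut, t_scores)
```

Only the LUT (here, 64 grid/value pairs) needs to be stored per client, while the on-line cost matches the parametric t-norm, which is the point of the hybrid scheme.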
5 Experiments

5.1 Experimental database and setup

For the experiments presented in the remainder of this paper, we make use of two challenging databases, that is, the second version of the Face Recognition Grand Challenge database (FRGCv2) [29] and the SCFace database [30]. The FRGCv2 database features a large number of facial images (of 466 distinct subjects) captured in diverse conditions (e.g. outdoors, indoors, under artificial lighting etc.) and exhibits characteristics that are known to affect the performance of existing face-recognition technology [31]. We select the most challenging experimental configuration defined for the FRGCv2 database for our experiments, that is, FRGCv2 Experiment 4. Here, separate image sets are defined for the training and gallery/target sets, and 8014 images are available for the probe/query set. The SCFace database contains 2080 images of 130 subjects (16 images per subject) taken in uncontrolled indoor environments using five video-surveillance cameras. The images are of varying quality and resolution, except for one image per subject, which represents a mug-shot image captured in controlled conditions. For the SCFace database, we define a similar experimental protocol to that of the FRGCv2 database. Thus, we divide the data into disjoint sets used for training and testing. With the defined protocol, 480 images of 30 subjects are used for training, whereas the rest are randomly partitioned into gallery/target and probe/query sets. The experimental protocols for both databases are summarised in Table 2.
Prior to the evaluation, we subject all of the images from both databases to a preprocessing procedure that geometrically normalises the facial images in accordance with manually marked eye-centre coordinates, crops the facial regions to a predefined size and finally applies histogram equalisation to the cropped images to improve the contrast and (partially) compensate for the lighting conditions present during the image acquisition. Some examples of the preprocessed facial images from both databases are shown in Fig. 4. For the experiments on the FRGCv2 database, we implement a variant of principal component analysis (PCA) [32] and use it in conjunction with the whitened-cosine-based (or cosine Mahalanobis) similarity scoring procedure. Based on the implemented classification technique, we generate the similarity (or matching-score) matrix that forms the basis for assessing the score-normalisation techniques [Note that considering the entire similarity matrix when producing performance metrics on the FRGCv2 database corresponds to the so-called all vs. all experimental scenario.]. This setting more or less corresponds to a closed-set verification scenario. For the experiments on the SCFace database, we use a more capable classification technique, that is, the technique proposed by Liu [33]. The technique relies on Gabor filters and kernel Fisher analysis for the classification. As with the FRGCv2 database, we again generate a similarity matrix that forms the basis for the score normalisation.

Table 2 Summary of the experimental protocols

Database | Image set | #Images | Image quality
FRGCv2 | training | | controlled and uncontrolled
FRGCv2 | target/gallery | | controlled
FRGCv2 | probe/query | 8014 | uncontrolled
SCFace | training | 480 | controlled and uncontrolled
SCFace | target/gallery | 800 | randomly partitioned
SCFace | probe/query | 800 | randomly partitioned
5.2 Performance measures

We measure the effectiveness of the score-normalisation techniques based on performance metrics derived from the false-acceptance error rate (FAR) and the false-rejection error rate (FRR). These error rates are defined as [34-37]

FAR(Δ) = |{s_imp : s_imp ≤ Δ}| / |{s_imp}|   (12)

and

FRR(Δ) = |{s_cli : s_cli > Δ}| / |{s_cli}|   (13)

where {s_cli} and {s_imp} represent the sets of client and impostor scores generated during the experiments, respectively, |·| returns the cardinality of its argument, Δ denotes the decision threshold, and the inequalities are set in a way that assumes that dissimilarity measures were used to produce the matching scores. The FAR and FRR are typically used as estimates of the two terms of the Bayes error defined in (2).

Fig. 4  Samples from the FRGCv2 (left) and SCFace (right) databases. Note that the images are presented at different stages of the preprocessing chain (the histograms of the SCFace samples have not been equalised yet): a) geometric normalisation, cropping, scaling and histogram equalisation; b) geometric normalisation, cropping and scaling.
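Under the dissimilarity-score convention of (12) and (13), the two error rates can be sketched as follows (a minimal NumPy illustration with made-up scores; the function names are ours):

```python
import numpy as np

def far(impostor_scores, threshold):
    """(12): fraction of impostor dissimilarity scores at or below the threshold."""
    s = np.asarray(impostor_scores, dtype=float)
    return np.mean(s <= threshold)

def frr(client_scores, threshold):
    """(13): fraction of client dissimilarity scores above the threshold."""
    s = np.asarray(client_scores, dtype=float)
    return np.mean(s > threshold)

# Hypothetical dissimilarity scores (lower means more similar)
clients = [0.1, 0.2, 0.25, 0.4]
impostors = [0.5, 0.6, 0.7, 0.9]
print(far(impostors, 0.45), frr(clients, 0.45))  # -> 0.0 0.0
```

With similarity scores instead of dissimilarities, both inequalities would simply flip direction.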
Note that both the FAR and FRR depend on the value of the decision threshold Δ. Thus, selecting a specific value for this threshold results in different performance metrics. For our assessments, we selected three such metrics, that is, the equal error rate (EER), the verification rate at the false-acceptance rate of 0.1% (VER_0.1FAR) and the verification rate at the false-acceptance rate of 1% (VER_1FAR). These performance metrics are computed as follows:

EER = (1/2)(FAR(Δ_eer) + FRR(Δ_eer))   (14)

VER_0.1FAR = 1 − FRR(Δ_ver01)   (15)

VER_1FAR = 1 − FRR(Δ_ver1)   (16)

where the corresponding decision thresholds are defined as

Δ_eer = argmin_Δ |FAR(Δ) − FRR(Δ)|   (17)

Δ_ver01 = argmin_Δ |FAR(Δ) − 0.001|   (18)

Δ_ver1 = argmin_Δ |FAR(Δ) − 0.01|   (19)

In addition to the presented performance metrics, we also compute the half total error rate (HTER) at each operating point. The HTER is defined as

HTER_k = (1/2)(FAR(Δ_k) + FRR(Δ_k))   (20)

where k ∈ {eer, ver01, ver1}. Here, the HTER serves as an estimate of the Bayes error defined in (2) under the assumption of equal priors, that is, p(ω_1) = p(ω_2) = 0.5.

Next to the quantitative performance metrics presented above, we also use performance curves to present the results of our experiments. Specifically, we use receiver operating characteristic (ROC) curves, which plot the verification rate (VER) against the FAR for various values of the decision threshold Δ, and detection error trade-off (DET) curves, which plot the FRR against the FAR for various values of the decision threshold Δ. In contrast to the ROC curves, the DET curves use a non-linear scale for the x- and y-axes and therefore better emphasise the performance differences between the assessed techniques [38].

5.3 Results and discussion

In our first series of verification experiments, we implement three popular (parametric) normalisation techniques and compare their performance to the performance of their non-parametric counterparts.
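The operating points in (14)-(19) can be located with a simple sweep over candidate thresholds. The following is an illustrative sketch with toy scores, not the evaluation code used in the experiments:

```python
import numpy as np

def eer_and_ver(client_scores, impostor_scores, far_target=0.001):
    """Sweep the observed scores as candidate thresholds and pick the
    operating points of (14)-(19); HTER at the VER point follows (20)."""
    cli = np.asarray(client_scores, dtype=float)
    imp = np.asarray(impostor_scores, dtype=float)
    thresholds = np.unique(np.concatenate([cli, imp]))
    fars = np.array([np.mean(imp <= t) for t in thresholds])  # (12)
    frrs = np.array([np.mean(cli > t) for t in thresholds])   # (13)
    i_eer = np.argmin(np.abs(fars - frrs))                    # (17)
    i_ver = np.argmin(np.abs(fars - far_target))              # (18)/(19)
    eer = 0.5 * (fars[i_eer] + frrs[i_eer])                   # (14)
    ver = 1.0 - frrs[i_ver]                                   # (15)/(16)
    hter = 0.5 * (fars[i_ver] + frrs[i_ver])                  # (20)
    return eer, ver, hter

cli = [0.10, 0.15, 0.22, 0.30]   # client (genuine) dissimilarity scores
imp = [0.55, 0.60, 0.72, 0.90]   # impostor dissimilarity scores
eer, ver, hter = eer_and_ver(cli, imp)
print(eer)  # perfectly separated toy scores, so the EER is 0.0
```

On such small toy sets the VER estimate is coarse; with the full similarity matrices the same sweep traces out the ROC and DET curves discussed below.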
Specifically, we implement the t-norm, z-norm and the two-step zt-norm, as well as their non-parametric equivalents, which are denoted as nt-norm, nz-norm and nzt-norm in the remainder of this paper. Note that it would also be possible to conduct a tz-norm procedure (by applying the z-norm to t-normalised scores); however, since this would involve an enormous amount of computation for each test sample at run-time, this option is not feasible in practice and is therefore omitted from our experiments. For this series of experiments, we use only the FRGCv2 database.

Before the non-parametric normalisation techniques can be implemented, a target distribution for (8) needs to be selected. To this end, we assess different types of target distributions when implementing the non-parametric normalisation techniques, that is, uniform, Gaussian and log-normal distributions. The three target distributions are defined as

p_unif(s′) = 1/(b − a)   (21)

where in our case a = 0 and b = n [n denotes the number of scores used for the normalisation, see (10)],

p_gaus(s′) = (1/(σ√(2π))) exp(−(s′ − μ)²/(2σ²))   (22)

where μ and σ represent the mean and the standard deviation of the Gaussian distribution, and

p_logn(s′) = (1/(s′σ√(2π))) exp(−(ln s′ − μ)²/(2σ²))   (23)

where μ and σ represent the mean and the standard deviation of the logarithm of the log-normally distributed variable. For the experiments we select μ = 0 and σ = 1 for the Gaussian target distribution and μ = 0 and σ = 0.5 for the log-normal distribution.

The results from this series of experiments are shown in the form of DET and ROC curves in Fig. 5. Here, the pair of graphs on the upper-left (marked as Fig. 5a) presents a comparison of the baseline performance of the PCA technique (denoted as NO NORM) and the performance of the parametric and non-parametric versions of the z-norm. For the non-parametric z-norms the name in the brackets indicates the target distribution that was used to generate the performance curves. The pair of graphs on the upper-right (marked as Fig.
5b) shows the same comparison for the parametric t-norm and the different versions of the non-parametric nt-norm. The pair of graphs on the lower-left side of Fig. 5 (marked as Fig. 5c) shows the comparison of the baseline PCA technique and the zt- and nzt-norms, while the last pair of graphs on the lower-right (marked as Fig. 5d) depicts a comparison of all the techniques assessed in this series of experiments and their relative ranking; for the non-parametric techniques, only the results for the log-normal target distribution are shown. The same comparison is also presented in Table 3, where the characteristic error rates are tabulated for the assessed techniques.

The first thing to notice from the presented results is that, except for the z-norms, where no significant improvement over the baseline performance was observed, all the other techniques resulted in significant performance gains. In general, the two-step normalisation techniques outperformed the single-step ones, and the non-parametric normalisation techniques consistently outperformed their parametric equivalents along most operating points of the DET and ROC curves when the log-normal distribution was used as the target distribution. When the uniform distribution was used as the target distribution for the non-parametric score-normalisation techniques, the performance of the biometric verification systems increased
Fig. 5  Effect of (parametric and non-parametric) score normalisation on the verification performance. DET (left) and ROC (right) curves generated during Experiment 4 on the FRGCv2 database (best viewed in colour): a) comparison of the no-, z- and nz-norms; b) comparison of the no-, t- and nt-norms; c) comparison of the no-, zt- and nzt-norms; d) comparison of all techniques.

for most operating points over the baseline (i.e. NO NORM), but was worse than that of the parametric equivalents. For the normal target distribution the performance of the non-parametric score-normalisation techniques was very similar to that of the parametric techniques, with very little difference over all the operating points for all the implemented methods.

Note that the improved performance of the non-parametric normalisation techniques (with a log-normal target distribution) comes at the expense of an increased computational complexity. The non-parametric techniques require significantly more data (i.e. a larger score population) to estimate the entire score distribution reliably, whereas the parametric techniques only require enough data to estimate the mean and the variance. This results in the need to store a larger background set of templates in a face-verification system to ensure a sufficient number of scores for the non-parametric normalisation techniques. If storage is not an issue, however, the non-parametric score-normalisation techniques can ensure better verification performance than their parametric counterparts.

When looking at the characteristic error rates in Table 3, similar conclusions to the ones presented above can be made. Of particular interest here are the values of the HTER, which indicate that the Bayes error is indeed reduced with (most of) the score-normalisation techniques. Note that the HTER at the equal-error operating point equals the EER and is therefore not tabulated separately in Table 3.
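The core non-parametric mapping discussed throughout this section — passing each score through the empirical CDF of its population and then through the inverse CDF of the target distribution — can be sketched as follows. This is an illustrative implementation with the log-normal target (μ = 0, σ = 0.5) from (23); the helper names and the synthetic score population are ours:

```python
import numpy as np
from statistics import NormalDist

def nonparametric_norm(scores, target_ppf):
    """Map a score population onto a target distribution: each score goes
    through the empirical CDF of the population, then through the target's
    inverse CDF (percent-point function)."""
    s = np.asarray(scores, dtype=float)
    ranks = np.argsort(np.argsort(s))          # 0 .. n-1, rank of each score
    u = (ranks + 0.5) / len(s)                 # empirical CDF, kept inside (0, 1)
    return np.array([target_ppf(p) for p in u])

def lognormal_ppf(u, mu=0.0, sigma=0.5):
    """Inverse CDF of the log-normal target with mu = 0, sigma = 0.5."""
    return np.exp(mu + sigma * NormalDist().inv_cdf(u))

rng = np.random.default_rng(0)
raw = rng.normal(loc=3.0, scale=2.0, size=5001)   # arbitrary raw score population
mapped = nonparametric_norm(raw, lognormal_ppf)
print(round(float(np.median(mapped)), 3))          # median maps to exp(mu) = 1.0
```

Because the mapping is monotone in the score ranks, it re-shapes the distribution without re-ordering the scores, which is exactly why larger score populations are needed for a reliable estimate.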
We can again see that the non-parametric normalisation techniques with a log-normal target distribution ensure the best verification performance. We therefore select this distribution for all of our subsequent experiments.

Table 3  Quantitative comparison of the score-normalisation techniques on the FRGCv2 database with Experiment 4 (performance metrics: EER, VER_0.1FAR, VER_1FAR, HTER_ver01 and HTER_ver1; techniques: no norm, the parametric z-, t- and zt-norms, and the non-parametric nz-, nt- and nzt-norms). The results for the non-parametric techniques are presented for three different target distributions (uniform, log-normal and normal). The best results for each presented performance metric are presented in bold.

Another interesting issue relevant in the context of non-parametric score normalisation is the effect of the normalisation procedure on the unseen client-score distribution. With parametric score-normalisation techniques this distribution is simply shifted and scaled based on the mean and standard deviation computed from the
impostor-score distribution. However, with the non-parametric techniques the client distribution is re-mapped in accordance with the CDF computed from the impostor-score distribution. To examine this effect more closely, we generate two synthetic score populations and estimate their distributions, as shown in Fig. 6. Here, the light-blue distribution corresponds to the impostor distribution that forms the foundation for the non-parametric normalisation procedure, and the dark-blue distribution corresponds to the client distribution, which is mapped along with the impostor distribution.

Fig. 6  Effect of a non-parametric score-normalisation technique (with a log-normal target distribution) on synthetic client and impostor-score distributions: a) sample histograms before normalisation; b) sample histograms after normalisation.

Note that with a log-normal target distribution the impostor-score distribution takes an approximately log-normal shape after the normalisation, while for the client-score distribution most of the mass becomes concentrated around a specific value and heavy tails appear on both sides of the distribution. It is important to emphasise at this point that, when implementing the non-parametric techniques, we need to ensure that the CDF from (7) (estimated from the impostor-score distribution) is defined over the domain of the client-score distribution as well; otherwise, the part of the client-score distribution over the undefined domain is mapped onto a single point.

So far, we have established the relative ranking of the score-normalisation techniques, examined some of the characteristics of the non-parametric score-normalisation procedures and demonstrated the importance of choosing an appropriate target distribution for the normalisation. The next issue we need to investigate is the feasibility of the hybrid two-step normalisation technique.
In Section 3.2, where we introduced the hybrid normalisation scheme, we proposed carrying out the first normalisation step in a non-parametric manner, based on a pre-computed lookup table, and conducting the second normalisation step parametrically. This procedure exhibits approximately the same computational complexity at run-time as the parametric techniques, but is expected to result in improved recognition performance. To assess the proposed hybrid two-step zt-norm (denoted as hzt-norm in the figures), we apply it to the FRGCv2 similarity matrix and compare the generated results to the remaining two two-step procedures, that is, the zt-norm and the nzt-norm, and the baseline

Fig. 7  Comparison of the two-step techniques. DET (left) and ROC (right) curves generated during Experiment 4 on the FRGCv2 database (best viewed in colour). The target distribution for the non-parametric part of the hzt-norm technique was again chosen to be log-normal. The results for the nzt technique are the same as in Fig. 5, where the technique was labelled nzt-norm (lognormal), and are shown here again for comparison purposes: a) comparison of parametric and non-parametric techniques; b) comparison of the two-step score-normalisation techniques.
11 Fig. 8 Effect of (parametric and non-parametric) score normalisation on the verification performance DET (left) and ROC (right) curves generated during experiments on the SCFace database (best viewed in colour) a Comparison of parametric and non-parametric techniques b Comparison of the two-step score normalization techniques performance of the PCA. The results of the experiments are again shown in the form of DET (on the left) and ROC (on the right) curves in Fig. 7. The hybrid zt-norm technique achieves the VER 0.1FAR of 11.1%, the VER 1FAR of 30.8% and the EER of 18.3%. In comparison to the parametric two-step normalisation technique, the composite procedure produces better verification results along all the operating points of the DET and ROC curves. When compared to the non-parametric two-step normalisation techniques, the proposed hybrid procedure still results in a very competitive performance. At this point, we need to stress that the lower computational cost of the hybrid techniques comes at the expense of storage requirements, as a look-up table needs to stored in the verification system for each registered client. Furthermore, the lower computational load is ensured only for the run-time operation of the system (in verification mode), as the look-up table needs to be computed during the enrolment of a client. To validate the findings made so far, we conduct additional experiments on the SCFace database using a state-of-the-art face-recognition technique (i.e. GaborKFA [33]). We again implement all the normalisation techniques and again use the log-normal target distribution for the non-parametric techniques. We apply all the normalisation approaches to our similarity matrix and generate all the relevant performance metrics and curves. The results of this series of experiments are presented in Fig. 8 and Table 4. As can be seen, the results of this series of experiments are much closer together than in the case of the FRGCv2 database. 
Although almost all of the score-normalisation techniques improve the verification performance over the baseline, the improvement is not as large as on the FRGCv2 database. The reasons for this can likely be found in the characteristics of the database and the better-performing classifier, which leaves less room for improvement through score normalisation. Overall, the non-parametric and hybrid normalisation procedures are again the best-performing techniques considering all the operating points on the ROC and DET curves. The non-parametric techniques (with a log-normal target distribution) again outperform their parametric counterparts, and the HTER is again reduced after score normalisation for most of the assessed normalisation techniques. All in all, it seems that the findings made during the experiments on the FRGCv2 database generalise to other databases as well.

In the last series of experiments, we assess the significance of the obtained results. To this end, we observe the relative change in the EER with respect to the baseline performance without normalisation, as suggested by Sun et al. [39]:

rel. change of EER = (EER_norm − EER_baseline) / EER_baseline   (24)

Here, EER_norm stands for the EER after normalisation with the selected score-normalisation technique and EER_baseline for the EER achieved without score normalisation. A negative value of this metric implies an improvement over the baseline. To establish the significance of the observed results on the FRGCv2 and SCFace databases, we randomly sub-sample the original similarity matrices 20 times and generate box plots for the relative change in the EERs. The results are shown in Fig. 9. Note that, because of the smaller number of experiments on the SCFace database, the dispersion of the results is much larger for this database than for the FRGCv2 database.
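The relative change in (24) is straightforward to compute; the EER values below are hypothetical and serve only to show the sign convention:

```python
def relative_eer_change(eer_norm, eer_baseline):
    """(24): negative values indicate an improvement over the baseline."""
    return (eer_norm - eer_baseline) / eer_baseline

# Hypothetical case: normalisation lowers the EER from 16% to 10%
print(round(relative_eer_change(0.10, 0.16), 3))  # -> -0.375
```

Computing this value on each of the 20 random sub-samples of a similarity matrix yields the populations behind the box plots in Fig. 9.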
In general, all the normalisation techniques improve significantly over the baseline on the

Table 4  Quantitative comparison of the score-normalisation techniques on the SCFace database (performance metrics: EER, VER_0.1FAR, VER_1FAR, HTER_ver01 and HTER_ver1; techniques: no norm, the parametric z-, t- and zt-norms, the non-parametric nz-, nt- and nzt-norms, and the hybrid hzt-norm). The results for the non-parametric techniques are presented for the log-normal target distribution. The best results for each presented performance metric are presented in bold.
More informationClustering. Mihaela van der Schaar. January 27, Department of Engineering Science University of Oxford
Department of Engineering Science University of Oxford January 27, 2017 Many datasets consist of multiple heterogeneous subsets. Cluster analysis: Given an unlabelled data, want algorithms that automatically
More informationRobust Shape Retrieval Using Maximum Likelihood Theory
Robust Shape Retrieval Using Maximum Likelihood Theory Naif Alajlan 1, Paul Fieguth 2, and Mohamed Kamel 1 1 PAMI Lab, E & CE Dept., UW, Waterloo, ON, N2L 3G1, Canada. naif, mkamel@pami.uwaterloo.ca 2
More information1. Introduction. 2. Motivation and Problem Definition. Volume 8 Issue 2, February Susmita Mohapatra
Pattern Recall Analysis of the Hopfield Neural Network with a Genetic Algorithm Susmita Mohapatra Department of Computer Science, Utkal University, India Abstract: This paper is focused on the implementation
More informationRandom projection for non-gaussian mixture models
Random projection for non-gaussian mixture models Győző Gidófalvi Department of Computer Science and Engineering University of California, San Diego La Jolla, CA 92037 gyozo@cs.ucsd.edu Abstract Recently,
More informationFace recognition algorithms: performance evaluation
Face recognition algorithms: performance evaluation Project Report Marco Del Coco - Pierluigi Carcagnì Institute of Applied Sciences and Intelligent systems c/o Dhitech scarl Campus Universitario via Monteroni
More informationLearning to Recognize Faces in Realistic Conditions
000 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 034 035 036 037 038 039 040 041 042 043 044 045 046 047 048 049 050
More informationFace Hallucination Based on Eigentransformation Learning
Advanced Science and Technology etters, pp.32-37 http://dx.doi.org/10.14257/astl.2016. Face allucination Based on Eigentransformation earning Guohua Zou School of software, East China University of Technology,
More informationExploring Similarity Measures for Biometric Databases
Exploring Similarity Measures for Biometric Databases Praveer Mansukhani, Venu Govindaraju Center for Unified Biometrics and Sensors (CUBS) University at Buffalo {pdm5, govind}@buffalo.edu Abstract. Currently
More informationReport: Reducing the error rate of a Cat classifier
Report: Reducing the error rate of a Cat classifier Raphael Sznitman 6 August, 2007 Abstract The following report discusses my work at the IDIAP from 06.2007 to 08.2007. This work had for objective to
More informationThe Expected Performance Curve: a New Assessment Measure for Person Authentication
R E S E A R C H R E P O R T I D I A P The Expected Performance Curve: a New Assessment Measure for Person Authentication Samy Bengio 1 Johnny Mariéthoz 2 IDIAP RR 03-84 March 10, 2004 submitted for publication
More informationDynamic skin detection in color images for sign language recognition
Dynamic skin detection in color images for sign language recognition Michal Kawulok Institute of Computer Science, Silesian University of Technology, Akademicka 16, 44-100 Gliwice, Poland michal.kawulok@polsl.pl
More informationResearch Article The Complete Gabor-Fisher Classifier for Robust Face Recognition
Hindawi Publishing Corporation EURASIP Journal on Advances in Signal Processing Volume 2, Article ID 84768, 26 pages doi:.55/2/84768 Research Article The Complete Gabor-Fisher Classifier for Robust Face
More informationColor Local Texture Features Based Face Recognition
Color Local Texture Features Based Face Recognition Priyanka V. Bankar Department of Electronics and Communication Engineering SKN Sinhgad College of Engineering, Korti, Pandharpur, Maharashtra, India
More informationINF 4300 Classification III Anne Solberg The agenda today:
INF 4300 Classification III Anne Solberg 28.10.15 The agenda today: More on estimating classifier accuracy Curse of dimensionality and simple feature selection knn-classification K-means clustering 28.10.15
More informationLab 5 - Risk Analysis, Robustness, and Power
Type equation here.biology 458 Biometry Lab 5 - Risk Analysis, Robustness, and Power I. Risk Analysis The process of statistical hypothesis testing involves estimating the probability of making errors
More informationPre-control and Some Simple Alternatives
Pre-control and Some Simple Alternatives Stefan H. Steiner Dept. of Statistics and Actuarial Sciences University of Waterloo Waterloo, N2L 3G1 Canada Pre-control, also called Stoplight control, is a quality
More informationIllumination invariant face recognition and impostor rejection using different MINACE filter algorithms
Illumination invariant face recognition and impostor rejection using different MINACE filter algorithms Rohit Patnaik and David Casasent Dept. of Electrical and Computer Engineering, Carnegie Mellon University,
More informationCS 231A Computer Vision (Fall 2012) Problem Set 3
CS 231A Computer Vision (Fall 2012) Problem Set 3 Due: Nov. 13 th, 2012 (2:15pm) 1 Probabilistic Recursion for Tracking (20 points) In this problem you will derive a method for tracking a point of interest
More informationAutomatic Fatigue Detection System
Automatic Fatigue Detection System T. Tinoco De Rubira, Stanford University December 11, 2009 1 Introduction Fatigue is the cause of a large number of car accidents in the United States. Studies done by
More informationApplications Video Surveillance (On-line or off-line)
Face Face Recognition: Dimensionality Reduction Biometrics CSE 190-a Lecture 12 CSE190a Fall 06 CSE190a Fall 06 Face Recognition Face is the most common biometric used by humans Applications range from
More informationRobust Face Recognition via Sparse Representation Authors: John Wright, Allen Y. Yang, Arvind Ganesh, S. Shankar Sastry, and Yi Ma
Robust Face Recognition via Sparse Representation Authors: John Wright, Allen Y. Yang, Arvind Ganesh, S. Shankar Sastry, and Yi Ma Presented by Hu Han Jan. 30 2014 For CSE 902 by Prof. Anil K. Jain: Selected
More informationSupplementary material for the paper Are Sparse Representations Really Relevant for Image Classification?
Supplementary material for the paper Are Sparse Representations Really Relevant for Image Classification? Roberto Rigamonti, Matthew A. Brown, Vincent Lepetit CVLab, EPFL Lausanne, Switzerland firstname.lastname@epfl.ch
More informationComparison of supervised self-organizing maps using Euclidian or Mahalanobis distance in classification context
6 th. International Work Conference on Artificial and Natural Neural Networks (IWANN2001), Granada, June 13-15 2001 Comparison of supervised self-organizing maps using Euclidian or Mahalanobis distance
More informationClustering and Visualisation of Data
Clustering and Visualisation of Data Hiroshi Shimodaira January-March 28 Cluster analysis aims to partition a data set into meaningful or useful groups, based on distances between data points. In some
More informationTHE preceding chapters were all devoted to the analysis of images and signals which
Chapter 5 Segmentation of Color, Texture, and Orientation Images THE preceding chapters were all devoted to the analysis of images and signals which take values in IR. It is often necessary, however, to
More informationFacial Expression Classification with Random Filters Feature Extraction
Facial Expression Classification with Random Filters Feature Extraction Mengye Ren Facial Monkey mren@cs.toronto.edu Zhi Hao Luo It s Me lzh@cs.toronto.edu I. ABSTRACT In our work, we attempted to tackle
More informationThe Approach of Mean Shift based Cosine Dissimilarity for Multi-Recording Speaker Clustering
The Approach of Mean Shift based Cosine Dissimilarity for Multi-Recording Speaker Clustering 1 D. Jareena Begum, 2 K Rajendra Prasad, 3 M Suleman Basha 1 M.Tech in SE, RGMCET, Nandyal 2 Assoc Prof, Dept
More informationRegression III: Advanced Methods
Lecture 3: Distributions Regression III: Advanced Methods William G. Jacoby Michigan State University Goals of the lecture Examine data in graphical form Graphs for looking at univariate distributions
More informationFrequency Distributions
Displaying Data Frequency Distributions After collecting data, the first task for a researcher is to organize and summarize the data so that it is possible to get a general overview of the results. Remember,
More informationShort Survey on Static Hand Gesture Recognition
Short Survey on Static Hand Gesture Recognition Huu-Hung Huynh University of Science and Technology The University of Danang, Vietnam Duc-Hoang Vo University of Science and Technology The University of
More informationLOCAL APPEARANCE BASED FACE RECOGNITION USING DISCRETE COSINE TRANSFORM
LOCAL APPEARANCE BASED FACE RECOGNITION USING DISCRETE COSINE TRANSFORM Hazim Kemal Ekenel, Rainer Stiefelhagen Interactive Systems Labs, University of Karlsruhe Am Fasanengarten 5, 76131, Karlsruhe, Germany
More informationThe Expected Performance Curve: a New Assessment Measure for Person Authentication
The Expected Performance Curve: a New Assessment Measure for Person Authentication Samy Bengio Johnny Mariéthoz IDIAP CP 592, rue du Simplon4 192 Martigny, Switzerland {bengio,marietho}@idiap.ch Abstract
More informationBagging for One-Class Learning
Bagging for One-Class Learning David Kamm December 13, 2008 1 Introduction Consider the following outlier detection problem: suppose you are given an unlabeled data set and make the assumptions that one
More informationExplicit fuzzy modeling of shapes and positioning for handwritten Chinese character recognition
2009 0th International Conference on Document Analysis and Recognition Explicit fuzzy modeling of and positioning for handwritten Chinese character recognition Adrien Delaye - Eric Anquetil - Sébastien
More informationOutline. Incorporating Biometric Quality In Multi-Biometrics FUSION. Results. Motivation. Image Quality: The FVC Experience
Incorporating Biometric Quality In Multi-Biometrics FUSION QUALITY Julian Fierrez-Aguilar, Javier Ortega-Garcia Biometrics Research Lab. - ATVS Universidad Autónoma de Madrid, SPAIN Loris Nanni, Raffaele
More informationCOPULA MODELS FOR BIG DATA USING DATA SHUFFLING
COPULA MODELS FOR BIG DATA USING DATA SHUFFLING Krish Muralidhar, Rathindra Sarathy Department of Marketing & Supply Chain Management, Price College of Business, University of Oklahoma, Norman OK 73019
More informationImproving the detection of excessive activation of ciliaris muscle by clustering thermal images
11 th International Conference on Quantitative InfraRed Thermography Improving the detection of excessive activation of ciliaris muscle by clustering thermal images *University of Debrecen, Faculty of
More informationA Hybrid Face Detection System using combination of Appearance-based and Feature-based methods
IJCSNS International Journal of Computer Science and Network Security, VOL.9 No.5, May 2009 181 A Hybrid Face Detection System using combination of Appearance-based and Feature-based methods Zahra Sadri
More informationRecap: Gaussian (or Normal) Distribution. Recap: Minimizing the Expected Loss. Topics of This Lecture. Recap: Maximum Likelihood Approach
Truth Course Outline Machine Learning Lecture 3 Fundamentals (2 weeks) Bayes Decision Theory Probability Density Estimation Probability Density Estimation II 2.04.205 Discriminative Approaches (5 weeks)
More information1. Estimation equations for strip transect sampling, using notation consistent with that used to
Web-based Supplementary Materials for Line Transect Methods for Plant Surveys by S.T. Buckland, D.L. Borchers, A. Johnston, P.A. Henrys and T.A. Marques Web Appendix A. Introduction In this on-line appendix,
More informationI CP Fusion Techniques for 3D Face Recognition
I CP Fusion Techniques for 3D Face Recognition Robert McKeon University of Notre Dame Notre Dame, Indiana, USA rmckeon@nd.edu Patrick Flynn University of Notre Dame Notre Dame, Indiana, USA flynn@nd.edu
More information