Determination of 3-D Image Viewpoint Using Modified Nearest Feature Line Method in Its Eigenspace Domain


LINA+, BENYAMIN KUSUMOPUTRO++
+ Faculty of Information Technology, Tarumanagara University, Jl. Letjen S. Parman 1, Jakarta 11440, INDONESIA
++ Faculty of Computer Science, University of Indonesia, Depok Campus 16424, INDONESIA

Abstract: Since the viewpoint is an important piece of information for a face recognition system, a viewpoint estimation system is developed to determine the pose position of 3-D human face images. The system projects an image of unknown viewpoint from the spatial domain into an eigenspace representation and calculates its distances to all feature lines connecting every pair of known viewpoints. We previously developed the Fully K-LT and Subset K-LT methods to transform images from the spatial domain into this eigenspace representation. In this paper, we propose a new method that modifies the Nearest Feature Line method by adding the feature lines obtained by projecting every feature point onto the other feature lines. The developed system is then applied to determine the pose position of 3-D human faces of Indonesian people in various positions and expressions. Using our method, it is shown that increasing the number of feature lines improves the recognition rate significantly, because it provides more feature information about an object. The experimental results show that the highest recognition rate is 96.64%, obtained with Subset K-LT and our Modified Nearest Feature Line method.

Key words: Viewpoint recognition system, Feature-line method, Karhunen-Loeve Transformation

1 Introduction
The growth of multimedia technology has triggered the need for 3-D face recognition systems. Face identification is an important problem in law enforcement and forensics, in authentication of access to buildings or automatic transaction machines, in searching for faces in video databases, in intelligent user interfaces, and elsewhere.
Such a face recognition system should ideally recognize 3-D human face images under varying conditions, such as viewpoint changes, lighting, expression, etc. [1]-[4]. Several methods, such as combined feature and template matching [2] and template matching using the Karhunen-Loeve transformation of a large set of face images [3], have been developed by many researchers, and good results have been obtained, especially for 2-D frontal or semi-frontal images. Since it has been argued that 3-D recognition can also be accomplished using linear combinations of as few as four or five 2-D viewpoint images [5, 6], many researchers are trying to recognize more general view positions of images that cover the entire 3-D viewing sphere. We have developed a 3-D face recognition system using a cylindrical structure of hidden-layer neural network [7]. However, since the viewpoint of a given image is a priori unknown, we had to assume that the face viewpoint of the given image was already known, which in turn gave the developed system a high recognition rate [8, 9]. Since these viewpoints

are important information for the 3-D face recognition system, we developed a pose-position, or viewpoint, estimation system to determine the pose position of a 2-D image correctly [10]. In this paper, we propose a new method to estimate the viewpoint of 3-D objects that can also be used as a subsystem in our 3-D face recognition system. The developed viewpoint estimation system consists of a viewpoint feature extraction subsystem, which defines the feature space of the system, and a viewpoint estimation subsystem, which classifies an input face of unknown viewpoint by comparing it with the viewpoints of faces stored in a gallery. In the viewpoint feature extraction subsystem, a feature space is developed by transforming every face in the spatial domain into a vector in the feature space. As described in [10], the authors have introduced two kinds of transformation methods, i.e., the Fully K-LT method and the Subset K-LT method. The Fully K-LT method is the Karhunen-Loeve transformation technique that transforms all of the model images into a single eigenspace, while in the Subset K-LT method, every group of images that lies at the same viewpoint is transformed into its own sub-eigenspace. With the latter method, we obtain as many sub-eigenspaces as there are viewpoint classes. In the viewpoint recognition subsystem, the Nearest Feature Line method [11] is used to classify images of unknown viewpoint by calculating their nearest distances to all of the feature points. The Nearest Feature Line (NFL) method assumes that we have at least two feature points for each viewpoint class; the line drawn between two such points can represent some of the variation of faces due to lighting, expression, etc. When the view position of an unknown face is to be determined, the face is first transformed into its eigenspace representation by the K-L transformation.
Classification is then determined by calculating the nearest distance of the transformed unknown point to all available feature lines in the eigenspace. We have shown that, using the conventional NFL technique, the estimation rate of the system is not high enough, and we therefore developed the Subset K-LT method to increase it. To increase the viewpoint estimation rate of the system further, we propose in this paper a new technique that increases the number of feature lines without adding any feature points. The additional feature lines are obtained by creating projection lines from each feature point to the other feature lines in the same class. This technique and its results are discussed in detail in the following sections.

2 Feature Extraction Subsystem
In this feature extraction subsystem, we use the Karhunen-Loeve transformation to develop a feature space with higher discrimination power. The Karhunen-Loeve transformation orthogonalizes the data, which in turn increases the separability of the extracted features and reduces their dimensionality [12]. This transformation is used to project every image in the face gallery into a feature point of smaller vector dimension. As discussed earlier in this paper, the transformation of the image vectors is accomplished with the two kinds of Karhunen-Loeve transformation methods. In the Fully K-LT method, an unknown image whose viewpoint is to be estimated is projected into a single eigenspace. When using the Subset K-LT method, however, we must first determine the transformation matrix of every viewpoint class of the images. Suppose we have a class of images with a viewpoint of 15°; then an eigenspace for 15° is constructed using all of the images in this viewpoint class.
This procedure is repeated until the sub-eigenspaces of all viewpoints of the images are completely constructed. Using the Subset K-LT method, therefore, an unknown image whose viewpoint is to be estimated must be projected into all of the available eigenspaces. With these two different methods, each image in the spatial domain is now represented as a feature point in a single eigenspace for the Fully K-LT method, or as a feature point in each of the sub-eigenspaces for the Subset K-LT method.
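To make the distinction concrete, the per-class organization of the Subset K-LT method described above can be sketched as follows. This is a minimal sketch under our own assumptions (the function names, array shapes, and the use of a plain covariance eigendecomposition are ours, not the authors' implementation):

```python
import numpy as np
from collections import defaultdict

def subset_klt_bases(images, viewpoints, k):
    """Build one sub-eigenspace (mean, top-k basis) per viewpoint class.

    images    : (M, d) array of flattened gallery images
    viewpoints: length-M sequence of viewpoint labels (e.g. angles)
    """
    groups = defaultdict(list)
    for img, v in zip(images, viewpoints):
        groups[v].append(img)
    bases = {}
    for v, imgs in groups.items():
        Z = np.stack(imgs)
        mu = Z.mean(axis=0)
        C = (Z - mu).T @ (Z - mu) / len(imgs)   # class covariance
        lam, E = np.linalg.eigh(C)              # eigenvalues in ascending order
        bases[v] = (mu, E[:, ::-1][:, :k])      # keep the top-k eigenvectors
    return bases

# Toy gallery: 8 images of dimension 5, two per viewpoint class
rng = np.random.default_rng(1)
images = rng.normal(size=(8, 5))
viewpoints = [0, 0, 15, 15, 30, 30, 45, 45]
bases = subset_klt_bases(images, viewpoints, k=2)

# An unknown image is then projected into every sub-eigenspace:
x = rng.normal(size=5)
features = {v: E.T @ (x - mu) for v, (mu, E) in bases.items()}
```

The Fully K-LT method corresponds to calling the same routine with a single group containing all gallery images.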

The standard principal component analysis of image data x(k) ∈ R^n, k = 0, 1, ..., m, is done by determining a set of hidden linear representation parameters y_1(k), ..., y_m(k), called factors or features. These features can then be used for linear least mean-squared-error reconstruction of the original data. If there is enough correlation between the observation variables x_i(k), then we can reconstruct the image with acceptable accuracy using a number of features m that is much smaller than the data dimension n. We often call n the superficial dimensionality of the image data and m its intrinsic dimensionality. The Karhunen-Loeve transformation is first performed by forming a base vector of the image in d dimensions, i.e., z_n = [z_1, z_2, ..., z_d]. The next step is to compute the average vector of the M images through:

  µ_z = (1/M) Σ_{n=1}^{M} z_n                          (1)

and determine the covariance matrix C_z:

  C_z = (1/M) Σ_{n=1}^{M} (z_n − µ_z)(z_n − µ_z)^T     (2)

From this covariance matrix we can derive the eigenvalues λ_z and the eigenvectors e_z. The eigenvectors e_z are orthonormal and the corresponding eigenvalues λ_z are nonnegative. Assuming that there are no repeated eigenvalues and that they are arranged in decreasing order, λ_1 > λ_2 > ... > λ_m, a transformation matrix is then constructed based on the importance of these eigenvalues.
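The computation in Eqs. (1)-(2) and the subsequent eigendecomposition can be sketched as follows. This is a minimal sketch on synthetic data; the variable names and the 95% cut-off are our illustrative assumptions:

```python
import numpy as np

def kl_basis(Z):
    """Karhunen-Loeve basis of image vectors Z, shape (M, d).

    Returns the mean (Eq. 1) and the eigenvalues/eigenvectors of the
    covariance matrix (Eq. 2), sorted by decreasing eigenvalue.
    """
    M = Z.shape[0]
    mu = Z.mean(axis=0)                       # Eq. (1)
    D = Z - mu
    C = D.T @ D / M                           # Eq. (2)
    lam, E = np.linalg.eigh(C)                # C is symmetric
    order = np.argsort(lam)[::-1]             # decreasing eigenvalues
    return mu, lam[order], E[:, order]

# Toy gallery: 10 images of dimension 6
rng = np.random.default_rng(0)
Z = rng.normal(size=(10, 6))
mu, lam, E = kl_basis(Z)

# Cumulative proportion of the eigenvalues (Eq. 5), used to choose k
alpha = np.cumsum(lam) / lam.sum()
k = int(np.searchsorted(alpha, 0.95)) + 1     # keep ~95% of the variance

# Projection into the eigenspace (Eq. 3) and reconstruction (Eq. 4)
Y = (Z - mu) @ E[:, :k]                       # feature points y_n
Z_rec = Y @ E[:, :k].T + mu                   # approximate reconstruction
```

With the full basis (k = d) the reconstruction is exact; truncating to the top k eigenvectors trades reconstruction accuracy for a lower-dimensional feature space.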
The transformation matrix e_z is then used to map the set of image vectors z_n to a set of feature vectors y_n in the eigenspace, through:

  y_n = e_z^T (z_n − µ_z)                              (3)

while the inverse reconstruction of the z_n vectors can be done through:

  z_n = e_z y_n + µ_z                                  (4)

The importance of each eigenvalue, which relates to its contribution to an optimal transformation matrix for a higher estimation rate, can be determined by computing the cumulative proportion of the eigenvalues:

  α_k = ( Σ_{i=1}^{k} λ_i ) / ( Σ_{j=1}^{p} λ_j )      (5)

The Karhunen-Loeve transformation matrix is then re-formed using this cumulative proportion, and equations (3) and (4) are recalculated to compute z_n and y_n.

3 Viewpoint Recognition Subsystem
The viewpoint recognition subsystem used in this viewpoint estimation system is based on feature-line distance calculation, for both Fully K-LT and Subset K-LT. However, each method has a different procedure for determining the viewpoint-class classification, which is discussed later. Each image in the spatial domain is projected as a point in the eigenspace domain, and since we have only limited variations of images in the gallery, any variation can be approximated by a point lying on one of the possible straight lines between the points in that eigenspace domain. The straight line between two points of the same class is called a feature line (FL) of that viewpoint class, and for each class we have:

  G_c = N_c (N_c − 1) / 2                              (6)

where G_c denotes the number of feature lines and N_c the number of feature points of the viewpoint class. Suppose there is a new image with unknown viewpoint to be estimated. Assume that the unknown image is transformed into the eigenspace domain as x, and that its projection onto the feature line through the previously known points x_1 and x_2 is p. The projection point p and its relation to x_1 and x_2 can be calculated through:

  p = x_1 + µ (x_2 − x_1)                              (7)

with:

  µ = ((x − x_1) · (x_2 − x_1)) / ((x_2 − x_1) · (x_2 − x_1))   (8)

The minimum distance is then calculated over all of the viewpoint classes through:

  d(x, p) = ‖x − p‖                                    (9)

and sorted in ascending order. The point x is then assigned to the viewpoint class with the minimum distance. To increase the viewpoint estimation rate of the system, we propose a new technique that increases the number of feature lines without adding any feature points. As already stated, the additional feature lines are obtained by creating projection lines from each feature point to the other feature lines in the same class, as illustrated in Fig. 1.

Fig. 1 The process of forming the feature lines: feature points x_1, x_2, x_3 and their projections onto the opposite feature lines

Assume we have three feature points x_1, x_2, and x_3. In the conventional Nearest Feature Line (NFL) method, we can form only three feature lines to capture some variation from these images. In our Modified Nearest Feature Line (M-NFL) technique, however, we add more feature lines by projecting each feature point onto all available feature lines: point x_1 is projected onto line x_2 x_3 to form the line x_1·x_2 x_3, point x_2 onto line x_1 x_3 to form the line x_2·x_1 x_3, and point x_3 onto line x_1 x_2 to form the line x_3·x_1 x_2. With these additional feature lines, the total number of feature lines in the M-NFL method becomes:

  G_c = N_c (N_c − 1)^2 / 2                            (10)

as a substitution for Eq. 6. Using this technique, every unknown image object in the spatial domain is first transformed into every eigenspace domain and projected as a point p onto all available feature lines through Eq. 7, and its nearest distance is calculated through Eq. 9.

Table 1.
Three data sets used in this experiment, with the percentage of the training/testing paradigm

Data Set | Train Images | Test Images | Train/Test %-age | Training viewpoints (deg) | Testing viewpoints (deg)
1 | 16 | 52 | 23.53% / 76.47% | 0, 60, 120, 180 | 0, 15, 30, ..., 180
2 | 20 | 52 | 27.78% / 72.22% | 0, 45, 90, 135, 180 | 0, 15, 30, ..., 180
3 | 28 | 52 | 35% / 65% | 0, 30, 60, 90, 120, 150, 180 | 0, 15, 30, ..., 180

4 Experimental Results and Analysis
The experiments are conducted using a 3-D face database of 4 Indonesian persons. The images are taken under four different expressions: neutral, smile, angry, and laugh. The 2-D images are generated from the 3-D human face images by gradually changing the viewpoint, which is varied successively from −90° to +90° at intervals of 15°. Part of the 3-D human face database used in this experiment can be seen in Fig. 2. The overall experimental data set is depicted in Table 1, together with the percentages of the training/testing paradigm. The training/testing percentage is 23.53% : 76.47% in Data#1, increases to 27.78% : 72.22% in Data#2, and to 35% : 65% in Data#3. We hope that these experiments give a clear picture of the minimum training percentage at which the system still produces a high estimation rate.

Fig. 2 Example of face images at rotation angles from −90° to +90° under the neutral, smile, angry, and laugh expressions

4.1 Viewpoint estimation using the Fully K-LT method
In the viewpoint estimation system using the Fully K-LT, the image x of unknown viewpoint is projected as p onto all available feature lines, and the minimum distance between x and its projected point p is calculated. Suppose the minimum distance lies on the line connecting two points x_1 and x_2. If those two points belong to the same viewpoint class, then x is assigned to the same viewpoint class as x_1 and x_2. However, if x_1 and x_2 belong to different viewpoint classes, then x is assigned to the viewpoint class in between those to which x_1 and x_2 belong.

Table 2. Recognition rate in determining the viewpoint using the Fully K-LT method

Data set | 90% | 95% | 99% (cumulative eigenvalue percentage)
1 | 26.92% | 26.44% | 26.92%
2 | 42.31% | 42.31% | 42.79%
3 | 28.85% | 30.29% | 29.33%

The results of the experiments using the Fully K-LT method are presented in Table 2. As can be seen in this table, the highest estimation rates of the Fully K-LT are 26.92% for Data#1, 42.79% for Data#2, and 30.29% for Data#3. These results show that the conventional NFL method cannot determine the viewpoint of the unknown images with a high estimation rate.

4.2 Viewpoint estimation using the Subset K-LT method
Let the unknown image in the spatial domain be transformed into an unknown point x in every defined viewpoint class in its eigenspace, using each Subset K-LT transformation matrix. Then the transformed point x is projected onto every available line; let the projection of the unknown point x be p_r in each viewpoint class, with −90° < r < +90°.
The classification is then determined by calculating the minimum distance between point x and p_r, ‖x − p_r‖, using Eq. 9. As can be seen in the experimental data set, some of the unknown images have viewpoints lying between those of the known training set. Consequently, to obtain a higher estimation rate of the system, we classify the unknown point x into one of three categories of viewpoints, i.e., viewpoint-class r; between viewpoint-class (r − t) and viewpoint-class r; or between viewpoint-class r and viewpoint-class

(r + t), where t denotes the viewpoint interval between classes. If the minimum distance ‖x − p_r‖ is nearly 0, then x is definitely assigned to viewpoint-class r. However, if the distance ‖x − p_r‖ has a higher value, the unknown input image cannot be definitely assigned to viewpoint-class r; in other words, x must be assigned either between viewpoint-class (r − t) and viewpoint-class r, or between viewpoint-class r and viewpoint-class (r + t). In that case, another algorithm, illustrated in Fig. 3, is used to classify the unknown viewpoint of the input image. To classify the unknown viewpoint precisely, we calculate the minimum distance of the projected point for viewpoint-class (r − t), ‖x − p_{r−t}‖, and compare it with that for viewpoint-class (r + t), ‖x − p_{r+t}‖. If ‖x − p_{r−t}‖ is lower than ‖x − p_{r+t}‖, then x is assigned between viewpoint-class (r − t) and viewpoint-class r; otherwise, x is assigned between viewpoint-class r and viewpoint-class (r + t).

Fig. 3 Comparison of minimum distances in the classification of an unknown point x with Subset K-LT

Table 3. Recognition rate in determining the viewpoint using the Subset K-LT method

Data set | 90% | 95% | 99% (cumulative eigenvalue percentage)
1 | 46.63% | 47.12% | 48.08%
2 | 85.09% | 86.06% | 83.65%
3 | 90.38% | 89.42% | 89.90%

The results of the experiments using the Subset K-LT method are presented in Table 3. As can be seen from Table 3, the estimation rate of the system is 48.08% for Data#1, increases up to 86.06% for Data#2, and reaches its highest value of 90.38% for Data#3. These results show that even with a lower training/testing percentage, as in Data#2, the recognition rate of the system using Subset K-LT is high enough to determine the viewpoint of the unknown image.
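The comparison rule illustrated in Fig. 3 can be sketched as follows. This is a hedged reading of the procedure: the paper gives no numeric threshold for "nearly 0", so eps is our assumption, and we assign x toward the nearer of the two neighbouring classes:

```python
def assign_viewpoint(d_r, d_r_minus_t, d_r_plus_t, r, t, eps=1e-3):
    """Decide the viewpoint category for an unknown point x.

    d_r        : minimum distance ||x - p_r|| to class r (Eq. 9)
    d_r_minus_t: minimum distance to class (r - t)
    d_r_plus_t : minimum distance to class (r + t)
    Returns (r, r) for a definite class, or an 'in between' interval.
    """
    if d_r <= eps:                  # nearly 0: definitely class r
        return (r, r)
    # otherwise compare the two neighbouring classes and take the nearer side
    if d_r_minus_t < d_r_plus_t:
        return (r - t, r)           # between class (r - t) and class r
    return (r, r + t)               # between class r and class (r + t)

# e.g. an unknown face nearest to the 30-degree class but leaning toward 45:
interval = assign_viewpoint(d_r=0.8, d_r_minus_t=1.5, d_r_plus_t=0.9, r=30, t=15)
```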
Compared with the Fully K-LT method, the estimation rate of the Subset K-LT method is significantly higher.

5 Addition of Feature Lines for a Higher Estimation Rate
As mentioned above, we propose a new technique that increases the number of feature lines without adding any feature points. Since the Fully K-LT method is not appropriate for determining the viewpoint of unknown images, in these experiments we use only the Subset K-LT method as the platform for applying our M-NFL technique. The results of the experiments using the Subset K-LT method with the Modified Nearest Feature Line technique are presented in Table 4.

Table 4. Recognition rate in determining the viewpoint using the Subset K-LT method and the Modified NFL technique

Data set | 90% | 95% | 99% (cumulative eigenvalue percentage)
1 | 48.08% | 48.56% | 48.56%
2 | 91.83% | 91.83% | 92.79%
3 | 96.15% | 96.15% | 96.64%

As can be seen in this table, the estimation rate is about 48.56% for Data#1 and increases up to 92.79% for Data#2 at 99% cumulative percentage of the used eigenvalues, while the highest estimation rate, 96.64%, is achieved with Data#3 at 99% cumulative percentage of eigenvalues. These results show that our Modified NFL technique increases the estimation rate of the developed system.
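The geometry behind Eqs. (7)-(10) can be sketched as follows. This is our illustrative reconstruction, not the authors' code: we read each added M-NFL line as joining a feature point to its projection on an opposite feature line, which reproduces the line count of Eq. (10):

```python
import numpy as np
from itertools import combinations

def project_to_line(x, x1, x2):
    """Projection p of x onto the feature line through x1 and x2 (Eqs. 7-8)."""
    d = x2 - x1
    mu = np.dot(x - x1, d) / np.dot(d, d)   # Eq. (8)
    return x1 + mu * d                      # Eq. (7)

def mnfl_lines(points):
    """All M-NFL feature lines for one viewpoint class.

    Starts from the Nc(Nc-1)/2 conventional NFL lines (Eq. 6), then adds,
    for every remaining point, its projection onto each of those lines,
    giving Nc(Nc-1)^2/2 lines in total (Eq. 10).
    """
    pts = [np.asarray(p, dtype=float) for p in points]
    lines = [(pts[i], pts[j]) for i, j in combinations(range(len(pts)), 2)]
    extra = []
    for i, j in combinations(range(len(pts)), 2):
        for k in range(len(pts)):
            if k in (i, j):
                continue
            proj = project_to_line(pts[k], pts[i], pts[j])
            extra.append((pts[k], proj))    # line from x_k to its projection
    return lines + extra

def nfl_distance(x, lines):
    """Minimum distance from x to any feature line (Eq. 9)."""
    x = np.asarray(x, dtype=float)
    return min(np.linalg.norm(x - project_to_line(x, a, b)) for a, b in lines)

pts = [np.array([0.0, 0.0]), np.array([2.0, 0.0]), np.array([1.0, 2.0])]
lines = mnfl_lines(pts)
assert len(lines) == 3 * (3 - 1) ** 2 // 2  # Eq. (10): 6 lines for Nc = 3
```

Classification then amounts to evaluating nfl_distance per viewpoint class and taking the class with the smallest value.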

Table 5 compares the NFL and M-NFL techniques more rigorously. The second column shows the percentage of images classified correctly by NFL that are also classified correctly by our M-NFL method, while the third column shows the percentage classified incorrectly by NFL but correctly by M-NFL. The fourth column shows the inverse of the third column: correct NFL classifications that are classified incorrectly by M-NFL. For every data set and every cumulative percentage, the percentage classified incorrectly by NFL but correctly by M-NFL (third column) is always higher than the inverse condition (fourth column). Table 5 also shows that the percentage classified correctly by both the NFL and M-NFL methods increases with the training/testing percentage. A higher estimation rate with a higher training/testing percentage is also seen in the third column, while the fourth column remains stable.

Table 5. Comparison of classification percentages in determining the viewpoint with the NFL and M-NFL methods

Cumulative % / Data set | Correct NFL, Correct M-NFL | Incorrect NFL, Correct M-NFL | Correct NFL, Incorrect M-NFL
90% Data#1 | 46.16% | 1.92% | 0.48%
90% Data#2 | 85.10% | 6.73% | 0%
90% Data#3 | 89.90% | 6.25% | 0.48%
95% Data#1 | 47.12% | 1.44% | 0%
95% Data#2 | 85.58% | 6.25% | 0.48%
95% Data#3 | 88.94% | 7.21% | 0.48%
99% Data#1 | 47.60% | 0.96% | 0.48%
99% Data#2 | 83.17% | 9.62% | 0.48%
99% Data#3 | 89.43% | 7.21% | 0.48%

This shows that increasing the training/testing percentage does not much affect the disadvantage of M-NFL, while it increases the advantage of the M-NFL method compared with the NFL method.
This comparison shows that the nearest-distance calculation through our method has a higher percentage of correctness, which means that the addition of feature lines improves the estimation rate of the viewpoint estimation system.

6 Conclusions
We have developed a viewpoint estimation system based on a viewpoint feature selection subsystem and a viewpoint estimation subsystem. The Fully K-LT and Subset K-LT methods are utilized, and based on the experimental results, the Subset K-LT method shows a higher estimation rate than the Fully K-LT method. To increase the estimation rate further, we developed a Modified NFL technique that increases the number of feature lines without increasing the number of points in the eigenspace domain. The experimental results show that the M-NFL technique with the Subset K-LT method, performed at 99% cumulative percentage of the used eigenvalues, achieves the highest estimation rate of about 97%, compared with only about 30% when using the NFL technique with the Fully K-LT method at the same cumulative percentage. This shows that our model is robust enough to determine the viewpoint of unknown images. The system is now being developed as a subsystem of our 3-D face recognition system.

ACKNOWLEDGEMENT
The authors would like to express their gratitude to the Ministry of Research and Technology of Indonesia for funding this research through RUT X under contract number 14.06/SK/RUT/2003.

References:
[1] R. Brunelli and T. Poggio, Face recognition through geometrical features, Proceedings of ECCV '92, Santa Margherita Ligure, 1992, pp. 792-800.
[2] I. Craw, D. Tock and A. Bennet, Finding face features, Proceedings of ECCV '92, Santa Margherita Ligure, 1992, pp. 93-96.
[3] M. Kirby and L. Sirovich, Application of the Karhunen-Loeve procedure for the

characterization of human faces, IEEE Trans. PAMI, Vol. 12, No. 1, 1990, pp. 103-108.
[4] M. Turk and A. Pentland, Face recognition using eigenfaces, Proc. IEEE CVPR '91, 1991, pp. 586-591.
[5] S. Ullman and R. Basri, Recognition by linear combinations of models, IEEE Trans. PAMI, Vol. 13, No. 10, 1991, pp. 992-1007.
[6] T. Poggio and S. Edelman, A network that learns to recognize three-dimensional objects, Nature, Vol. 343, No. 6255, 1990, pp. 263-266.
[7] B. Kusumoputro, Renny Isharini and A. Murni, Development of neural network with cylindrical structure of hidden layer and its application in 3-D recognition system, Proc. IECI CECI-2001, Vol. 6, 2001, pp. 24-27.
[8] B. Kusumoputro and A. Sulita, Genetic algorithms in optimization of cylindrical-hidden-layer neural network for 3-D object recognition system, Neural Network World, Vol. 10, No. 4, 2000, pp. 631-639.
[9] B. Kusumoputro, 3-D face recognition system using cylindrical-hidden-layer neural network and its optimization through genetic algorithms, Proc. IASTED Artificial Intelligence, 2002, pp. 118-123.
[10] R. Sripomo and B. Kusumoputro, Pose estimation system of 3-D human face using nearest feature lines in its eigenspace representation, IEEE Proc. of APCCAS, 2002, pp. 241-246.
[11] S.Z. Li and J. Lu, Face recognition using the nearest feature line method, IEEE Trans. on Neural Networks, Vol. 10, No. 2, 1999, pp. 439-443.
[12] I.T. Jolliffe, Principal Component Analysis, Springer-Verlag, 1986.