

S3-7  The 16th Symposium on Sensing via Image Information, Yokohama, June 2010

Local Independent Components Analysis-Based Facial Features Detection Method

M. Hassaballah 1,2, Tomonori KANAZAWA 3, Shinobu IDO 3, Shun IDO 1

1 Department of Computer Science, Ehime University, 790-8577, Japan
2 Department of Mathematics, Faculty of Science, South Valley University, Qena, Egypt
3 ecompute Corporation, 8-4, Matsuyama, Ehime 79-804, Japan
E-mail: m.hassaballah@ic.cs.ehime-u.ac.jp, ido@cs.ehime-u.ac.jp

Abstract  Detection of facial features such as the eyes, nose, and mouth is an important step for subsequent facial image analysis. This paper presents a method for detecting the eyes and nose in grey-scale images. Independent component analysis (ICA) is used to learn the appearance and shape of the facial features, and regions with a high response to the ICA basis vectors are detected as facial features. To further improve performance, local characteristics of the eyes and nose are applied in addition to ICA, and promising results are obtained in experiments on different databases.

1 Introduction

Facial feature detection plays an important role in many face-related applications, including face recognition, face validation, and facial expression recognition [1]. These applications need to detect the facial features robustly and efficiently. However, this is not an easy machine-vision task at all. The difficulty comes from high inter-personal variation (e.g. gender, race), intra-personal changes (e.g. pose, expression), and from acquisition conditions (e.g. lighting, image resolution). In spite of these difficulties, several methods have been proposed to detect certain facial features, especially the eye, as it is the most salient and stable feature of the human face.

Ryu and Oh [2] introduced a method based on eigenfeatures, derived from the eigenvalues and eigenvectors of the binary edge data set, and neural networks to detect the eye region. The eigenfeatures extracted from positive and negative training samples of the eye are used to train a multilayer perceptron (MLP) whose output indicates the degree to which a particular image patch contains an eye. An ensemble network consisting of a multitude of independent MLPs was used to enhance the performance of a single MLP. It was tested on 80 images without glasses from the ORL database, and its performance was 9.7% and 85.5% for the left and right eye respectively. The advantage is that it does not need a large training set, by taking advantage of eigenfeatures and a sliding window. Kroon et al. [3] address the problem of eye detection for the purpose of face matching in low and standard definition image and video content; they present a probabilistic eye detection method based on the well-known multi-scale local binary patterns (LBPs). A detailed survey of the most recent eye detection methods [4] concluded that the development of a general eye detection technique involves addressing many challenges, requires further theoretical developments, and is consequently of interest to many other problem domains in computer vision and beyond.

On the other hand, few methods address the nose detection problem, even though the nose is no less important than the eyes or other facial features. It is not much affected by facial expressions, and in several cases it is the only facial feature which is clearly visible during head motion. Most of the existing approaches detect the nose by relying on the prior detection of the eye centers, assuming the nose is located within a certain number of pixels below the line connecting the centers of the two eyes [5]. Other approaches detect the nose from the reflection of light on the nose tip, or by using projection methods [6]. However, the existing projection methods do not consider complex conditions such as illumination and pose variation, and thus they fail under these imaging conditions.

In [7], a method for facial landmark detection based on extracting oriented edges and constructing edge maps at two resolution levels is proposed. It achieves an average nose detection rate of 78% on 330 images from the Pictures of Facial Affect database; however, the method was not fully automatic and required manual classification of the located edge regions. In spite of the considerable amount of previous work on the subject, detection of facial features remains a challenging problem, because the shape and texture of facial features vary widely under changing expression, head pose, and illumination. Therefore, new robust methods are required.

In this paper, we present a method to detect facial features (namely the eye and nose). The method depends on higher-order statistics and second-order moments of the training data decorrelated by ICA basis vectors, as well as on local characteristics of the facial features. The novelty of the method is that ICA is applied to the parts of the face containing the facial features, rather than to the whole face image as is done in the face recognition field. This use of ICA can exploit the spatial localization of the ICA basis vectors, which helps to highlight salient regions in the data and provides a better probabilistic model of the training data. The method was evaluated on different databases under various imaging conditions.

The remainder of this paper is organized as follows. A brief background on ICA is given in Section 2. The outline of the proposed method for detecting facial features is presented in Section 3. Experimental results are reported in Section 4, and conclusions are given in Section 5.

2 Independent component analysis

Independent component analysis can be seen as an unsupervised learning method, based on higher-order statistics, for extracting independent sources from an observed mixture, where neither the mixing matrix nor the distribution of the sources is known. Formally, the classical ICA model is given as

$X = AS$    (1)

where $X$ denotes the observation matrix containing the linear mixtures in its rows, $X = [X_1, X_2, \ldots, X_m]^T$, $A$ is a nonsingular mixing matrix, and $S$ denotes the source matrix containing the statistically independent source vectors in its rows, $S = [S_1, S_2, \ldots, S_m]^T$. Under the assumptions that the sources are statistically independent and non-Gaussian (at most one of them may have a Gaussian distribution), an un-mixing matrix $W$ can be found by maximizing some measure of independence. In other words, under ideal conditions the estimated separation matrix $W$ is the inverse of the mixing matrix $A$:

$Y = WX$, $W = A^{-1}$, and $Y \approx S$    (2)

Generally, ICA aims to estimate the un-mixing matrix $W$ and thus to recover each hidden source as $S_k = W_k X$, where $W_k$ is the $k$-th row of $W$, without any prior knowledge. A number of algorithms exist for estimating the un-mixing matrix $W$ in the ICA model (1); the FastICA algorithm [8] is used in this work to estimate the ICA basis vectors. FastICA is based on a fixed-point iteration scheme for finding a maximum of the non-Gaussianity of $WX$, where non-Gaussianity is measured by maximization of the so-called negentropy. According to [8], and after a series of mathematical calculations, the ICs can be estimated by

$w_i^{+} = E\{z\, g(w_i^T z)\} - E\{g'(w_i^T z)\}\, w_i$    (3)

where $w_i$ is a row vector of $W$, $z$ is the whitened (preprocessed) data, and the nonlinearity $g$ can be any smooth function that gives robust approximations of negentropy, such as

$g(y) = \tanh(\alpha y)$, $1 \le \alpha \le 2$    (4)

The estimation process in (3) is repeated for a chosen $n$ (the number of basis vectors), orthogonalizing the matrix $W = (w_1, w_2, \ldots, w_n)^T$ at each iteration by $W \leftarrow (WW^T)^{-1/2}\, W$, where

$(WW^T)^{-1/2} = E\, \mathrm{diag}(d_1^{-1/2}, \ldots, d_n^{-1/2})\, E^T$    (5)

with $E$ the matrix of eigenvectors of $WW^T$ and $d_1, \ldots, d_n$ the corresponding eigenvalues. Convergence occurs when the absolute value of the dot product of the old and new $W$ is (almost) equal to one, i.e. $|\langle W_t, W_{t+1} \rangle| \approx 1$. More information and details about ICA and FastICA can be found in [8].
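For concreteness, the following is a minimal NumPy sketch of the FastICA fixed-point scheme of eqs. (3)-(5), with PCA whitening as the preprocessing step. The function name and interface are our own, not the authors' implementation, which follows [8].

```python
import numpy as np

def fastica_basis(X, n_components, alpha=1.0, max_iter=200, tol=1e-5):
    """Symmetric FastICA (a sketch of eqs. (3)-(5), not the paper's code).

    X: (n_samples, n_features) matrix, one flattened training patch per row.
    Returns an (n_components, n_features) matrix of ICA basis vectors.
    """
    rng = np.random.default_rng(0)
    # Preprocessing: center, then whiten via PCA so that cov(Z) ~ I.
    Xc = X - X.mean(axis=0)
    d, E = np.linalg.eigh(np.cov(Xc, rowvar=False))
    top = np.argsort(d)[::-1][:n_components]
    K = E[:, top] / np.sqrt(d[top])   # whitening matrix (n_features, n_components)
    Z = Xc @ K                        # whitened data z
    W = rng.standard_normal((n_components, n_components))
    for _ in range(max_iter):
        WZ = Z @ W.T                  # w_i^T z for every sample and every i
        G = np.tanh(alpha * WZ)       # g(y) = tanh(alpha * y), eq. (4)
        Gp = alpha * (1.0 - G ** 2)   # g'(y)
        # Fixed-point update, eq. (3): w <- E{z g(w^T z)} - E{g'(w^T z)} w
        W1 = (G.T @ Z) / Z.shape[0] - Gp.mean(axis=0)[:, None] * W
        # Symmetric orthogonalization, eq. (5): W <- (W W^T)^{-1/2} W
        s, U = np.linalg.eigh(W1 @ W1.T)
        W1 = U @ np.diag(s ** -0.5) @ U.T @ W1
        # Converged when old and new rows are (almost) collinear.
        done = np.abs(np.abs(np.sum(W1 * W, axis=1)) - 1.0).max() < tol
        W = W1
        if done:
            break
    return W @ K.T                    # basis vectors mapped back to pixel space
```

In practice a vetted implementation such as scikit-learn's FastICA would be preferable; the sketch only mirrors the update and the symmetric orthogonalization described above.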

3 Method Overview

3.1 Computing ICA feature subspaces

A set of 00 images of eyes and noses is used in the training step. In order to minimize the impact of noise and of variations in illumination, each image is preprocessed with norm normalization. The width $w$ and height $h$ of the training images are determined experimentally with respect to the face width $W_F$. Examples of eye and nose training images are shown in Fig. 1. The FastICA algorithm is then applied to estimate the ICA basis vectors, which form the feature subspace. The number of basis vectors for each subspace is optimized experimentally: 20 vectors are found sufficient for the eye subspace, while 30 vectors are sufficient for the nose subspace. The eye and nose subspaces are shown in Figs. 2 and 3 respectively. It is clear that the computed basis vectors of each subspace are spatially localized to the salient regions of the eye and nose, such as the iris, eyelids, nostrils, textures, and edges. This localization can make the performance robust to local noise and distortion.

(a) Eye training images, 60 x 30 pixels. (b) Nose training images, 70 x 50 pixels.
Fig. 1: Facial feature training images

Fig. 2: Eye subspace; the highest 20 ICA basis images

Fig. 3: Nose subspace; the highest 30 ICA basis images
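As a concrete illustration of this training step, the sketch below builds a feature subspace from cropped training patches. It uses scikit-learn's FastICA for brevity instead of the update sketched in Section 2; the function name and the patch-list interface are hypothetical.

```python
import numpy as np
from sklearn.decomposition import FastICA

def build_subspace(patches, n_components):
    """Learn an ICA feature subspace from grey-level training patches
    (a sketch of Section 3.1; e.g. 60x30 eye crops or 70x50 nose crops)."""
    # Flatten each patch and apply norm normalization, as in the paper's
    # training step, to reduce the impact of noise and illumination.
    X = np.stack([p.ravel().astype(float) for p in patches])
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    ica = FastICA(n_components=n_components, max_iter=1000, random_state=0)
    ica.fit(X)
    # Rows of components_ are the ICA basis (un-mixing) vectors; renormalize
    # them so the cosine-based similarity of Section 3.2 is well scaled.
    B = ica.components_
    return B / np.linalg.norm(B, axis=1, keepdims=True)

# Per the paper, 20 vectors suffice for the eye and 30 for the nose:
# eye_basis = build_subspace(eye_patches, 20)
# nose_basis = build_subspace(nose_patches, 30)
```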

3.2 Detection of facial features

Searching for facial features can be done either directly in the entire image, or by relying on the output of a face detector to indicate that the features are present. Unfortunately, searching the whole image is not suitable for real-time implementations and is more prone to errors. Therefore, a face detector [9] is applied first to locate the face, and the search for the regions containing the eyes and nose is carried out within the located face, which is normalized to a fixed size of $W_F \times W_F$ pixels. The normalized face is scanned to find the regions which might contain the eye/nose, and the region data are treated as $(w \times h)$-dimensional vectors. Each region (vector) is locally preprocessed using norm normalization, as in the training step. The subspace corresponding to a feature point vector in the $(w \times h)$-dimensional facial feature space can be expressed as a linear subspace spanned by multiple ICA vectors. The projection angle $\theta$ of an input vector projected onto the ICA subspace represents the extent to which the input vector is analogous to the feature point vector. For verification, the value of $\theta$, specifically $\cos^2(\theta)$ between the input vector and each feature point's subspace (ICA basis vectors), is obtained. This angle measure is invariant under linear changes in the contrast of the image; furthermore, the cosine similarity measure has previously been found effective for face processing. The eye or nose region is the input vector that falls into its subspace with the smallest $\theta$, i.e. the highest similarity. The similarity $S_\theta(R_j, \mathrm{ICAVectors})$ between a region $R_j$, $j = 1, 2, \ldots, M$ (the number of candidate regions), and the ICA basis vectors is calculated using $\cos^2(\theta_j)$ of the projection component:

$S_\theta^j = \cos^2 \theta_j = \sum_{i=1}^{n} \langle V, \mathrm{BaseVector}_i \rangle^2 \,/\, \|V\|^2$    (6)

where $n$ is the number of basis vectors that form the feature subspace and $\langle V, \mathrm{BaseVector}_i \rangle$ is the inner product between the input vector $V$ (representing the $j$-th candidate region) and the $i$-th basis vector of the ICA subspace. The region $R_k$ with the highest similarity is declared the target facial feature region.

3.3 Local characteristics of facial features

The performance of the method described in Section 3.2 can be improved significantly if local characteristics of the eye and nose are used alongside ICA. For the eye, a variance filter is used. To construct this eye variance filter, a set of 30 left- and right-eye images of different persons, of the same size as the eye training images (60 x 30 pixels), is selected randomly. Each eye image is divided into 3 x 3 non-overlapping subblocks of size 20 x 10 pixels, so that each subblock has its own distinctive features, as shown in Fig. 4. The variance of each subblock is calculated, and the eye variance filter $F_e$ is then constructed by averaging the subblock variances over all 30 eye images:

$F_e = \frac{1}{N} \sum_{j=1}^{N} \sigma_j$    (7)

where $\sigma_j$ is the variance filter of the $j$-th eye image and $N$ is the number of eye images used ($N = 30$ in this work). To detect the region that most probably contains an eye, the variance vector of each candidate region of the face is calculated in the same manner as for the eye variance filter $F_e$, and the correlation between the region's variance vector and the eye variance filter (7) is calculated using

$R(\sigma, F_e) = \dfrac{E[(\sigma - E(\sigma))(F_e - E(F_e))]}{\sqrt{D(\sigma)\, D(F_e)}}$    (8)

where $\sigma$ and $F_e$ are the concatenated vectors of subblock variances of the eye candidate region and of the filter, respectively. All regions which respond strongly to the eye variance filter are selected to generate a small list of possible eye region pairs; if the response of a candidate region to $F_e$ is greater than an expected threshold (e.g. $T = 4$), the candidate region might contain an eye. The regions selected by the eye variance filter are then verified using the ICA method of Section 3.2, so that only the two regions representing the left and right eye are retained.

Fig. 4: Variance filter model

On the other hand, for the nose three local characteristics are used, as follows. (i) Similarity of both sides: the left and right sides of the nose are similar in a front-view face, as shown in Fig. 5(a); this similarity can be measured using the Euclidean distance between the two sides. (ii) Dark-White-Dark (DWD) property: the lower part of the nose region is characterized by two dark nostrils and a light subregion caused by the reflection of light on the nose, as shown in Fig. 5(b); this property can be identified from the average grey intensity in each subregion, where the average in the two nostril subregions is lower than that of the lighter middle subregion containing the nose tip. (iii) Variation between the lower and upper parts: when the face is rotated by some degrees, the first two properties disappear, and the only clear property is the variation between the lower and upper parts, as shown in Fig. 5(c); this variation can be measured by the variance of each part. Based on this analysis, we search among the ten regions ranked highest by the ICA method for a region which satisfies properties (i)-(iii). Note that for eye detection the local characteristics (variance filter) are applied before the ICA method, while for nose detection the opposite order is used.

Fig. 5: The local characteristics of the nose region
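The detection-stage computations above can be summarized in a short sketch: the squared-cosine similarity of eq. (6) and the variance-filter response of eqs. (7) and (8). The helper names, the patch shapes, and the assumption that the basis rows are orthonormal (so that the projection sum equals cos² θ) are ours, not details fixed by the paper.

```python
import numpy as np

def cos2_similarity(region, basis):
    """Eq. (6): sum_i <V, b_i>^2 / ||V||^2 for a candidate region against
    ICA basis vectors stacked in the rows of `basis` (assumed unit-norm;
    strictly, cos^2(theta) also assumes mutually orthogonal rows)."""
    v = region.ravel().astype(float)
    v /= np.linalg.norm(v)            # local norm normalization, as in training
    return float(np.sum((basis @ v) ** 2))

def variance_vector(patch, block=(10, 20)):
    """Concatenated subblock variances: a 60x30 eye patch (30 rows, 60 cols)
    split into a 3x3 grid of subblocks of 10 rows x 20 columns
    (i.e. 20 x 10 pixels in the paper's w x h notation)."""
    bh, bw = block
    h, w = patch.shape
    return np.array([patch[r:r + bh, c:c + bw].var()
                     for r in range(0, h, bh)
                     for c in range(0, w, bw)])

def filter_response(region, F_e):
    """Eq. (8): correlation between a region's variance vector and the eye
    variance filter F_e, itself the mean of eq. (7) over N training eyes:
    F_e = np.mean([variance_vector(e) for e in eye_images], axis=0)."""
    v, f = variance_vector(region), np.asarray(F_e, float)
    v, f = v - v.mean(), f - f.mean()
    return float((v @ f) / (np.linalg.norm(v) * np.linalg.norm(f)))
```

Following Section 3.3, eye candidates whose filter response exceeds the threshold would be re-ranked by cos2_similarity, while for the nose the ten best cos2_similarity windows would be re-checked against the local properties (i)-(iii).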

4 Experimental Results

In order to find the optimum number of ICA basis vectors for both feature subspaces, the proposed method was evaluated on the XM2VTS database, as shown in Fig. 6. The highest successful detection rate for the eye, 93.%, is achieved when the number of ICA components is 20, while for the nose a 95.8% detection rate is achieved at 30 vectors; beyond these values the marginal benefit slows. There is also a trade-off between running time and ICA dimension: fewer basis vectors require less storage and give faster matching, hence 20 and 30 vectors are sufficient. Therefore, the number of ICA basis vectors is set to the highest 20 and 30 basis vectors for the eye and nose subspaces respectively.

The proposed ICA method is also evaluated on three other databases: 500 images from FERET, BioID, and JAFFE. In this setting, the facial features are detected based on the highest response to the feature subspace. The results of this experiment are reported in Table 1. Because of noise in the images due to lighting conditions or facial expressions, the detected region is not always accurate, and thus the overall successful detection rate (i.e. the proportion of images with a correctly detected eye/nose region relative to the whole database) is not very high, especially on the BioID and JAFFE databases. The best results are obtained on the XM2VTS database, because its image quality is very high, the illumination is always uniform, and the image scale is fixed.

The performance of the ICA method improves significantly when the local characteristics of the eye and nose are used alongside ICA, as shown in Table 2. On some databases, such as XM2VTS, an improvement of more than 5% is achieved, while on the BioID database the improvement is not significant, owing to the severe conditions in that database. Compared with other methods, the proposed method achieves an average eye detection rate of 94.3% on about 4000 images, whereas [2] achieved detection rates of 9.7% and 85.5% for the left and right eye on 80 images without glasses from the ORL database; for the nose it achieves an average detection rate of 96%, outperforming [7], which achieved an average detection rate of 78% on 330 images from the Pictures of Facial Affect database. Examples of successful results are given in Fig. 7. The main advantages of the proposed method are its simplicity and speed: the average execution time on a PC with a Core(TM)2 Duo CPU, 2.4 GHz, and 4 GB RAM is less than 80 msec. Moreover, unlike existing methods, nose detection does not depend on prior eye detection.
Future research will focus on detecting fiducial points of the eye and nose, such as the eye corners, iris, nostrils, and nose tip, within the detected regions, as well as the mouth corners.

Fig. 6: Selection of the optimum number of basis vectors

Table 1: Performance of ICA alone on different databases

  Feature   XM2VTS   FERET   BioID   JAFFE
  Eye       93.%     9.3%    87%     85.3%
  Nose      95.8%    94.3%   9.8%    9.5%

Table 2: Performance of ICA combined with local characteristics

  Feature   XM2VTS   FERET   BioID   JAFFE
  Eye       98.4%    97.%    9.6%    90.%
  Nose      99%      96%     9.3%    96.%

Fig. 7: Examples of successful results

5 Conclusions

In this paper, an automatic method for detecting the eye and nose locations in facial images, based on the response of face regions to feature subspaces, is presented. The eye and nose subspaces are estimated using independent component analysis basis vectors. To further improve performance, we proposed a subregion-based framework that depends on the local characteristics of the facial feature regions. The efficiency of the method was evaluated on different databases covering a variety of imaging conditions. Experimental results show that the proposed method can accurately detect the eye/nose with a detection rate comparable to that of existing methods.

References

[1] Asteriadis S., Nikolaidis N., Pitas I., Facial feature detection using distance vector fields, Pattern Recognition, 42, 1388-1398, 2009.
[2] Ryu Y.-S., Oh S.-Y., Automatic extraction of eye and mouth fields from a face image using eigenfeatures and ensemble networks, Applied Intelligence, 17, pp. 171-185, 2002.
[3] Kroon B., Maas S., Boughorbel S., Hanjalic A., Eye localization in low and standard definition content with application to face matching, Computer Vision and Image Understanding, 113(8), 921-933, 2009.
[4] Hansen D. W., Ji Q., In the eye of the beholder: A survey of models for eyes and gaze, IEEE Trans. on Pattern Analysis and Machine Intelligence, 32(3), pp. 478-500, 2010.
[5] Sankaran P., Gundimada S., Tompkins R.C., Asari V.K., Pose angle determination by faces, eyes and nose localization, IEEE CVPR'05, 6-67, 2005.
[6] Campadelli P., Lanzarotti R., Fiducial point localization in color images of face foregrounds, Image and Vision Computing, 22, 863-872, 2004.
[7] Gizatdinova Y., Surakka V., Feature-based detection of facial landmarks from neutral and expressive facial images, IEEE Trans. on Pattern Analysis and Machine Intelligence, 28(1), 135-139, 2006.
[8] Hyvarinen A., Oja E., Independent component analysis: algorithms and applications, Neural Networks, 13, pp. 411-430, 2000.
[9] Viola P., Jones M. J., Robust real-time face detection, International Journal of Computer Vision, 57(2), pp. 137-154, 2004.

Acknowledgments

The authors would like to acknowledge the financial support of the Egyptian government for this work under Grant No. /3/0-2006-2007. The authors would also like to thank the owners of the XM2VTS, FERET, BioID, and JAFFE databases used in the experiments of this work.