Local Quaternary Patterns and Feature Local Quaternary Patterns

Jiayu Gu and Chengjun Liu
The Department of Computer Science, New Jersey Institute of Technology, Newark, NJ 07102, USA

Abstract - This paper presents a new local texture descriptor, Local Quaternary Patterns (LQP), and its extension, Feature Local Quaternary Patterns (FLQP). The LQP, which encodes four relationships of local texture, includes more local texture information than the Local Binary Patterns (LBP) and Local Ternary Patterns (LTP). The FLQP, which encodes both local and feature information, is expected to perform better than the LQP for texture description and pattern recognition. To reduce the size of the feature dimensions and histograms of both LQP and FLQP, a new coding scheme is proposed that splits the LQP and FLQP into two binary codes each: the upper and lower binary codes. As a result, the total number of possible values of the split LQP and FLQP is reduced to 512. The feasibility of the proposed LQP and FLQP methods is demonstrated on an eye detection problem. Experimental results using the BioID database show that both the FLQP and the LQP methods achieve better performance than the feature LTP, the LTP, the feature LBP, and the LBP methods. Specifically, the FLQP method achieves the highest eye detection rate among all the competing methods.

Keywords: Local Quaternary Patterns (LQP), Feature Local Quaternary Patterns (FLQP), Local Binary Patterns (LBP), Feature Local Binary Patterns (FLBP), Local Ternary Patterns (LTP)

1 Introduction

Local Binary Patterns (LBP) [1] has recently become a popular method for texture description in content-based image search and for feature extraction in pattern recognition and computer vision. The most important properties of the LBP operator are its tolerance to illumination variation and its computational simplicity, which make it possible to analyze real-world images in real time. LBP has been widely applied in many applications, such as face recognition [2-4], face detection [5], [6], and facial expression analysis [7-10].

Tan and Triggs [2] argued that the original LBP tends to be sensitive to noise, especially in near-uniform image regions, because it thresholds exactly at the value of the central pixel. To solve this problem, they proposed 3-valued codes, called Local Ternary Patterns (LTP). In LTP, neighbor pixels are compared with an interval [-r, +r] around the value of the center pixel. A neighbor pixel is assigned 1, 0, or -1 if its value is above +r, in the interval [-r, +r], or below -r, respectively. Because the radius r does not change with the gray scale, LTP is no longer a strictly gray-scale invariant texture description and is less tolerant to illumination variation than LBP. LTP has 6561 possible values, which not only poses a computational challenge but also leads to sparse histograms. To solve these problems, a coding scheme is introduced to split an LTP code into two binary codes, a positive one (PLTP) and a negative one (NLTP), so that the total number of possible values of the two split binary codes is reduced to 512. Some experiments in [2], [10] show that LTP and LBP achieve similar results, although LTP doubles the size of the feature dimensions and histograms and has a higher computational cost than LBP.

To improve upon the performance of LTP, we present in this paper a new local texture descriptor, Local Quaternary Patterns (LQP), and its extension, Feature Local Quaternary Patterns (FLQP). The LQP encodes four relationships of local texture, and therefore includes more local texture information than the LBP and the LTP.
To reduce the size of the feature dimensions and histograms of LQP, a coding scheme is introduced to split each LQP code into two binary codes, the upper LQP (ULQP) and the lower LQP (LLQP), so that the number of possible LQP values is reduced to 512. We [11] have introduced a new Feature Local Binary Patterns (FLBP) method to improve upon the LBP approach. In this paper, we further extend LQP to FLQP and demonstrate that FLQP improves upon LQP and other competing methods, such as LBP, FLBP, LTP, and Feature LTP (FLTP). The FLQP, which encodes both local and feature information, is expected to perform better than the LQP for texture description and pattern analysis. We further show that the FLQP code can be split into two binary codes as well, the upper FLQP (UFLQP) and the lower FLQP (LFLQP). To demonstrate the feasibility of the proposed LQP and FLQP methods, we apply them to eye detection on the BioID database. Experimental results show that both FLQP and LQP achieve better eye detection performance than FLTP, LTP, FLBP, and LBP, and that the FLQP method has the best performance among all the methods.
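For readers who prefer code, the following minimal Python sketch (ours, not part of the paper; the neighbor ordering and the sample gray levels are assumptions) illustrates the LBP encoding and the LTP encoding with its PLTP/NLTP split described above. The formal definitions follow in Section 2.

```python
import numpy as np

# Assumed clockwise labeling of the 8 neighbors of the center of a 3x3 patch
# (the paper labels neighbors 0..7 in Fig. 1; the exact order is not given here).
NEIGHBOR_OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                    (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_code(patch):
    """LBP of a 3x3 patch: bit i is 1 if neighbor i >= center gray level."""
    gc = patch[1, 1]
    bits = [1 if patch[1 + dy, 1 + dx] >= gc else 0
            for dy, dx in NEIGHBOR_OFFSETS]
    return sum(b << i for i, b in enumerate(bits))

def ltp_split(patch, r):
    """LTP of a 3x3 patch, returned as the split PLTP/NLTP binary codes."""
    gc = patch[1, 1]
    pltp = nltp = 0
    for i, (dy, dx) in enumerate(NEIGHBOR_OFFSETS):
        g = patch[1 + dy, 1 + dx]
        if g >= gc + r:          # ternary value +1 -> bit in PLTP
            pltp |= 1 << i
        elif g <= gc - r:        # ternary value -1 -> bit in NLTP
            nltp |= 1 << i
    return pltp, nltp

# Toy 3x3 neighborhood (values are ours) with center 40 and r = 5,
# matching the setup of the paper's Fig. 2 example.
patch = np.array([[48, 42, 39],
                  [36, 40, 45],
                  [30, 44, 50]])
print(bin(lbp_code(patch)))
print([bin(c) for c in ltp_split(patch, r=5)])
```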

2 Local Binary Patterns and Local Ternary Patterns

Before we introduce our Local Quaternary Patterns (LQP) and Feature Local Quaternary Patterns (FLQP), we briefly review LBP and LTP. LBP defines a gray-scale invariant texture description by comparing a center pixel, used as a threshold, with the pixels in its local neighborhood [1]. Specifically, for a 3 x 3 neighborhood of a pixel p = [x, y]^t, each neighbor is labeled by a number from 0 to 7 as shown in Fig. 1. The neighbors of the pixel p may thus be defined as follows:

N(p, i) = [x_i, y_i]^t, \quad i = 0, 1, 2, \ldots, 7    (1)

where i is the number used to label a neighbor. The LBP code of a pixel p(x, y) is calculated as follows:

LBP(p) = \sum_{i=0}^{7} 2^i \, S_{lbp}\{G[N(p, i)], G(p)\}    (2)

where G(p) and G[N(p, i)] are the gray levels of the pixel p and its neighbor N(p, i), respectively. S_{lbp} is a threshold function defined as follows:

S_{lbp}(g, g_c) = \begin{cases} 1, & \text{if } g \ge g_c \\ 0, & \text{otherwise} \end{cases}    (3)

Fig. 1 The 3 x 3 neighborhood of a pixel p and the labels of its neighbors

Tan and Triggs proposed the Local Ternary Pattern (LTP) operator [2]. In LTP the threshold function S_{ltp} is defined as follows:

S_{ltp}(g, g_c, r) = \begin{cases} 1, & \text{if } g \ge g_c + r \\ 0, & \text{if } g_c - r < g < g_c + r \\ -1, & \text{if } g \le g_c - r \end{cases}    (4)

where r is the radius of the interval around the grey level of the central pixel. Fig. 2 shows an example of the computation of the LTP. The grey level of the central pixel is 40 and r is 5. A neighbor pixel is assigned 1, 0, or -1 if its grey level is greater than or equal to 45, between 36 and 44, or less than or equal to 35, respectively. The total number of possible LTP codes is 3^8 = 6561, which leads to a large feature dimension and sparse histograms of the LTP codes. To solve this problem, an LTP code is split into two binary codes, the positive and negative halves, as shown in Fig. 2. The positive half of LTP (PLTP) is obtained by replacing -1 with 0. The negative half of LTP (NLTP) is obtained by first replacing 1 with 0 and then changing -1 to 1. Thus an LTP code can be represented by two binary codes, and the total number of split LTP codes is reduced to 512.

Fig. 2 Computing the LTP and splitting it into two binary codes, PLTP and NLTP

3 Local Quaternary Patterns

We now present our new Local Quaternary Patterns (LQP), which encodes four relationships of local texture and therefore includes more local texture information than LBP and LTP. The threshold function of LQP is defined using two binary digits as follows:

S_{lqp}(g, g_c, r) = \begin{cases} 11, & \text{if } g \ge g_c + r \\ 10, & \text{if } g_c \le g < g_c + r \\ 01, & \text{if } g_c - r \le g < g_c \\ 00, & \text{if } g < g_c - r \end{cases}    (5)

where r is the radius of the interval around the value of the central pixel and may be defined as follows:

r = c + \tau \, g_c    (6)

where c is a constant and \tau is a parameter that controls the contribution of g_c to r. To reduce the total number of codes, an LQP code can be split into two binary codes, the upper and lower halves. The upper half of LQP (ULQP) is obtained by extracting the first digit of the LQP code, and the lower half of LQP (LLQP) is obtained by extracting the second digit of the LQP code. Thus the total number of split LQP codes is reduced to 512. From Eq. (5) we can derive the threshold functions of ULQP and LLQP, S_{ulqp} and S_{llqp}, which may be formulated as follows:

S_{ulqp}(g, g_c, r) = S_{lbp}(g, g_c)    (7)

S_{llqp}(g, g_c, r) = \begin{cases} 1, & \text{if } g \ge g_c + (-1)^{1 - S_{lbp}(g, g_c)} \, r \\ 0, & \text{otherwise} \end{cases}    (8)

The threshold function of ULQP, S_{ulqp}, is equal to the threshold function of LBP and does not depend on r.
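As an illustration of Eqs. (5)-(8), the following sketch (ours; the neighbor ordering and sample gray levels are assumptions) assigns the two-bit LQP value to each neighbor of a 3 x 3 patch and accumulates the first bits into a ULQP code and the second bits into an LLQP code.

```python
import numpy as np

NEIGHBOR_OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                    (1, 1), (1, 0), (1, -1), (0, -1)]  # assumed 0..7 labeling

def s_lqp(g, gc, r):
    """Two-bit LQP threshold of Eq. (5), returned as (upper_bit, lower_bit)."""
    if g >= gc + r:
        return 1, 1          # code 11
    if g >= gc:
        return 1, 0          # code 10
    if g >= gc - r:
        return 0, 1          # code 01
    return 0, 0              # code 00

def lqp_split(patch, r):
    """Split LQP of a 3x3 patch into the ULQP and LLQP binary codes."""
    gc = patch[1, 1]
    ulqp = llqp = 0
    for i, (dy, dx) in enumerate(NEIGHBOR_OFFSETS):
        u, l = s_lqp(patch[1 + dy, 1 + dx], gc, r)
        ulqp |= u << i       # upper bit equals the LBP bit (Eq. 7)
        llqp |= l << i       # lower bit follows Eq. (8)
    return ulqp, llqp

patch = np.array([[48, 42, 39],
                  [36, 40, 45],
                  [30, 44, 50]])   # toy values, center 40
ulqp, llqp = lqp_split(patch, r=5)
print(bin(ulqp), bin(llqp))
```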

The ULQP and LLQP are therefore defined as follows:

ULQP(p) = LBP(p)    (9)

LLQP(p) = \sum_{i=0}^{7} 2^i \, S_{llqp}\{G[N(p, i)], G(p), r\}    (10)

Note that the ULQP is the same as the LBP. Fig. 3 shows an example of the computation of the LQP. The grey level of the central pixel is 40 and r is 5. The ULQP code is 1111001, which is the same as the LBP code. For LLQP, a neighbor pixel is assigned 1 if its grey level is greater than or equal to 45, or if it is less than 40 and greater than or equal to 35; otherwise it is assigned 0.

Fig. 3 Computing the LQP and splitting it into two binary codes, ULQP and LLQP

4 Feature Local Quaternary Patterns

We [11] have introduced a new Feature Local Binary Patterns (FLBP) method. FLBP generalizes the LBP approach by introducing feature pixels, which may be broadly defined as, for example, the edge pixels or the intensity peaks or valleys in an image. FLBP, which encodes both local and feature information, has been shown to be more effective than LBP for texture description and pattern recognition tasks such as eye detection. In this paper, we extend LQP to FLQP. Next we briefly review the concepts of the distance vector [12] and the FLBP method, and then introduce our FLQP method.

In a binary image, each pixel assumes one of two discrete values: 0 or 1. Pixels of value 0 are called background pixels, and pixels of value 1 are called feature pixels. Let p and q represent a pixel and its nearest feature pixel in a binary image, respectively. The distance vector of p pointing to q is defined below:

dv(p) = q - p, \quad q = \arg\min_{r \in F} \delta(p, r)    (11)

where F is the set of feature pixels of the binary image and \delta is a distance metric. FLBP is defined on the concepts of the True Center (TC), which is the center pixel of a given neighborhood, and the Virtual Center (VC), which is a pixel used to replace the center pixel of a given neighborhood. The TC, which may be any pixel on the path pointed to by dv(p) from p to q, is defined below:

C_t(p) = p + \alpha_t \, dv(p)    (12)

where \alpha_t \in [0, 1] is a parameter that controls the location of the TC. The VC, which may also be any pixel on the path pointed to by dv(p) from p to q, is defined below:

C_v(p) = p + \alpha_v \, dv(p)    (13)

where \alpha_v \in [0, 1] is a parameter that controls the location of the VC. The general form of FLBP is defined below:

FLBP(p) = \sum_{i=0}^{7} 2^i \, S_{lbp}\{G[N(C_t(p), i)], G[C_v(p)]\}    (14)

where N(C_t(p), i), defined by Eq. (1), represents the neighbors of the TC, and G[C_v(p)] and G[N(C_t(p), i)] are the gray levels of the VC and the neighbors of the TC, respectively.

Next, we use the grayscale image shown in Fig. 4(a) to illustrate how to compute the FLBP code. We assume that the upper left pixel is at location (1, 1) in the Cartesian coordinate system with the horizontal axis pointing to the right and the vertical axis pointing downwards. As discussed before, feature pixels are broadly defined; here we define the feature pixels in Fig. 4(a) to be those with gray level greater than 80. Because the pixel at coordinates (6, 6) in Fig. 4(a) is the only pixel whose gray level is greater than 80, it becomes the only feature pixel in the binary image shown in Fig. 4(b), and this feature pixel is therefore the nearest feature pixel for all the pixels in Fig. 4(a).

Fig. 4 (a) A grayscale image used in the examples of FLBP computation (b) The binary feature image derived by extracting the feature pixel with gray level greater than 80 from the grayscale image
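The sketch below (ours; it uses a brute-force nearest-feature search rather than a distance-transform algorithm such as [12], and it rounds C_t and C_v to pixel coordinates) illustrates the distance vector of Eq. (11) and the TC and VC of Eqs. (12)-(13) on a toy feature image resembling Fig. 4(b).

```python
import numpy as np

def distance_vector_field(feature_img):
    """dv(p) = q - p, where q is the feature pixel nearest to p (Eq. 11).

    Brute-force O(N * |F|) search; a Euclidean distance transform would be
    the practical choice for real images.
    """
    feats = np.argwhere(feature_img > 0)            # the set F of feature pixels
    dvf = np.zeros(feature_img.shape + (2,), dtype=int)
    for p in np.ndindex(feature_img.shape):
        q = feats[np.argmin(np.linalg.norm(feats - p, axis=1))]
        dvf[p] = q - np.array(p)
    return dvf

def true_center(p, dv, alpha_t):
    """C_t(p) = p + alpha_t * dv(p), rounded to pixel coordinates (Eq. 12)."""
    return tuple(np.rint(np.asarray(p) + alpha_t * np.asarray(dv)).astype(int))

def virtual_center(p, dv, alpha_v):
    """C_v(p) = p + alpha_v * dv(p), rounded to pixel coordinates (Eq. 13)."""
    return tuple(np.rint(np.asarray(p) + alpha_v * np.asarray(dv)).astype(int))

# Toy 8x8 feature image with a single feature pixel, mimicking Fig. 4(b).
feature_img = np.zeros((8, 8), dtype=int)
feature_img[5, 5] = 1                    # 0-based (5, 5) ~ pixel (6, 6) in the paper
dvf = distance_vector_field(feature_img)
p = (1, 1)                               # 0-based (1, 1) ~ pixel (2, 2) in the paper
print(dvf[p])                            # -> [4 4]
print(true_center(p, dvf[p], 0.75))      # -> (4, 4) ~ pixel (5, 5)
print(virtual_center(p, dvf[p], 0.25))   # -> (2, 2) ~ pixel (3, 3)
```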

We select the pixel p at coordinates (2, 2) in Fig. 4(a) as an example to compute the FLBP code. We first compute dv(p). Given p = [2, 2]^t and q = [6, 6]^t, we have dv(p) = q - p = [4, 4]^t. Then we determine the locations of the TC and VC, which are controlled by the parameters \alpha_t and \alpha_v, respectively. Fig. 5 shows two examples of the computation of FLBP with different locations of the TC and VC. In Fig. 5(a), given \alpha_t = 0.75 and \alpha_v = 0.25, we have C_t(p) = p + \alpha_t dv(p) = [5, 5]^t and C_v(p) = p + \alpha_v dv(p) = [3, 3]^t. Therefore, the TC is the pixel at location (5, 5) and the VC is the pixel at location (3, 3). According to Eq. (14), we replace the gray level 60 of the TC at location (5, 5) by the gray level 30 of the VC at location (3, 3), and threshold the neighbors of the TC. We obtain the binary FLBP code FLBP(2, 2) = 10101001. Fig. 5(b) shows another example of the FLBP(2, 2) computation when \alpha_t = 0.25 and \alpha_v = 0.75. Similarly, the TC is the pixel at location (3, 3) and the VC is the pixel at location (5, 5), and the binary FLBP code becomes FLBP(2, 2) = 00111100.

Fig. 5 The computation of FLBP for the pixel at (2, 2). (a) An example when the TC (\alpha_t = 0.75) is at (5, 5) and the VC (\alpha_v = 0.25) is at (3, 3) (b) An example when the TC (\alpha_t = 0.25) is at (3, 3) and the VC (\alpha_v = 0.75) is at (5, 5)

In [11] we present a new feature pixel extraction method, the LBP with Relative Bias Thresholding (LRBT) method. The LRBT method first computes the LBP representation using the relative bias threshold function defined below:

S(g, g_c, \beta) = \begin{cases} 1, & \text{if } g \ge (1 + \beta) \, g_c \\ 0, & \text{otherwise} \end{cases}    (15)

where \beta is a parameter that controls the contribution of g_c to the bias. The LRBT method then derives the binary LRBT feature image by converting the LBP image to a binary image whose feature pixels correspond to the pixels with LBP code greater than 0 and whose background pixels correspond to the pixels with LBP code 0.

Fig. 6 shows an example of the FLBP representations of a face image. Figs. 6(a) and 6(b) display a face image and its binary feature image, where the feature pixels of the binary image are derived using our LRBT method with \beta = 0.1. Fig. 6(c) shows the LBP image of the face image in Fig. 6(a). Figs. 6(d)-(g) exhibit the FLBP images when \alpha_t = 0.25, 0.5, 0.75, 1, respectively, and \alpha_v = 0. Figs. 6(h)-(k) exhibit the FLBP images when \alpha_v = 0.25, 0.5, 0.75, 1, respectively, and \alpha_t = 0.

Fig. 6 (a) A face image (b) The binary LRBT feature image of (a) (c) The LBP representation of the face image of (a) (d)-(g) The FLBP images when \alpha_v = 0 and \alpha_t = 0.25, 0.5, 0.75, 1, respectively (h)-(k) The FLBP images when \alpha_t = 0 and \alpha_v = 0.25, 0.5, 0.75, 1, respectively

Our new Feature Local Quaternary Patterns (FLQP) can likewise be split into two binary codes, the upper half of FLQP (UFLQP) and the lower half of FLQP (LFLQP), using the threshold functions defined in Eqs. (7) and (8), respectively. The UFLQP is equivalent to FLBP. The general forms of the UFLQP and the LFLQP are defined below:

UFLQP(p) = FLBP(p)    (16)

LFLQP(p) = \sum_{i=0}^{7} 2^i \, S_{llqp}\{G[N(C_t(p), i)], G[C_v(p)], r\}    (17)
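Putting Eqs. (14), (16), and (17) together, the following sketch (ours; the image values, neighbor ordering, and rounding of the center locations are assumptions, and dv(p) is assumed to have been obtained from a feature image as in the previous sketch) computes the UFLQP and LFLQP codes of a pixel by thresholding the neighbors of the true center against the gray level of the virtual center.

```python
import numpy as np

OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
           (1, 1), (1, 0), (1, -1), (0, -1)]   # assumed neighbor labeling

def s_lqp(g, gc, r):
    """Two-bit LQP threshold of Eq. (5): returns (upper_bit, lower_bit)."""
    if g >= gc + r:
        return 1, 1
    if g >= gc:
        return 1, 0
    if g >= gc - r:
        return 0, 1
    return 0, 0

def flqp_split(img, p, dv, alpha_t, alpha_v, r):
    """UFLQP/LFLQP codes of pixel p (Eqs. 14, 16-17).

    The neighbors of the true center C_t(p) are thresholded against the gray
    level of the virtual center C_v(p); the upper bits reproduce FLBP (Eq. 16).
    """
    p = np.asarray(p)
    ct = np.rint(p + alpha_t * np.asarray(dv)).astype(int)   # Eq. (12)
    cv = np.rint(p + alpha_v * np.asarray(dv)).astype(int)   # Eq. (13)
    g_cv = img[tuple(cv)]
    uflqp = lflqp = 0
    for i, (dy, dx) in enumerate(OFFSETS):
        u, l = s_lqp(img[ct[0] + dy, ct[1] + dx], g_cv, r)
        uflqp |= u << i
        lflqp |= l << i
    return uflqp, lflqp

# Toy 8x8 gray-level image standing in for Fig. 4(a); the values are ours.
rng = np.random.default_rng(0)
img = rng.integers(10, 70, size=(8, 8))
img[5, 5] = 90                                   # the single feature pixel
uflqp, lflqp = flqp_split(img, p=(1, 1), dv=(4, 4),
                          alpha_t=0.75, alpha_v=0.25, r=5)
print(bin(uflqp), bin(lflqp))
```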

FLQP. Therefore the computation of dv(p), the TC, and the VC is the same as in the examples in Fig. 5. Fig. 7(a) shows the FLQP computation for the pixel p at (2, 2) when \alpha_v = 0.25 and \alpha_t = 0.75. Because the UFLQP is the same as the FLBP shown in Fig. 5(a), only the LFLQP is shown in Fig. 7(a). First, the grey level 60 of the TC at (5, 5) is replaced by the grey level 30 of the VC at (3, 3). For LFLQP, a neighborhood pixel is assigned 1 if its grey level is greater than or equal to 35, or if it is less than 30 and greater than or equal to 25, and is assigned 0 otherwise. We thus have LFLQP(2, 2) = 10011100. Fig. 7(b) shows another example of the FLQP computation when \alpha_v = 0.75 and \alpha_t = 0.25. The UFLQP is the same as the FLBP shown in Fig. 5(b) and is not shown in Fig. 7(b). First, the grey level 30 of the TC at (3, 3) is replaced by the grey level 60 of the VC at (5, 5). For LFLQP, a neighborhood pixel is assigned 1 if its grey level is greater than or equal to 65, or if it is less than 60 and greater than or equal to 55, and is assigned 0 otherwise. We thus have LFLQP(2, 2) = 00110011.

Fig. 7 The computation of FLQP for the pixel at (2, 2). (a) An example when the TC (\alpha_t = 0.75) is at (5, 5) and the VC (\alpha_v = 0.25) is at (3, 3) (b) An example when the TC (\alpha_t = 0.25) is at (3, 3) and the VC (\alpha_v = 0.75) is at (5, 5)

Fig. 8 shows an example of the FLQP representations when r = 0.1 g_c. The face image and the binary LRBT feature image are the same as in Figs. 6(a) and 6(b). Fig. 8(a) shows the LLQP image; the ULQP image is the same as the LBP image in Fig. 6(c). Figs. 8(b)-(e) show the LFLQP images when \alpha_t = 0.25, 0.5, 0.75, 1, respectively, and \alpha_v = 0; their corresponding UFLQP images are the same as Figs. 6(d)-(g). Figs. 8(f)-(i) show the LFLQP images when \alpha_v = 0.25, 0.5, 0.75, 1, respectively, and \alpha_t = 0; their corresponding UFLQP images are the same as Figs. 6(h)-(k).

Fig. 8 (a) The LLQP image when r = 0.1 g_c (b)-(e) The LFLQP images when r = 0.1 g_c, \alpha_t = 0.25, 0.5, 0.75, 1, respectively, and \alpha_v = 0 (f)-(i) The LFLQP images when r = 0.1 g_c, \alpha_v = 0.25, 0.5, 0.75, 1, respectively, and \alpha_t = 0

5 Experiments

We apply the FLQP method to eye detection. Fig. 9 shows the system architecture of our FLQP-based eye detection method, which is similar to the FLBP-based eye detection method introduced in [11]. It consists of three major steps. In the first step, a binary image, which contains the feature pixels of the grayscale face image, is derived by applying the LRBT feature pixel extraction method. In the second step, the FLQP representation of the face image is formed based on the grayscale image and a distance vector field (DVF), which is obtained by computing the distance vector between each pixel and its nearest feature pixel defined in the binary image. The FLQP code is then split into two binary codes, from which two images, the UFLQP and LFLQP images, are formed. In the final step, each eye candidate is compared with the eye template based on the UFLQP and LFLQP histograms and similarity measures.

An eye template is constructed from a number of training eye samples. Each eye sample is divided into a grid of u x v cells. The occurrences of the UFLQP codes in a cell are collected into a UFLQP histogram, and the occurrences of the LFLQP codes in a cell are collected into an LFLQP histogram. The eye template is thus defined by the uv UFLQP mean histograms and the uv LFLQP mean histograms of the training eye samples.
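The following sketch (ours; the grid size, the number of bins, and the toy code images are placeholders) illustrates this template construction step: each code image is divided into a u x v grid of cells, a histogram of the split-code values is collected per cell, and the cell histograms are averaged over the training samples.

```python
import numpy as np

def cell_histograms(code_img, u, v, bins=256):
    """Per-cell histograms of 8-bit split-code values for a u x v grid of cells."""
    h, w = code_img.shape
    hists = []
    for i in range(u):
        for j in range(v):
            cell = code_img[i * h // u:(i + 1) * h // u,
                            j * w // v:(j + 1) * w // v]
            hist, _ = np.histogram(cell, bins=bins, range=(0, bins))
            hists.append(hist)
    return np.array(hists)                  # shape: (u * v, bins)

def eye_template(uflqp_imgs, lflqp_imgs, u=3, v=4):
    """Mean UFLQP and LFLQP cell histograms over the training eye samples."""
    up = np.mean([cell_histograms(im, u, v) for im in uflqp_imgs], axis=0)
    lo = np.mean([cell_histograms(im, u, v) for im in lflqp_imgs], axis=0)
    return up, lo

# Toy training samples: random 8-bit code images standing in for real eye crops.
rng = np.random.default_rng(1)
samples = [rng.integers(0, 256, size=(16, 32)) for _ in range(5)]
template_up, template_lo = eye_template(samples, samples)
print(template_up.shape)                    # -> (12, 256)
```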

The similarity measure used to compare the UFLQP and LFLQP histograms of an eye template T and an eye candidate C is defined as follows:

M(C, T) = \sum_{i=1}^{g} \sum_{j=1}^{b} \frac{(C_{i,j} - T_{i,j})^2}{C_{i,j} + T_{i,j}}    (18)

where C_{i,j} represents the j-th bin of the histogram of the i-th cell of the eye candidate window, T_{i,j} represents the j-th bin of the histogram of the i-th cell of the eye template, g = uv is the total number of cells of the u x v grid, and b is the number of bins of a histogram. The final similarity measure is the sum of the similarity values of the UFLQP and LFLQP histograms. The eye candidate that has the largest similarity value with the eye template is taken as the location of the detected eye. We use the fast algorithm for histogram and similarity measure computation introduced in [11]. The idea of the fast algorithm is to update only the two columns or two rows that differ between two consecutive eye candidate windows, instead of repeating the computation for the whole new window. As a result, the fast algorithm significantly improves the computational efficiency of the eye detection method.

Fig. 9 The system architecture of our FLQP-based eye detection method

We assess the eye detection performance of the FLQP and LQP methods using the BioID database, which contains 1,521 grayscale frontal face images with a spatial resolution of 384 x 286. All facial images in our experiments are cropped and normalized to the size of 132 x 178. To construct the eye template, we collected 70 pairs of eye samples that are not from the BioID database. Eye samples are cropped to 37 x 17. Eye detection performance is determined by a relative distance error defined as follows:

\gamma = d_1 / d_2    (19)

where d_1 is the Euclidean distance between the detected eye center and the ground truth eye center, and d_2 is the interocular distance between the two ground truth eye centers.

Table 1 compares the performance of the FLQP-based, the LQP-based, the FLTP-based, the LTP-based, the FLBP-based, and the LBP-based eye detection methods. The best experiments are selected from each method for the comparison. The eye detection success rates when \gamma \le 0.25, 0.1, and 0.05 and the average \gamma are shown in Table 1. The experiments in [11] show that a 5 x 5 neighborhood size is better than a 3 x 3 neighborhood size and that a 3 x 4 grid size for the eye candidate window yields the best eye detection performance; we therefore apply the 5 x 5 neighborhood size and the 3 x 4 grid size in all experiments in Table 1.

The experimental results lead to the following findings. First, LQP performs better than LTP and LBP, and FLQP performs better than FLTP and FLBP. These results demonstrate that the proposed LQP and FLQP, which encode four relationships of local texture, are more effective than the LTP, FLTP, LBP, and FLBP for texture description and pattern recognition tasks such as eye detection. Second, FLQP achieves the best eye detection performance. Specifically, the average \gamma of the LQP-, FLTP-, LTP-, FLBP-, and LBP-based eye detection methods is 6.1%, 7.5%, 9.7%, 6.9%, and 125.6% higher, respectively, than the average \gamma of the FLQP-based eye detection method. These results indicate that FLQP improves upon FLTP, LTP, FLBP, and LBP for eye detection. Third, FLQP and FLBP perform better than the LQP and LBP methods, respectively, and the FLTP method achieves better results than the LTP method, except that LTP obtains a higher success rate than FLTP when \gamma \le 0.1. These results illustrate that the feature local methods (FLQP, FLTP, and FLBP), which encode both local and feature information, perform better than the local methods (LQP, LTP, and LBP), which do not encode feature information. Our experiments also show that the LTP methods improve upon the LBP methods. However, the FLBP methods achieve better results than FLTP, except that FLTP has a higher success rate when \gamma \le 0.05.
Overall, the FLTP method does not outperform the FLBP method. This is consistent with the experimental results reported in [2], [10], which showed that LTP and LBP achieve similar results for face and facial expression recognition, although LTP has a higher computational cost than LBP.
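For completeness, the sketch below (ours; the epsilon term guarding empty bins and the toy data are assumptions) implements the per-histogram measure of Eq. (18), the combined UFLQP + LFLQP score, and the relative distance error \gamma of Eq. (19).

```python
import numpy as np

def chi_square(hc, ht, eps=1e-10):
    """M(C, T) = sum_ij (C_ij - T_ij)^2 / (C_ij + T_ij), as in Eq. (18)."""
    return np.sum((hc - ht) ** 2 / (hc + ht + eps))

def candidate_score(cand_up, cand_lo, tmpl_up, tmpl_lo):
    """Final measure: sum of the UFLQP and LFLQP terms of Eq. (18)."""
    return chi_square(cand_up, tmpl_up) + chi_square(cand_lo, tmpl_lo)

def relative_distance_error(detected, truth, other_truth):
    """gamma = d1 / d2, with d1 the detection error and d2 the distance
    between the two ground truth eye centers (Eq. 19)."""
    d1 = np.linalg.norm(np.asarray(detected) - np.asarray(truth))
    d2 = np.linalg.norm(np.asarray(truth) - np.asarray(other_truth))
    return d1 / d2

# Toy usage with random histograms (12 cells x 256 bins) and eye coordinates.
rng = np.random.default_rng(2)
cand_up, cand_lo, tmpl_up, tmpl_lo = (rng.random((12, 256)) for _ in range(4))
print(candidate_score(cand_up, cand_lo, tmpl_up, tmpl_lo))
print(relative_distance_error((100, 80), (102, 78), (150, 78)))
```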

Table 1 The eye detection success rates when \gamma \le 0.25, 0.1, and 0.05 and the average \gamma for the FLQP-based, LQP-based, FLTP-based, LTP-based, FLBP-based, and LBP-based eye detection methods

Method                                                       \gamma \le 0.25   \gamma \le 0.1   \gamma \le 0.05   Average \gamma
FLQP, \beta = 0.2, \alpha_v = 0.25, \alpha_t = 0, r = 0.18 g_c    98.75             96.19            89.71             0.0360
LQP, r = 0.07 g_c                                                 98.39             95.63            89.38             0.0382
FLTP, \beta = 0.2, \alpha_v = 0.25, \alpha_t = 0, r = 3           98.29             95.17            89.12             0.0387
LTP, r = 4                                                        98.03             95.50            88.95             0.0395
FLBP, \beta = 0.2, \alpha_v = 0.25, \alpha_t = 0                  98.65             95.23            87.84             0.0385
LBP                                                               92.34             90.34            83.14             0.0812

6 Conclusions

We present in this paper Local Quaternary Patterns (LQP) and Feature Local Quaternary Patterns (FLQP). The LQP and FLQP, which encode four relationships of the local texture, include more local texture information than the Local Binary Patterns (LBP) and the Local Ternary Patterns (LTP). The FLQP, which encodes both local and feature information, is expected to perform better than the LQP for texture description and pattern recognition. To reduce the feature dimension of LQP and FLQP, a new coding scheme is proposed that splits the LQP into two binary codes, the upper LQP (ULQP) and the lower LQP (LLQP), and the FLQP into two binary codes, the upper FLQP (UFLQP) and the lower FLQP (LFLQP). Experimental results using the BioID database show that (i) LQP and FLQP perform better than LTP, FLTP, LBP, and FLBP for eye detection; (ii) FLQP achieves the best eye detection performance; and (iii) FLQP, FLTP, and FLBP perform better than LQP, LTP, and LBP, respectively.

References

[1] T. Ojala, M. Pietikäinen, and D. Harwood, A comparative study of texture measures with classification based on feature distributions, Pattern Recognition, vol. 29, no. 1, pp. 51-59, 1996.
[2] X. Tan and B. Triggs, Enhanced local texture feature sets for face recognition under difficult lighting conditions, IEEE Transactions on Image Processing, vol. 19, no. 6, pp. 1635-1650, 2010.
[3] T. Ahonen, A. Hadid, and M. Pietikäinen, Face description with local binary patterns: application to face recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 12, pp. 2037-2041, 2006.
[4] Z. Liu and C. Liu, Fusion of color, local spatial and global frequency information for face recognition, Pattern Recognition, vol. 43, no. 8, pp. 2882-2890, 2010.
[5] A. Hadid, M. Pietikäinen, and T. Ahonen, A discriminative feature space for detecting and recognizing faces, in Proc. Int. Conf. Computer Vision and Pattern Recognition (CVPR), Washington, DC, June 27 - July 2, 2004, pp. 797-804.
[6] H. Zhang and D. Zhao, Spatial histogram features for face detection in color images, in Proc. Advances in Multimedia Information Processing: 5th Pacific Rim Conference on Multimedia, Tokyo, Japan, November 30 - December 3, 2004, pp. I:377-384.
[7] C. Shan, S. Gong, and P. W. McOwan, Facial expression recognition based on local binary patterns: a comprehensive study, Image and Vision Computing, vol. 27, no. 6, pp. 803-816, 2009.
[8] G. Zhao and M. Pietikäinen, Experiments with facial expression recognition using spatiotemporal local binary patterns, in Proc. Int. Conf. Multimedia and Expo (ICME), Beijing, China, July 2-5, 2007, pp. 1091-1094.
[9] S. Moore and R. Bowden, Local binary patterns for multi-view facial expression recognition, Computer Vision and Image Understanding, vol. 115, no. 4, pp. 541-558, 2011.
[10] T. Gritti, C. Shan, V. Jeanne, and R. Braspenning, Local features based facial expression recognition with face registration errors, in Proc. IEEE Int. Conf. Automatic Face and Gesture Recognition (FG), Amsterdam, The Netherlands, Sept. 17-19, 2008.
[11] J. Gu and C. Liu, A New Feature Local Binary Patterns (FLBP) Method, in Proc. 16th International Conference on Image Processing, Computer Vision, and Pattern Recognition, Las Vegas, Nevada, USA, July 16-19, 2012.
[12] P. Danielsson, Euclidean distance mapping, Computer Graphics and Image Processing, vol. 14, no. 3, pp. 227-248, 1980.