Pupil Detection under Lighting and Pose Variations in the Visible and Active Infrared Bands


Pupil Detection under Lighting and Pose Variations in the Visible and Active Infrared Bands
Thirimachos Bourlai #1, Cameron Whitelam #2, Ioannis Kakadiaris 3
# LDCSEE, West Virginia University, Evansdale Drive, Morgantown, WV, U.S.A.
1 Thirimachos.Bourlai@mail.wvu.edu 2 cwhitela@gmail.com
Dept. of Computer Science, University of Houston, 4800 Calhoun, Houston, TX, U.S.A.
3 ioannisk@central.uh.edu

Abstract — We propose a novel and efficient methodology for the detection of human pupils using face images acquired under controlled and difficult (large pose and illumination changes) conditions in variable spectra (i.e., visible, multi-spectral, and short wave infrared (SWIR)). The methodology is based on template matching and is composed of an offline and an online mode. During the offline mode, band-dependent eye templates are generated for each eye from the face images of a pre-selected number of subjects. Using the eye templates generated in the offline mode, the online pupil detection mode determines the locations of the human eyes and the pupils. A combination of texture- and template-based matching algorithms is used for this purpose. Our method achieved a significantly high detection rate, yielding an average of 96.38% pupil detection accuracy across all datasets used. Based on a comparative analysis on different databases, we concluded that: (i) a single methodological approach can be used to efficiently detect human eyes and pupils in face images (with strong pose and illumination variations) acquired in the visible and hyper-spectral bands, and (ii) the use of texture-based matching and normalized band-specific templates significantly increases detection accuracy. To the best of our knowledge, this is the first time in the open literature that the problem of multi-band pupil detection on face images in the presence of lighting and pose variations is investigated using a unified algorithm.

I. INTRODUCTION
Most of the commercial and research face recognition (FR) algorithms currently available require the determination of eye center (pupil) coordinates as part of the authentication process, and their performance can be heavily impacted by an inefficient eye detection algorithm [1]. There are three main eye detection methodologies: feature-based methods that focus on eye characteristics (e.g., eye shape and color distribution); appearance-based methods that use the photometric appearance of the eyes for detection through classification approaches; and template-based methods that slide a pre-designed eye model (template) across a face image to obtain the best eye matches. Park et al. [2] determined the eye candidates by using texture-based eye filtering, and then detected the eye locations using face geometry. The face images used were acquired under controlled conditions with some variability in terms of facial expressions. Xingming et al. [3] proposed a pose-independent Adaboost method to detect faces, a heuristic rule to filter the non-eye candidates, and a Support Vector Machine (SVM) classifier to verify eye pairs. The method was successfully tested on a small dataset of visible spectrum images acquired under three different illumination conditions. Asteriadis et al. [4] proposed a method for eye localization based only on the geometrical information of human eyes. After an edge map is extracted from a detected face, a vector is assigned to every pixel. Then, the length and slope information of these vectors is used to find the eyes.
Fig. 1. Results of preliminary experiments that highlight the challenges. They were obtained by applying a benchmark [5] and a commercial eye detection technique (G8, provided by L1 Systems) on the original UHDB11 database (non-synthetic data with strong pose and illumination variations), as well as on the synthetic WVU multispectral and SWIR datasets (on which strong pose variations were imposed).

Fig. 2. Examples of face samples from the four databases used, before (top) and after (bottom) illumination normalization.

The authors reported detection rates of 96.7% and 98.6% for eye centers when using visible band face images from the XM2VTS and the BioID databases, respectively. Chen and Liu [6] proposed a method to detect eye centers that uses color information and wavelet features; an SVM was used for classification. The method achieved a 94.92% eye detection accuracy in experiments performed on the Face Recognition Grand Challenge (FRGC) database. Morimoto et al. [7] proposed a robust pupil detection technique that uses two near infrared time-multiplexed light sources synchronized with the camera frame rate. A common characteristic of the aforementioned techniques is that they are not designed to simultaneously deal with images acquired under multiple bands. However, in most operational scenarios (e.g., in military and law enforcement applications) [8], face images may have to be acquired not only under variable illumination conditions and poses, but also under variable spectra. The main advantages of using hyper-spectral images for recognition are that they are suitable for covert applications [9] and can be useful in a nighttime environment [10]. An efficient hyper-spectral eye detection algorithm (benchmark technique) that can perform eye detection at variable bands was proposed by Whitelam et al. in [5]. However, the limitation of that approach is that it was not designed to be pose and illumination invariant. In this work, we address the aforementioned limitations and propose a methodology that is capable of detecting human pupils in multi-band face images acquired under various realistic unconstrained situations, or when the original face datasets were synthetically rotated (by adjusting the roll descriptor; see Fig. 4) to the right and left, resulting in pose variations. First, our proposed method was tested on the original data from the UHDB11 database (the face images of the dataset were acquired under strong lighting and pose variations) [11]. The proposed method was also tested on the WVU database [5], where face images were acquired at variable spectra, as well as on a subset of the FRGC database [12] (acquired in the visible spectrum). WVU and FRGC were also used to generate two synthetic visible and hyper-spectral databases, where face images were randomly rotated to the right and left (by variable degrees), thereby imposing strong pose variations. Preliminary experiments highlighted the problem of applying a benchmark [5] and G8 on the above datasets (Fig. 1). We evaluated the performance of the proposed method by computing its detection accuracy on the UHDB11, WVU, and FRGC datasets, and compared the performance of the algorithm against a commercial and a benchmark eye detection method (designed to operate on face images acquired in both the visible and active infrared bands). The experimental results demonstrated that the proposed approach can be used to efficiently detect human eyes and pupils on multi-spectral face images, yielding a detection accuracy that ranges from 92.89% to 99.30%. The detection process is computationally feasible (approximately 5 s per image in a Matlab environment). The rest of the paper is organized as follows: Section II presents the proposed pupil detection system. The experimental procedure used to evaluate the proposed method and the databases used are described in Section III. Conclusions and future work are presented in Section IV.
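The synthetic pose-variable datasets mentioned above are produced by rolling each original face image to a random angle. The sketch below is a minimal illustration of this kind of augmentation, assuming grayscale images stored as NumPy arrays; the function name and the use of scipy.ndimage.rotate are illustrative assumptions, not the authors' implementation.

import numpy as np
from scipy import ndimage

def random_roll(image, rng=None):
    """Return one synthetically rolled copy of a face image.

    Angles are drawn from -45 to +45 degrees in 5-degree steps,
    mirroring the rotation range described in the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    angles = np.arange(-45, 50, 5)            # -45, -40, ..., +45 degrees
    angle = int(rng.choice(angles))           # one random roll per image
    # reshape=False keeps the original image size; border pixels are
    # filled with the nearest valid values.
    rolled = ndimage.rotate(image, angle, reshape=False,
                            order=1, mode="nearest")
    return rolled, angle

# Example: roll a dummy 128x128 face-like array
if __name__ == "__main__":
    face = np.random.rand(128, 128)
    rotated, used_angle = random_roll(face)
    print(rotated.shape, used_angle)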
II. METHODOLOGY
In this section, we outline the technique used for eye and pupil localization. The salient stages of the proposed method are described below, and the overall process is illustrated in Fig. 3.
(i) Photometric Normalization (PN): PN was applied to all visible and hyper-spectral face images. As conventional techniques (e.g., histogram equalization and homomorphic filtering) did not perform well, we followed the approach proposed by Tan and Triggs [13], which incorporates a series of algorithmic steps. The steps were chosen in a certain order to eliminate the effects of illumination variations, local shadowing and highlights, while still preserving the essential elements of visual appearance for more efficient face recognition. The approach is based on the following steps:
- Gamma Correction: A nonlinear gray-level transformation was used to replace each gray level I with I^γ (for γ > 0) or log(I) (for γ = 0), where γ ∈ [0, 1] is a user-defined parameter (here, γ = 0.2). This step enhances the local dynamic range of the image in dark or shadowed regions, while compressing it in bright regions.
- Difference of Gaussian (DoG) Filtering: A blurred version of the original gray scale image was subtracted from a less blurred version of the same image, using two Gaussian kernels (σ_1 = 1 and σ_2 = 2) and image convolution.
- Masking: This process was used to suppress facial regions that are considered to be irrelevant.
- Contrast Equalization: This step was used to globally rescale the image intensities to standardize a robust measure of overall contrast or intensity variation. Estimation is a two-stage process in which the image is first smoothed, and a nonlinear function is then applied to compress large intensity values.
The resulting images for samples acquired from each of the databases used are shown in Fig. 2.
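The normalization chain above can be prototyped in a few lines. The sketch below is a minimal illustration assuming a grayscale image, γ = 0.2, Gaussian widths of 1 and 2, and a simple two-stage contrast equalization with conventional constants (α = 0.1, τ = 10); the masking step is omitted, so this is an approximation of the preprocessing in [13], not the authors' exact implementation.

import numpy as np
from scipy.ndimage import gaussian_filter

def photometric_normalization(img, gamma=0.2, sigma1=1.0, sigma2=2.0,
                              alpha=0.1, tau=10.0):
    """Tan-Triggs-style preprocessing: gamma correction, DoG filtering,
    and two-stage contrast equalization (masking step omitted)."""
    img = img.astype(np.float64)

    # 1) Gamma correction (enhances dark regions, compresses bright ones).
    out = np.power(np.clip(img, 1e-6, None), gamma)

    # 2) Difference of Gaussians: subtract a more-blurred version of the
    #    image from a less-blurred version.
    out = gaussian_filter(out, sigma1) - gaussian_filter(out, sigma2)

    # 3) Contrast equalization, stage 1: rescale by a robust global measure.
    out = out / (np.mean(np.abs(out) ** alpha) ** (1.0 / alpha) + 1e-8)
    #    Stage 2: repeat with large magnitudes truncated at tau, then
    #    compress remaining extreme values with tanh.
    out = out / (np.mean(np.minimum(np.abs(out), tau) ** alpha) ** (1.0 / alpha) + 1e-8)
    out = tau * np.tanh(out / tau)
    return out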

Fig. 3. Overview of the process used (eye and pupil detection).

(ii) Generation of Multi-Pose Eye Templates: We randomly selected ten subjects from each dataset. Then, for each subject, we performed face registration, i.e., we loaded a face image, manually marked the coordinates of the eye centers, geometrically normalized the image (using rotation and scaling of the eye positions onto two fixed points), and cropped the left and right eye templates at a fixed resolution. Finally, we generated database-specific left and right average eye templates. Note that we empirically determined that 10 subjects were sufficient to generate eye templates that achieve satisfactory eye detection results.
(iii) Detection of Eye Regions: We applied template convolution on the normalized face images by first centering each of the generated eye templates on the top left corner of each face image, and then computing the Pearson Product Moment Correlation (PPMC) coefficient. After rotating the original face image to various angles (i.e., the images were rolled as illustrated in Fig. 4), ranging from -45° to +45° in 5° increments, the procedure was repeated over the entire image, using both the left and right average eye templates (sample eye templates are illustrated in Fig. 5). The position where each (left or right) generated eye template best matches the face image (i.e., the highest correlation coefficient in the image domain) was the estimated position of the template within the image. The PPMC measure is given in Equation (1), where X and Y are the image and template pixel intensity values, respectively, σ_X and σ_Y are their respective standard deviations, µ_X and µ_Y are their expected values, and N is the total number of pixels:

\mathrm{PPMC} = \frac{\sum_{i=1}^{N}(X_i - \mu_X)(Y_i - \mu_Y)}{N\,\sigma_X\,\sigma_Y}    (1)

Fig. 4. Illustration of the three rotational descriptors on face images acquired under variable conditions using different sensors (visible spectrum, SWIR and multi-spectral). Note that the synthetic face images were generated by adjusting only the roll descriptor.
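A compact way to prototype step (iii) is to slide an average eye template over each rolled version of the face and keep the location with the highest PPMC score. The sketch below is a minimal, unoptimized illustration assuming grayscale NumPy arrays; the brute-force loop and the use of scipy.ndimage.rotate are assumptions made for clarity, not the authors' implementation.

import numpy as np
from scipy.ndimage import rotate

def ppmc(window, template):
    """Pearson product-moment correlation between two equally sized patches."""
    x = window.ravel().astype(np.float64)
    y = template.ravel().astype(np.float64)
    x = x - x.mean()
    y = y - y.mean()
    denom = np.sqrt((x * x).sum() * (y * y).sum())
    return 0.0 if denom == 0 else float((x * y).sum() / denom)

def best_template_match(face, template, angles=range(-45, 50, 5)):
    """Return (score, angle, row, col) of the best PPMC match of an
    average eye template over all rolled versions of the face image."""
    th, tw = template.shape
    best = (-2.0, None, None, None)
    for angle in angles:
        rolled = rotate(face, angle, reshape=False, order=1, mode="nearest")
        for r in range(rolled.shape[0] - th + 1):
            for c in range(rolled.shape[1] - tw + 1):
                score = ppmc(rolled[r:r + th, c:c + tw], template)
                if score > best[0]:
                    best = (score, angle, r, c)
    return best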

Fig. 6. (a) Illustration of the relations between the true (C_l and C_r) and estimated (Ĉ_l and Ĉ_r) pupil positions; (b) the relative error with respect to the right eye. A circle with a radius of 0.25 relative error is drawn around the eye center. This figure is based on the one provided in [14].

Fig. 5. Description of (a) the left and right average eye templates generated from the FRGC face database and used for testing eye detection on FRGC, and (b) the left and right average eye templates generated from the WVU (SWIR) face dataset and used for testing eye detection on the same dataset.

(iv) Feature Extraction: We used two different feature descriptors, namely Local Binary Patterns (LBP) and Local Ternary Patterns (LTP) [13]. The LBP patterns in an image were computed by thresholding 3×3 neighborhoods at the value of the center pixel. Then, the resulting binary pattern was converted to a decimal value. Even though LBP is invariant to monotonic gray-level transformations, one of its drawbacks is that it tends to be sensitive to noise in near-uniform image regions. This is because the binary code is computed by thresholding at the exact gray value of the center pixel. Local Ternary Patterns overcome the above-mentioned limitation and were therefore used in the proposed method. LTPs are 3-valued codes extended from LBP. In LTPs, gray levels f_p that lie within a zone of width ±t (a user-defined threshold) around f_c (the gray level of the central pixel c of a 3×3 neighborhood) are quantized to zero; gray levels above that zone are quantized to +1, and gray levels below it are quantized to -1. Thus, the indicator S (the output of LTP) is defined as follows:

S(f_p, f_c) = \begin{cases} +1, & f_p \geq f_c + t \\ 0, & |f_p - f_c| < t \\ -1, & f_p \leq f_c - t \end{cases}    (2)

The difference when using LTPs instead of LBPs is that t can be adjusted to produce different patterns. This threshold also makes the LTP code more resistant to noise.
(v) Eye Detection: Each face image was rotated 18 times (right and left, from -45° to +45° in 5° increments), and 18 different eye region candidates were determined for each eye. However, the problem was to determine which of the detected left and right eye region candidates (out of a total of 36) were the best eye matches. The experimental results showed that the PPMC measure alone cannot solve that problem, and therefore the LTP feature descriptor was used. First, LTP patterns were computed using (i) the multi-pose eye templates and (ii) the 18 different eye region candidates. Then, we computed the similarity between the extracted features of the eye templates and those of the eye region candidates. The measure used for this purpose was the chi-squared distance, defined as

\chi^2(n, m) = \frac{1}{2}\sum_{k=1}^{l}\frac{(h_n(k) - h_m(k))^2}{h_n(k) + h_m(k)},    (3)

where h_n and h_m are the two histogram feature vectors, l is the length of the feature vectors, and n and m are two sample vectors extracted from an image in the gallery and probe sets, respectively.
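Steps (iv) and (v) can be prototyped directly from the definitions above. The following sketch computes an LTP code image and its histogram, and compares two histograms with the chi-squared distance of Equation (3). It is a minimal illustration that assumes 8-bit grayscale patches, a threshold of t = 5, and a basic (non-uniform, non-split) LTP histogram, which simplifies the descriptor used with [13].

import numpy as np

def ltp_codes(patch, t=5):
    """Return the 8-neighbour LTP code of each interior pixel as a
    base-3 integer in [0, 3**8): each neighbour contributes a trit of
    0 (-1 state), 1 (0 state), or 2 (+1 state), following Equation (2)."""
    p = patch.astype(np.int32)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = p[1:-1, 1:-1]
    codes = np.zeros_like(centre)
    for k, (dr, dc) in enumerate(offsets):
        neigh = p[1 + dr:p.shape[0] - 1 + dr, 1 + dc:p.shape[1] - 1 + dc]
        s = np.zeros_like(centre)
        s[neigh >= centre + t] = 2          # +1 state
        s[np.abs(neigh - centre) < t] = 1   # 0 state
        # neighbours at or below centre - t keep the value 0 (-1 state)
        codes += s * (3 ** k)
    return codes

def ltp_histogram(patch, t=5):
    hist, _ = np.histogram(ltp_codes(patch, t), bins=3 ** 8,
                           range=(0, 3 ** 8), density=True)
    return hist

def chi_squared(h_n, h_m, eps=1e-12):
    """Chi-squared distance between two histograms (Equation (3))."""
    return 0.5 * np.sum((h_n - h_m) ** 2 / (h_n + h_m + eps))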
(vi) Pupil Detection: The pupil coordinates of the left and right eyes were determined as the locations of the lowest pixel intensity values within the detected left and right eye regions. To validate the performance of our pupil detection system, we used the relative error measure D_p, which is based on the distances between the expected (true pupil coordinates acquired by manual annotation) and estimated pupil positions [14]. First, for each eye, we computed the distance between the manually annotated eye center C_l, C_r ∈ R^2 and the estimated eye center Ĉ_l, Ĉ_r ∈ R^2 (Fig. 6), i.e., d_l and d_r for the left and right eye, respectively. Then, we determined the maximum of d_l and d_r, i.e., max(d_l, d_r). This distance is normalized by dividing it by the distance between the annotated eye centers, denoted as ||C_l - C_r||. The measure is given by

D_p = \frac{\max(d_l, d_r)}{\lVert C_l - C_r \rVert}    (4)

In an average human face, the distance between the inner eye corners equals the width of a single eye. Thus, a relative error of 0.25 corresponds to a distance of half an eye width (Fig. 6(b)). In this paper, a pupil is considered detected if D_p ≤ 0.25, and is rejected otherwise.
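A direct implementation of the acceptance criterion above is shown below; it is a minimal sketch assuming eye centers given as (row, col) coordinate pairs, with the 0.25 threshold taken from the text.

import numpy as np

def relative_error(true_left, true_right, est_left, est_right):
    """Relative pupil localization error D_p of Equation (4)."""
    d_l = np.linalg.norm(np.asarray(est_left) - np.asarray(true_left))
    d_r = np.linalg.norm(np.asarray(est_right) - np.asarray(true_right))
    inter_ocular = np.linalg.norm(np.asarray(true_left) - np.asarray(true_right))
    return max(d_l, d_r) / inter_ocular

def is_detected(true_left, true_right, est_left, est_right, threshold=0.25):
    """A pupil pair counts as detected if D_p <= 0.25."""
    return relative_error(true_left, true_right, est_left, est_right) <= threshold

# Example: ground-truth centers vs. slightly offset estimates
if __name__ == "__main__":
    print(is_detected((120, 90), (122, 150), (121, 92), (123, 149)))  # True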

III. EXPERIMENTS
A. Face Image Databases
In this section, we present a description of the following databases: the UHDB11 (visible spectrum; unconstrained data), the WVU (near-IR and SWIR spectra; constrained data), and the FRGC2 [12] (visible domain; constrained data). The WVU and FRGC databases were used to generate the pose-variable WVU and FRGC datasets, which are composed of random pose variations of the original images, i.e., each image from the original databases was randomly rotated around the z axis to angles ranging from -45° to +45° in 5° increments (Fig. 2). The UHDB11 database was used unaltered (realistic scenario).
(a) UHDB11: This database consists of 1,602 face images that were acquired from 23 subjects under variable pose and illumination conditions. For each illumination condition, the subjects faced four different points inside the room (their face was rotated about the Y axis, the vertical axis through the subject's head). For each Y-axis rotation, three images were also acquired with rotations about the Z axis (which extends from the back of the head to the nose). Thus, the face images of the database were acquired under six illumination conditions, with four Y and three Z rotations. Figure 7 depicts the aforementioned variations for a single subject [11].
(b) WVU: The WVU database consists of images that were acquired using a DuncanTech MS3100 multi-spectral camera and a XenICs camera. The MS3100 was used to create the multi-spectral dataset of the database. The camera consists of three Charge Coupled Devices (CCDs) and three band-pass prisms behind the lens in order to simultaneously capture four different wavelength bands. The IR and Red (R) sensors of the multi-spectral camera have spectral response ranges from 400 nm to 1000 nm. The Green (G) channel has a response from 400 nm to 650 nm, and the Blue (B) channel from 400 nm to 550 nm. The Green and Blue images were recorded on an RGB Bayer pattern sensor and are, therefore, one-third the resolution of the IR and Red images; the Green and Blue images were then resized to have the same spatial resolution as the IR and Red images. The XenICs camera was used for the acquisition of SWIR face images. The camera has an Indium Gallium Arsenide (InGaAs) Focal Plane Array (FPA) with 30 µm pixel pitch, 98% pixel operability and three-stage thermoelectric cooling. It has a relatively uniform spectral response across the lower SWIR band, over which the InGaAs FPA has a largely uniform quantum efficiency; the spectral response of the camera falls rapidly at wavelengths lower than 950 nm and near 1700 nm.
(c) FRGC: We used a randomly selected subset from the controlled dataset of the FRGC database that consists of full frontal facial images acquired under two lighting conditions (two or three studio lights) and with two facial expressions (smiling and neutral). In total, we used the face images of 408 subjects, resulting in 1,632 images. The images were acquired using a 10 MP Canon PowerShot G2 camera [15].

Fig. 7. Depiction of face images from the UHDB11 database (left column) and eye/pupil detection results using the proposed algorithm (right column).

B. Experimental Results
The proposed method was evaluated using two different experiments. In the first experiment, we investigated the pupil detection rate and pixel distance accuracy of our method when using the UHDB11, the pose-variable FRGC, and the pose-variable WVU (multi-spectral and SWIR) databases. Pupil detection performance was computed for each eye and was measured as the number of accurately detected pupils divided by the total number of pupils in each dataset employed. Pupil pixel distance accuracy was measured as the Euclidean pixel distance between the true position of each pupil center (obtained by manual annotation) and the detected pupil (Table I).
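The two quantities reported in Table I can be computed per dataset from the annotations and the detections. The sketch below is a minimal illustration assuming lists of annotated and detected pupil coordinates; counting each eye separately against the relative-error threshold is an assumption made here for simplicity, whereas D_p in Equation (4) is defined over the pair of eyes.

import numpy as np

def evaluate_dataset(annotated, detected, threshold=0.25):
    """Per-eye detection rate (%) and mean Euclidean pixel distance.

    `annotated` and `detected` are lists of ((row, col), (row, col)) pairs,
    giving left and right pupil centers for each face image."""
    hits, total, distances = 0, 0, []
    for (tl, tr), (el, er) in zip(annotated, detected):
        inter_ocular = np.linalg.norm(np.asarray(tl) - np.asarray(tr))
        for t, e in ((tl, el), (tr, er)):
            d = np.linalg.norm(np.asarray(e) - np.asarray(t))
            distances.append(d)
            if d / inter_ocular <= threshold:
                hits += 1
            total += 1
    return 100.0 * hits / total, float(np.mean(distances))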
In the second experiment, the efficiency of our pupil detection algorithm was tested against the eye detection method proposed in [5] and a commercially available software package (G8) provided by L1 Systems. The manually annotated eye centers were used as ground truth and were compared with the automatically detected pupils. Experimental results are summarized in Table I. When the UHDB11 database was used, the proposed method achieved an increase in average performance of 48.25% and 20.44% over the benchmark eye detection method and L1's G8, respectively. Similar results were obtained when the synthetic FRGC database was used, i.e., performance improved by 52.08% and 12.80% over the benchmark and G8 methods, respectively. When the synthetic WVU multi-spectral database was used, we achieved an increase in average performance of 4.5% (over the benchmark) and 5% (over G8). When the synthetic WVU-SWIR dataset was used, an increase of 18.5% over the benchmark and 2% over L1's G8 was achieved.

IV. CONCLUSIONS AND FUTURE WORK
We studied the problem of multi-band pupil detection using real and synthetic face images captured under controlled and uncontrolled conditions. Our method was designed to work well with challenging data and achieved a significantly high detection rate.

TABLE I. Comparison of algorithms on UHDB11 and simulated datasets. For each dataset (UHDB11, FRGC, WVU multi-spectral, WVU SWIR) and for each eye (left, right), the table lists the detection accuracy (%) and the Euclidean distance in pixels between the manually annotated pupil centers and the automatically detected ones, for the benchmark method [5], G8, and the proposed method. Note that the WVU-multispectral and WVU-SWIR datasets each consist of 100 face images.

In particular, our method yielded an average of 96.38% detection accuracy across all datasets, which is a 49.2% increase in average detection performance when compared to the method proposed by Whitelam et al. [5] (designed to work well only with frontal face images collected under constrained conditions). The commercial software (G8) performed well on data acquired under controlled conditions. However, our method performed consistently better than G8 across all datasets, achieving a 14.4% increase in average pupil detection performance. Another important achievement of our work was its effectiveness on the original face images of the UHDB11 dataset, where none of the face images were synthetically altered with pose and illumination variations (Table I). This was the most challenging scenario for testing our method, and we obtained the highest increase in pupil detection accuracy over both the benchmark and G8 algorithms, i.e., the pupil detection accuracy was (on average) above 92%, and outperformed G8 by over 20%.

ACKNOWLEDGMENT
This work is sponsored in part through a grant from the Office of Naval Research (N ) and the Eckhard Pfeiffer Endowment Fund. We are grateful to all faculty and students at WVU and UH for their valuable participation and assistance with the WVU and UHDB11 databases, respectively.

REFERENCES
[1] P. Wang, M. Green, Q. Ji, and J. Wayman, "Automatic eye detection and its validation," in Proc. Computer Vision and Pattern Recognition, San Diego, CA, USA, June 2005.
[2] C. W. Park, J. M. Kwak, H. Park, and Y. S. Moon, "An effective method for eye detection based on texture information," in Proc. International Conference on Communications and Information Technology, Washington, DC, USA, November 2007.
[3] Z. Xingming and Z. Huangyuan, "An illumination independent eye detection algorithm," in Proc. International Conference on Pattern Recognition, Hong Kong, China, August 2006.
[4] S. Asteriadis, N. Nikolaidis, A. Hajdu, and I. Pitas, "An eye detection algorithm using pixel to edge information," in Proc. International Symposium on Communications, Control and Signal Processing, Marrakech, Morocco, March.
[5] C. Whitelam, Z. Jafri, and T. Bourlai, "Multispectral eye detection: A preliminary study," in Proc. International Conference on Pattern Recognition, Istanbul, Turkey, August 2010.
[6] S. Chen and C. Liu, "Eye detection using color information and a new efficient SVM," in Proc. International Conference on Biometrics: Theory, Applications and Systems, Washington, DC, USA, September.
[7] C. H. Morimoto, D. Koons, A. Amir, and M. Flickner, "Pupil detection and tracking using multiple light sources," Image and Vision Computing, vol. 18, no. 4.
[8] N. Kalka, T. Bourlai, B. Cukic, and L.
Hornak, "Cross-spectral face recognition in heterogeneous environments: A case study on matching visible to short-wave infrared imagery," in Proc. International Joint Conference on Biometrics, Washington, DC, USA.
[9] T. Bourlai, N. Kalka, D. Cao, B. Decann, Z. Jafri, F. Nicolo, C. Whitelam, J. Zuo, D. Adjeroh, B. Cukic, J. Dawson, L. Hornak, A. Ross, and N. A. Schmid, "Ascertaining human identity in night environments," DVSN, Springer, 2011.
[10] T. Bourlai and Z. Jafri, "Eye detection in the middle-wave infrared spectrum: Towards recognition in the dark," in Proc. International Workshop on Information Forensics and Security, Foz do Iguaçu, Brazil.
[11] G. Toderici, G. Passalis, S. Zafeiriou, G. Tzimiropoulos, M. Petrou, T. Theoharis, and I. Kakadiaris, "Bidirectional relighting for 3D-aided 2D face recognition," in Proc. Computer Vision and Pattern Recognition, San Francisco, CA, USA, June 2010.
[12] P. Phillips, P. Flynn, T. Scruggs, K. Boyer, J. Chang, K. Hoffman, J. Marques, J. Min, and W. Worek, "Overview of the face recognition grand challenge," vol. 1, San Diego, CA, USA, June 2005.
[13] X. Tan and B. Triggs, "Enhanced local texture feature sets for face recognition under difficult lighting conditions," IEEE Transactions on Image Processing, vol. 19, no. 6.
[14] O. Jesorsky, K. J. Kirchberg, and R. W. Frischholz, "Robust face detection using the Hausdorff distance," Springer, 2001.
[15] P. J. Phillips, W. T. Scruggs, A. J. O'Toole, P. J. Flynn, K. W. Bowyer, C. L. Schott, and M. Sharpe, "FRVT 2006 and ICE 2006 large-scale experimental results," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, 2010.
