Pupil Localization Algorithm based on Hough Transform and Harris Corner Detection


Pupil Localization Algorithm based on Hough Transform and Harris Corner Detection

1 Chongqing University of Technology, Electronic Information and Automation College, Chongqing, 400054, China. E-mail: zh_lian@cqut.edu.cn

Yongxiu Zhou 2, Chongqing University of Technology, Electronic Information and Automation College, Chongqing, 400054, China. E-mail: 240499463@qq.com

Zixiang Gao 3, Chongqing University of Technology, Electronic Information and Automation College, Chongqing, 400054, China. E-mail: 397300572@qq.com

Daxiao Chen 4, Chongqing University of Technology, Electronic Information and Automation College, Chongqing, 400054, China. E-mail: 1327258672@qq.com

In order to simplify the pupil positioning algorithms used at the present stage, this paper puts forward a coarse-to-precise pupil localization algorithm. First, the position of the eyes in the face is located by gray-level integral projection; second, the precise position of the pupil center is detected using the Hough transform; finally, the eye corner points are detected with the Harris corner detection algorithm. MATLAB experiments show the proposed algorithm to be simple and accurate.

ISCC 2015, 18-19 December 2015, Guangzhou, China

1 First Author
2 Second Author and Corresponding Author
3 Third Author
4 Fourth Author

Copyright owned by the author(s) under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND 4.0). http://pos.sissa.it/

1. Introduction

The study of the eye and its movement is key to understanding the human visual mechanism as well as human emotion and behavior. Eye detection and tracking is a necessary step in face recognition, facial expression recognition, eye movement analysis, iris recognition and other technologies, and it involves many subjects such as image processing, computer vision and pattern recognition. In recent years many eye localization methods have been presented, typically anthropometry-based methods [1], detection methods based on skin color models [2], and statistical learning methods based on training data [3]. Statistical learning is a hot spot in current eye detection research because of its high applicability and its close connection to artificial intelligence and pattern recognition. The statistical learning algorithms mainly applied to eye detection are the Adaboost algorithm [4] and deep learning algorithms [5]; both perform well in eye detection, but they essentially locate only the image region centered on the pupil and cannot detect the exact rotation of the pupil. In order to locate and track the pupil accurately, this paper presents a precise positioning algorithm: (1) use the gray-level integral projection algorithm to find the approximate eye position in the face; (2) use the Hough transform to locate the precise position of the pupil center; (3) use the Harris corner detection algorithm to find the eye corners; (4) determine the exact position of the pupil from the coordinates of the pupil center and the eye corners.
As every step of the algorithm can be realized quickly, the algorithm is simple and fast compared with traditional eye detection algorithms. Eye detection is performed on extracted face images; this paper uses the Adaboost algorithm to extract the face images [6].

2. Rough Extraction of the Human Eye

At this stage, human eye detection methods can be divided into statistics-based methods and image processing methods. Statistics-based methods, also called learning-based pattern recognition methods, mainly use the Adaboost algorithm and deep neural network learning; their characteristic is that they are intelligent algorithms. The Adaboost algorithm works well on low-quality images with eyes in various positions, but such learning methods require a large number of training samples, and the training process and the classifier are complex. In rough eye detection the required accuracy is not high, but the demand on detection speed is; this paper therefore adopts a method based on image processing.

2.1 Gray Level Gradient Integral Projection in the Vertical Direction

As the eye area has the greatest gray-level changes along the vertical direction of the face, gray integral projection can easily find the vertical position of the eyes. The vertical gray integral projection is:

P_y(x) = sum_{y=1}^{W} I(x, y)   (2.1)

where I(x, y) is the gray value at row x, column y, W is the width of the image, and P_y(x) is the gray integral value of row x of the image; the curve is shown in Fig. 1(a). As Fig. 1(a) shows, ordinary gray integral projection cannot accurately find the vertical position of the human eye, because it is only a simple sum of gray values and does not reflect the change of gray value within each row. We therefore calculate the gradient value of each row of the image and compute the gray gradient integral projection. The gradient of the image is obtained by convolving the image with a gradient operator:

(1) Compute l, the length of the scale-invariant gradient operator L:

l = round(W / 100) × 2 + 1   (2.2)

where round denotes the rounding operator.

(2) Construct the scale-invariant gradient operator L of length l, where n denotes the serial number of its elements. (2.3)

(3) Convolve the original image I with the gradient operator to obtain the gradient map I':

I' = |L * I|   (2.4)

(4) Compute the gray-level gradient integral projection of the face in the vertical direction:

P'_y(x) = sum_{y=1}^{W} I'(x, y)   (2.5)

The gray gradient integral projection curve is shown in Fig. 1(b). The gradient integral value at the eye position is significantly larger than at other positions, so the maximum of the curve locates the eyes in the vertical direction. As the eye occupies an interval rather than a single row, its width in the vertical direction is generally taken to be 1/5 of the face.
Starting from the row of the maximum, extend upward by 1/10 of the face length and downward by 1/10 of the face length; this range is the vertical range of the eyes in the face.

Figure 1: (a) The gray level integral projection in vertical direction (b) The gray gradient integral projection in vertical direction
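The gradient projection of Section 2.1 can be sketched in Python (a toy example with a 2-D list standing in for the gray image; a simple first difference along each row replaces the scale-invariant operator L, so this is an illustration, not the authors' MATLAB code):

```python
def vertical_gradient_projection(img):
    """Gray-gradient integral projection over rows, in the spirit of
    Eqs. (2.1)-(2.5).  A first difference along each row stands in for
    the scale-invariant operator L, so this is only a sketch.

    img: 2-D list of gray values, img[x][y] = row x, column y.
    Returns one integral value per row; the row with the maximum value
    approximates the vertical position of the eyes.
    """
    w = len(img[0])
    proj = []
    for row in img:
        # Gradient magnitude along the row, summed as in Eq. (2.5).
        proj.append(sum(abs(row[y + 1] - row[y]) for y in range(w - 1)))
    return proj

# Toy "face": uniform rows except one row with strong gray changes
# (the "eye" row); the projection maximum picks it out.
img = [[10] * 8 for _ in range(5)]
img[2] = [10, 200, 10, 200, 10, 200, 10, 200]
proj = vertical_gradient_projection(img)
eye_row = proj.index(max(proj))
```

Plain summation as in Eq. (2.1) would give every row of this toy image a similar total, which is exactly why the paper switches to the gradient projection.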

2.2 Gray Level Gradient Integral Projection in the Horizontal Direction

In the horizontal direction, the eye position is found from the edge map of the image. Many edge-extraction algorithms exist at the present stage: the Sobel, Prewitt, Laplacian of Gaussian, Roberts, Zero-Cross and Canny algorithms. Their common idea is to convolve the image with an operator, recompute the image gray values, and finally select an appropriate threshold to decide which pixels are edge points; they differ only in the convolution operators used. This paper uses the Sobel algorithm for edge extraction. The Sobel operators are:

S_x = [ -1  0  1 ]      S_y = [ -1  -2  -1 ]
      [ -2  0  2 ]            [  0   0   0 ]
      [ -1  0  1 ]            [  1   2   1 ]   (2.6)

First, convolve the original image I with S_x and S_y to obtain the horizontal and vertical gradient values G_x and G_y of each pixel:

G_x = S_x * I,  G_y = S_y * I   (2.7)

Then calculate the new gray value of each pixel:

G = sqrt(G_x^2 + G_y^2)   (2.8)

Finally, select an appropriate threshold to decide whether each pixel is an edge point. The resulting binary image is shown in Fig. 2. As the edge values at the face boundary are much higher than elsewhere, the integral projection curve of Fig. 2 is obtained, and the left and right boundaries of the face are determined by its two peak positions. The concrete realization is: (1) Integrate the binary image in the horizontal direction (sum the values of each column); the integral curve is shown in Fig. 2. (2) Divide the face into two halves and find the maximum value on each side, denoted y1 and y2. (3) Find the horizontal coordinates x1 and x2 corresponding to y1 and y2; x1 and x2 are the left and right boundaries of the face.
Figure 2: Integral Projection in the Horizontal Direction

With the eye positions determined by integral projection in the vertical and horizontal directions, the rough eye region is as shown in Fig. 3, ready for the precise pupil localization of the next step.
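The edge extraction and column projection of Section 2.2 can be sketched in Python (pure-list toy image; the threshold of 50 and the tiny 5x6 image are arbitrary choices for the illustration):

```python
def sobel_magnitude(img):
    """Gradient magnitude with the Sobel operators of Eqs. (2.6)-(2.8).

    img: 2-D list of gray values; border pixels are left at 0.
    """
    sx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # S_x of Eq. (2.6)
    sy = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # S_y of Eq. (2.6)
    h, w = len(img), len(img[0])
    mag = [[0.0] * w for _ in range(h)]
    for x in range(1, h - 1):
        for y in range(1, w - 1):
            gx = sum(sx[i][j] * img[x + i - 1][y + j - 1]
                     for i in range(3) for j in range(3))   # Eq. (2.7)
            gy = sum(sy[i][j] * img[x + i - 1][y + j - 1]
                     for i in range(3) for j in range(3))
            mag[x][y] = (gx * gx + gy * gy) ** 0.5          # Eq. (2.8)
    return mag

def horizontal_projection(binary):
    """Step (1) of Sec. 2.2: sum each column of the binary edge map."""
    h, w = len(binary), len(binary[0])
    return [sum(binary[x][y] for x in range(h)) for y in range(w)]

# Toy image with a vertical gray step between columns 2 and 3:
# the projection peaks at those columns.
img = [[0, 0, 0, 100, 100, 100] for _ in range(5)]
edges = [[1 if v > 50 else 0 for v in sobel_magnitude(img)[x]] for x in range(5)]
proj = horizontal_projection(edges)
```

The two peaks of `proj` play the role of y1 and y2 in step (2); their column indices are the boundaries x1 and x2.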

Figure 3: Eye Position Rough Renderings

3. Pupil Precise Localization

The rough eye image is divided into left and right halves, and the edges of each half are extracted with the Sobel operator; the results are shown in Fig. 4.

Figure 4: (a) the left edge value extraction image (b) the right edge value extraction image

As Fig. 4 shows, the pupil edge is approximately a regular circle, so the pupil center can be obtained by locating the pupil edge. This paper uses the Hough transform to find the center of the pupil. The basic principle of the Hough transform for circle detection follows from the mathematical expression of a circle:

(x - a)^2 + (y - b)^2 = r^2   (3.1)

Given the center coordinates (a, b) and the radius r, the coordinates of all points (x, y) on the circumference can be obtained. Conversely, a point (x, y) known to lie on a circle constrains the possible circle parameters (a, b, r). The circle (a, b, r) that passes through the most edge points in the picture gives the center coordinates of the pupil. The specific implementation is:

(1) Establish a three-dimensional array (A, B, R) to count the number of points (x, y) falling on each circle, where A and B are the height and width of the image and R is the smaller of A/2 and B/2.

(2) Traverse the edge points (x, y) of the image and, for each point, vote in the array (A, B, R) for every circle (a, b, r) the point may belong to.

(3) Find the maximum value in the array (A, B, R); the corresponding (a, b, r) is the pupil circle equation, and (a, b) is the coordinate of the pupil center.

The final pupil center is shown in Fig. 5.

Figure 5: Location Map of the Pupil Center

4. Calculate the Pupil Coordinates
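The circle-voting steps (1)-(3) of Section 3 can be sketched in Python (a brute-force toy version over a small synthetic edge set; a real implementation would restrict the (a, b, r) search range for speed):

```python
import math

def hough_circle(points, h, w, r_max):
    """Brute-force circular Hough transform (steps (1)-(3) of Sec. 3).

    points: list of (x, y) edge pixels.  Every (a, b, r) cell collects
    votes from the points lying on that circle; the cell with the most
    votes is taken as the pupil circle, and (a, b) as the pupil center.
    """
    acc = {}
    for (x, y) in points:
        for a in range(h):
            for b in range(w):
                # Radius this point implies for center (a, b).
                r = round(math.hypot(x - a, y - b))
                if 1 <= r <= r_max:
                    acc[(a, b, r)] = acc.get((a, b, r), 0) + 1
    return max(acc, key=acc.get)

# Toy edge set: 12 exact lattice points on a circle of radius 5
# centered at (8, 8), standing in for the pupil edge of Fig. 4.
offsets = [(5, 0), (4, 3), (3, 4), (0, 5), (-3, 4), (-4, 3),
           (-5, 0), (-4, -3), (-3, -4), (0, -5), (3, -4), (4, -3)]
points = [(8 + dx, 8 + dy) for dx, dy in offsets]
a, b, r_found = hough_circle(points, 17, 17, 8)
```

The dictionary plays the role of the three-dimensional accumulator array (A, B, R); storing only visited cells keeps the sketch short.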

Upon finding the coordinates of the center position of the pupil, a proper coordinate system is needed to express the pupil location. Since the relative position of the eye corners and the pupil is nearly constant, this paper takes the midpoint of the line connecting the inner eye corners as the origin and that line as the X axis; with the pupil radius as the unit length, the pupil position is mapped into this new coordinate system. There are many corner detection methods, such as the Harris corner detection algorithm, the SUSAN algorithm and the CSS (Curvature Scale Space) algorithm. This paper uses the Harris algorithm to detect the position of the eye corners. According to the basic principle of the Harris algorithm, image points can be divided into three types: flat regions, where the gray value changes little in both the horizontal and vertical directions; edge regions, where the gray value changes strongly in either the horizontal or the vertical direction; and corners, where the gray value changes strongly in both the horizontal and the vertical direction. By this criterion, corners can be distinguished in the picture. The specific implementation is as follows: (1) Select an appropriate region I around the eye corner as the detection area. (2) Construct the horizontal gradient operator Hx and the vertical gradient operator Hy.
The gradient operator can be chosen as the ordinary Harris corner extraction operator (4.1) or the improved Harris corner extraction operator (4.2):

H_x = ( -2  -1  0  1  2 ),  H_y = ( -2  -1  0  1  2 )^T   (4.1)

H_x = [ -1  0  1 ]      H_y = [ -1  -1  -1 ]
      [ -1  0  1 ]            [  0   0   0 ]
      [ -1  0  1 ]            [  1   1   1 ]   (4.2)

(3) Convolve the image I with the horizontal gradient operator Hx and the vertical gradient operator Hy to get the horizontal gradient matrix Ix and the vertical gradient matrix Iy:

I_x = H_x * I,  I_y = H_y * I   (4.3)

(4) Form the matrix M and smooth each of its entries by convolving once more with a Gaussian smoothing operator:

M = [ Ix^2    Ix·Iy ]
    [ Ix·Iy   Iy^2  ]   (4.4)

(5) Compute the corner response function R from the matrix M:

R = det(M) - k · tr^2(M)   (4.5)

where k is an empirical value, usually 0.04~0.06, det(M) denotes the determinant of the matrix M, and tr(M) denotes the trace of the matrix M.

(6) Take as corners the points satisfying two conditions: the R value is greater than a predetermined threshold, and the R value is the maximum within a given region.

(7) In general, more than one such point is found. As the gray value at the eye corner is smaller than elsewhere in the region, the candidate with the smallest gray value is chosen as the corner point. The corner point positions finally obtained are shown in Fig. 6.
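The Harris steps above can be sketched in Python (toy gray image as a 2-D list; a 3x3 box sum replaces the Gaussian smoothing of step (4), and k = 0.04 is one choice from the 0.04~0.06 range, so this is an illustrative sketch rather than the authors' implementation):

```python
def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k * tr(M)^2, Eqs. (4.3)-(4.5).

    Gradients use the 3x3 operators of Eq. (4.2); the Gaussian smoothing
    of step (4) is replaced by a 3x3 box sum to keep the sketch short.
    """
    h, w = len(img), len(img[0])
    ix = [[0.0] * w for _ in range(h)]
    iy = [[0.0] * w for _ in range(h)]
    for x in range(1, h - 1):
        for y in range(1, w - 1):
            # Eq. (4.3) with the operators of Eq. (4.2).
            ix[x][y] = sum(img[x + i][y + 1] - img[x + i][y - 1] for i in (-1, 0, 1))
            iy[x][y] = sum(img[x + 1][y + j] - img[x - 1][y + j] for j in (-1, 0, 1))
    resp = [[0.0] * w for _ in range(h)]
    for x in range(1, h - 1):
        for y in range(1, w - 1):
            a = b = c = 0.0  # window sums of Ix^2, Ix*Iy, Iy^2 (Eq. 4.4)
            for i in (-1, 0, 1):
                for j in (-1, 0, 1):
                    gx, gy = ix[x + i][y + j], iy[x + i][y + j]
                    a += gx * gx
                    b += gx * gy
                    c += gy * gy
            resp[x][y] = (a * c - b * b) - k * (a + c) ** 2  # Eq. (4.5)
    return resp

# Toy image: a bright square whose corner sits at (4, 4).  R is large
# only where both gradients are strong, i.e. near that corner.
img = [[100 if (x >= 4 and y >= 4) else 0 for y in range(9)] for x in range(9)]
resp = harris_response(img)
best = max((resp[x][y], x, y) for x in range(9) for y in range(9))
```

Along the edges of the square only one of Ix, Iy is large, so det(M) is near zero and R goes negative; only at the corner are both large, which is exactly the three-way classification described above.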

Figure 6: Corner Point Positioning Map

By transforming the eye corner coordinates into the pupil coordinate system, the precise coordinates of the pupil location are finally obtained, as shown in Fig. 7. Although in the original picture the camera is not horizontal relative to the face, after the coordinate transformation the pupil centers are brought to the same horizontal direction.

Figure 7: Pupil Precise Location Coordinates

5. Experimental Results

The algorithm was verified in MATLAB on a computer with an Intel(R) Core(TM) i7-4500U CPU at 1.80 GHz, 8.00 GB of RAM, and a 64-bit operating system. The test images were captured with an ordinary camera and cover faces with different pupil poses. The final experimental results are shown in Fig. 8. The average processing time is less than 0.3 s, and all test images are accurately located.

Figure 8: Different Pupil Position and Corner Point Positioning Map

6. Conclusion

This paper presents a coarse-to-precise algorithm for locating the eye pupil: the pupil is accurately positioned using the gray integral projection algorithm, the Hough transform and the Harris corner detection algorithm. MATLAB experiments prove the algorithm to be simple and accurate.

In future work we will study a pupil tracking algorithm that uses the accurate pupil position obtained here to track the pupil. During the experiments, the Hough transform may locate the pupil center incorrectly when the pupil is partly covered by the eyelid; this optimization problem remains to be solved in future research.

References

[1] Jianxin Wu, Zhihua Zhou. Efficient face candidates selector for face detection [J]. Pattern Recognition, 2003, 36(5): 1175-1186.

[2] Yuseok Ban, Sang-Ki Kim, Sooyeon Kim, Kar-Ann Toh, Sangyoun Lee. Face detection based on skin color likelihood [J]. Pattern Recognition, 2014, 47(4): 1573-1585.

[3] Mahir Faik Karaaba, Lambert Schomaker, Marco Wiering. Machine learning for multi-view eye-pair detection [J]. Engineering Applications of Artificial Intelligence, 2014, 33(8): 69-79.

[4] Lingmin Long. Research on face detection and eye location algorithm based on Adaboost [D]. Chengdu: University of Electronic Science and Technology, 2008. (In Chinese)

[5] Miaozhen Lin. Research on face recognition based on deep learning [D]. Dalian: Dalian University of Technology, 2013. (In Chinese)

[6] Cuihuan Du, Hong Zhu, Liming Luo, Jie Liu, Xiangyang Huang. Face detection in video based on AdaBoost algorithm and skin model [J]. The Journal of China Universities of Posts and Telecommunications, 2013, 20(1): 6-9. (In Chinese)