A REAL-TIME FACIAL FEATURE BASED HEAD TRACKER

Jari Hannuksela, Janne Heikkilä and Matti Pietikäinen
Machine Vision Group, Infotech Oulu
P.O. Box 4500, FIN-90014 University of Oulu, Finland

ABSTRACT

This paper presents a fast and efficient head tracking approach. Skin detection, gray-scale morphology and a geometrical face model are applied to roughly detect the face region and to extract facial features automatically. A novel Kalman filtering framework is utilized for tracking and for estimating the 3-D pose of the moving head. An application to cursor control on a computer display is presented. Experiments with real image sequences show that the system is able to extract and track facial features reliably. Pose estimation accuracy was tested with synthetic data, and good preliminary results were obtained. The real-time performance achieved indicates that the proposed system can be applied on platforms where computational resources are limited.

1. INTRODUCTION

Proactive computing technology aims at the development of more intelligent and friendlier human-computer interaction (HCI) methods. In the future, computers will interact with people in a more natural way. To achieve this kind of advanced interaction, the system should understand human behaviour and respond appropriately, for example by observing one's facial expressions and gestures. Machine vision provides an excellent way of realizing such an intelligent human-computer interface, enabling a user to control a computer without any physical contact with devices such as keyboards, mice and displays.

The estimation of head position and orientation (pose) allows recognition of simple gestures such as nodding and head shaking. The pose also makes it possible to estimate roughly the human gaze direction and to determine which part of the screen the user is looking at. Instead of using a mouse, the user can control the desktop of a computer or a smart phone and activate the desired event using head movements.

A number of real-time head tracking and pose estimation methods have been proposed, such as [1, 2, 3, 4]. Various methods have also been proposed for facial feature extraction [5], which is a crucial stage in a feature-based tracker. However, the current real-time methods need a lot of computational resources or special hardware, and certain functionalities such as automatic initialization are often missing. There is also still a need to improve the speed, robustness and accuracy of such methods. Our purpose is to develop a more flexible approach which can be applied in resource-limited embedded systems such as smart phones, while still retaining good pose estimation accuracy.

We propose a fast real-time head tracker based on the extraction of facial features and continuous estimation of the head pose. The implementation consists of initialization and tracking stages. In the initialization stage, facial features are extracted automatically whenever a human face is detected. For proper face candidate detection, a near-frontal view is needed. After success in the first stage, tracking is started. The head pose is initialized by a perspective three-point (P3P) algorithm. During tracking, a Kalman filter estimates the head pose utilizing a simple head model containing the eyes and the mouth. The approximate gaze direction is also estimated from the orientation of the head. To demonstrate the practicality of the approach, gaze direction information is applied to cursor control on a computer display.

2. FACIAL FEATURE EXTRACTION

Facial features are extracted in three steps. First, the face-like regions are found by skin color analysis and the shapes of the regions are verified. Next, the facial feature candidates are searched for within those regions. Finally, the candidates are evaluated to find the features of interest.

Figure 1: Skin detection. a) Input image, b) skin detection result.

2.1. Face-like region detection

Color information has been widely used in face detection and tracking, because processing color is faster than processing other features. We detect possible face-like regions in an input image (Fig. 1a) utilizing the skin detection proposed by Martinkauppi et al. [6]. They found Normalized Color Coordinates (NCC) combined with the skin locus to be the most appropriate for skin detection under varying illumination. First, the image, presented in the RGB color space, is converted to the NCC space with chromaticities r, g and b. Because r + g + b = 1, we only use the two chromaticities r and b for detection in a histogram where the skin color occupies a small cluster. The cluster is found by training with facial images under different illuminations. If the r and b values of a pixel fall into the area of the skin locus, the pixel belongs to skin. In practice, detection is implemented using a look-up table. The result is enhanced using a morphological binary closing with a mask size of 3 x 3 pixels (Fig. 1b).

After the skin regions have been detected, they are treated as possible faces. To distinguish faces from other skin-like areas, connected component analysis is performed, and only large components of skin color are kept. The size of the bounding rectangle for the skin region should be at least 1% of the image size.
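As an illustration, the following Python/OpenCV sketch outlines the skin-color segmentation step described above. It is not the authors' implementation: the skin-locus look-up table `skin_lut` (a boolean histogram over quantized r and b chromaticities, trained offline from face images), the number of histogram bins, and the interpretation of the minimum-size criterion as a bounding-box area ratio are all assumptions.

```python
import cv2
import numpy as np

def detect_skin_regions(bgr, skin_lut, bins=64, min_area_ratio=0.01):
    """Segment skin-colored regions using NCC chromaticities and a skin-locus LUT.

    skin_lut : (bins, bins) boolean array over quantized (r, b) chromaticities,
               assumed to be trained offline from face images under varying light.
    """
    img = bgr.astype(np.float32)
    s = img.sum(axis=2) + 1e-6                  # R + G + B per pixel
    r = img[:, :, 2] / s                        # normalized red chromaticity
    b = img[:, :, 0] / s                        # normalized blue chromaticity

    # Quantize chromaticities and look them up in the trained skin locus.
    ri = np.clip((r * bins).astype(int), 0, bins - 1)
    bi = np.clip((b * bins).astype(int), 0, bins - 1)
    mask = skin_lut[ri, bi].astype(np.uint8) * 255

    # Clean up the binary result with a 3 x 3 morphological closing.
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    # Keep only sufficiently large connected components as face candidates.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    min_area = min_area_ratio * mask.shape[0] * mask.shape[1]
    boxes = [stats[i, :4] for i in range(1, n)
             if stats[i, cv2.CC_STAT_WIDTH] * stats[i, cv2.CC_STAT_HEIGHT] >= min_area]
    return mask, boxes   # bounding boxes (x, y, w, h) of face-like regions
```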
2.2. Facial feature candidate extraction

Once a possible face is detected, the features are searched for within the facial region. Based on the observation that facial features are usually darker than their surroundings, we use a morphological valley detector. Morphological operators are well suited for rough feature extraction due to their fast implementation and robust nature. For example, morphology has been utilized for eye region detection [7]. We apply an operation called the black top-hat to the gray-scale image. First, a closing operation is performed and then the original image is subtracted from the result. The valley image V is defined as

V(x, y) = (f • B)(x, y) - f(x, y),   (1)

where f(x, y) is the image intensity at point (x, y), B is a small structuring element and • denotes gray-scale morphological closing. The size of B is 3 x 3 pixels. A pixel at (x, y) belongs to a feature candidate if

f(x, y) < T_L  and  V(x, y) > T_V,   (2)

where T_L and T_V are pre-defined thresholds. The detection result can be noisy and may contain isolated pixels. Therefore, we improve the result with a morphological binary closing using a mask size of 3 x 3 pixels. Then, connected component analysis is performed and the centers of mass of the resulting objects are used as feature candidates.

Figure 2: Facial feature extraction.
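A minimal sketch of the valley detection of Eqs. (1)-(2) using OpenCV's black top-hat operator follows; the threshold values standing in for T_L and T_V are placeholders, since their actual values are not given above.

```python
import cv2
import numpy as np

def extract_feature_candidates(gray, t_low=80, t_valley=20):
    """Morphological valley detection for dark facial features (Eqs. 1-2).

    t_low and t_valley stand in for the pre-defined thresholds T_L and T_V;
    the values used in the paper are assumptions here.
    """
    kernel = np.ones((3, 3), np.uint8)

    # Black top-hat: closing of the image minus the original image (Eq. 1).
    valley = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)

    # A pixel is a candidate if it is dark enough and lies in a deep valley (Eq. 2).
    mask = ((gray < t_low) & (valley > t_valley)).astype(np.uint8) * 255

    # Remove isolated pixels with a 3 x 3 binary closing.
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    # Connected components; centers of mass become the feature candidates.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    return [tuple(centroids[i]) for i in range(1, n)]   # (x, y) candidate points
```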

2.3. Best constellation search

After the possible facial features have been detected, a method similar to the one proposed by Jeng et al. [8] is applied to evaluate feature constellations. Based on experimental evaluation, we have modified their face model and search procedure to be more appropriate for our purposes. The original model does not match western faces very well. Also, two nostrils are used instead of the nose. We propose a geometrical face model including the eyes, eyebrows, nostrils and mouth, as shown in Fig. 3. The line passing through the centers of the eyes is called the base line. The distances d_mouth, d_nostril and d_eyebrow indicate the distances from the corresponding features to the base line.

Figure 3: Geometry of the face model.

Two features form a possible eye pair if they are located in the upper half of the face region and the distance D between them is under 80% of the width of the face bounding box. Also, the base line angle should be under 30 degrees. These conditions significantly reduce the number of potential eye pairs. Based on D, the reference distances D_nostril = 0.7D, D_eyebrow = 0.35D and D_mouth = 1.1D are defined. The model distances were found from measurements of several real faces.

For each eye pair, the other facial features are searched for. The possible mouth for an eye pair is searched for in a region located within the distance 1.1D from the base line. If that region contains a feature lying between the boundary lines c_1 and c_2, it is treated as the mouth for this eye pair. In order to rank the many candidates, a special evaluation function is used for each facial feature:

E_feature = exp(-10 ((d_feature - D_feature) / D)^2),   (3)

where feature = {mouth, nostril, eyebrow}. The evaluation function for the eye pair is

E_eyepair = exp(-10 ((D - 0.4 BB_width) / D)^2),   (4)

where BB_width is the width of the face bounding box. The total evaluation value is a weighted sum of the values for each facial feature. The weights for the eye pair, mouth, nostrils and eyebrows are 0.4, 0.3, 0.1 and 0.05, respectively. Each weight is based on the importance of the given feature for face detection; the eyes are considered the most important features. The constellation with the largest evaluation value is assumed to be a face. Fig. 2 shows an example of facial feature extraction, where the valley detection result is on the left, the candidates in the middle, and the best constellation on the right.
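A sketch of the constellation scoring of Eqs. (3)-(4) is given below, using the model distances and weights listed above. The matching of candidates to the mouth, nostril and eyebrow slots is simplified to picking, for each slot, the candidate whose distance to the base line best fits the model distance; the exact matching rule (e.g., the boundary lines c_1 and c_2) is not reproduced.

```python
import numpy as np

# Model distances relative to the eye distance D, and feature weights (Sec. 2.3).
REF_DIST = {"mouth": 1.1, "nostril": 0.7, "eyebrow": 0.35}
WEIGHTS = {"eyepair": 0.4, "mouth": 0.3, "nostril": 0.1, "eyebrow": 0.05}

def score_constellation(eye1, eye2, candidates, bb_width):
    """Score one eye-pair hypothesis against the geometrical face model (Eqs. 3-4).

    candidates : remaining (x, y) feature candidates; the slot assignment below
    is a simplification, not the paper's exact search procedure.
    """
    (x1, y1), (x2, y2) = eye1, eye2
    d_eyes = np.hypot(x2 - x1, y2 - y1)

    # Base line ax + by + c = 0 through the two eye centers.
    a, b = y2 - y1, x1 - x2
    c = -(a * x1 + b * y1)
    norm = np.hypot(a, b)

    def dist_to_baseline(p):
        return abs(a * p[0] + b * p[1] + c) / norm

    # Eq. (4): eye-pair term.
    score = WEIGHTS["eyepair"] * np.exp(-10.0 * ((d_eyes - 0.4 * bb_width) / d_eyes) ** 2)

    # Eq. (3): one term per remaining feature type.
    for name in ("mouth", "nostril", "eyebrow"):
        target = REF_DIST[name] * d_eyes
        if candidates:
            err = min(abs(dist_to_baseline(p) - target) for p in candidates)
            score += WEIGHTS[name] * np.exp(-10.0 * (err / d_eyes) ** 2)
    return score

# The eye pair maximizing this score over all valid hypotheses is taken as the face.
```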
3. HEAD TRACKING

The pose of a human head can be estimated by extracting facial feature correspondences between successive images. However, the measurements are usually affected by noise that may produce large estimation uncertainties. Visual tracking of the facial features in a video sequence with a proper filtering technique can improve the result significantly. We use Kalman filtering to estimate the 3-D pose of a moving head and to track the facial features.

3.1. Head pose and motion model

In order to track the head, we propose a simple rigid head model including the mouth (P_1) and the eyes (P_2 and P_3), as shown in Fig. 4. The 3-D pose of the head can be solved using only these three points if the corresponding 2-D image observations are known. The center of the head coordinate system is set to the center of gravity of these three points. The points of the model are defined as follows:

P_1 = [40α, 0, 0]^T,  P_2 = [-20α, 30, 0]^T,  P_3 = [-20α, -30, 0]^T,

where the scaling factor α = d_mouth / D.

Figure 4: Coordinate systems involved (head, camera, image and display).

We model the head pose and motion using first-order dynamics, which allows prediction and filtering of the pose from a sequence of images. The head is treated as a rigid object that moves with constant velocity. Accelerations are modelled as system noise, because the head motion is assumed to be smooth. The motion model is

x_{k+1} = Φ_k x_k + Γ_k ε_k,   (5)

where the parameters to be estimated are represented by the state vector x_k at time instant k, and Φ_k is the state transition matrix, which relates the state at time instant k to the state at time instant k+1. The term Γ_k ε_k models the uncertainty of the motion model. The process noise ε_k is assumed to be Gaussian distributed with expected value E{ε_k} = 0 and covariance matrix Q_k = E{ε_k ε_k^T}. Γ_k is the process noise transition matrix. The state vector x_k consists of the position (x_k, y_k, z_k) and orientation (ω_k, ϕ_k, κ_k) of the head with respect to the camera and the corresponding velocities at time instant k. It is defined as

x_k = [x_k, ẋ_k, y_k, ẏ_k, z_k, ż_k, ω_k, ω̇_k, ϕ_k, ϕ̇_k, κ_k, κ̇_k]^T.

The initial 3-D pose of the head is determined using the perspective-three-point (P3P) algorithm [9]. However, the solution is not unique, and there exist up to four possible solutions for the pose. In order to evaluate all possible poses, we solve the system in closed form [10]. We choose the pose for which the z-axis of the head coordinate system intersects the display nearest to its center. In the beginning, the velocities are set to zero. The time step between two successive images is normalized to 1. We approximate the variances of the process noise from the maximum accelerations allowed.
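The constant-velocity model of Eq. (5) can be written per pose component as a 2 x 2 block. The sketch below builds Φ_k, Γ_k and Q_k under the assumptions of a normalized time step of 1 and independent channels; the maximum-acceleration values are placeholders, not the actual settings of the system.

```python
import numpy as np

def constant_velocity_model(dt=1.0, max_acc=(50.0, 50.0, 50.0, 0.1, 0.1, 0.1)):
    """Build Phi, Gamma and Q for the constant-velocity model of Eq. (5).

    The state is [x, x_dot, y, y_dot, z, z_dot, omega, omega_dot, phi, phi_dot,
    kappa, kappa_dot]; max_acc holds assumed maximum accelerations per channel
    (mm/frame^2 for translation, rad/frame^2 for rotation), used to set the
    process noise variances as described in Section 3.1.
    """
    n = 6                                   # six pose components
    Phi = np.zeros((2 * n, 2 * n))
    Gamma = np.zeros((2 * n, n))
    Q = np.zeros((n, n))
    for i in range(n):
        # Per-channel constant-velocity block: position += velocity * dt.
        Phi[2 * i:2 * i + 2, 2 * i:2 * i + 2] = [[1.0, dt], [0.0, 1.0]]
        # Acceleration noise enters position as dt^2/2 and velocity as dt.
        Gamma[2 * i:2 * i + 2, i] = [0.5 * dt ** 2, dt]
        Q[i, i] = max_acc[i] ** 2           # crude variance from the max acceleration
    return Phi, Gamma, Q
```

In the prediction step of the filter, the state covariance is then propagated as P ← Φ P Φ^T + Γ Q Γ^T.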

3.2. Measurements

We use a perspective camera model to relate the object points P_i to the image points (u_i, v_i) of the three facial features during tracking, as shown in Fig. 4. The camera is assumed to be pre-calibrated. The 3-D pose of the head is computed using these points. However, with only three points the solution is not unique, as mentioned in the previous section. Our idea is to evaluate all the possible solutions and choose the one nearest to the previous pose as the measurement at a particular time instant. Fig. 5 shows a block diagram of the tracker.

Figure 5: Block diagram of the tracker.

The measurement model, which maps the state vector to the pose measurements, is defined as

z_k = H x_k + η_k,   (6)

where H is the 6 x 12 observation matrix that selects the six pose components from the state vector, i.e., each row of H contains a single one at the column of the corresponding pose component (columns 1, 3, 5, 7, 9 and 11) and zeros elsewhere. The measurement noise η_k models the uncertainty in the pose measurements; it is assumed to be Gaussian distributed with expected value E{η_k} = 0 and covariance matrix R_k = E{η_k η_k^T}. The pose noise covariance R_k can be derived from the image measurement noise, which is assumed to be Gaussian distributed with zero mean and a predefined variance. Because the perspective camera model is nonlinear, linearization is needed. Linearization is performed using the partial derivatives of the image measurements with respect to the pose components of x_k to obtain the Jacobian matrix J. R_k is then obtained as

R_k = J^{-1} C_k J^{-T},   (7)

where C_k is the image measurement noise covariance matrix.

The actual image measurement is performed in a small region around the predicted location of each feature, which decreases the computational load. The size of the region is adjusted dynamically by projecting the estimation uncertainty to the image. The features are extracted using valley detection, similarly to the initialization stage (Section 2.2). If there are several components within a search region, the one nearest to the predicted position is chosen.
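A sketch of how the observation matrix H of Eq. (6) and the pose-measurement covariance R_k of Eq. (7) could be formed is given below. The Euler-angle convention (ω, ϕ, κ taken as rotations about the camera x-, y- and z-axes), the pinhole intrinsics (focal length f in pixels, principal point c) and the use of a numerical Jacobian are assumptions made here for illustration; the partial derivatives could equally be derived analytically from the perspective model.

```python
import numpy as np

# Observation matrix H of Eq. (6): picks the 6 pose components (even indices 0, 2, ..., 10)
# out of the 12-dimensional state [x, x_dot, y, y_dot, ..., kappa, kappa_dot].
H = np.zeros((6, 12))
H[np.arange(6), np.arange(6) * 2] = 1.0

def project_points(pose, model_pts, f, c):
    """Project the 3-D model points under pose = (x, y, z, omega, phi, kappa).

    Assumes omega/phi/kappa are rotations about the camera x/y/z axes and a
    pinhole camera with focal length f (pixels) and principal point c.
    pose : length-6 NumPy array; model_pts : (3, 3) array of head model points.
    """
    x, y, z, w, p, k = pose
    Rx = np.array([[1, 0, 0], [0, np.cos(w), -np.sin(w)], [0, np.sin(w), np.cos(w)]])
    Ry = np.array([[np.cos(p), 0, np.sin(p)], [0, 1, 0], [-np.sin(p), 0, np.cos(p)]])
    Rz = np.array([[np.cos(k), -np.sin(k), 0], [np.sin(k), np.cos(k), 0], [0, 0, 1]])
    R = Rz @ Ry @ Rx
    P = (R @ model_pts.T).T + np.array([x, y, z])         # points in the camera frame
    uv = f * P[:, :2] / P[:, 2:3] + c                     # perspective projection
    return uv.ravel()                                     # (u1, v1, u2, v2, u3, v3)

def pose_measurement_cov(pose, model_pts, f, c, img_var=4.0, eps=1e-4):
    """R_k = J^{-1} C_k J^{-T} (Eq. 7) with a numerical 6 x 6 Jacobian J."""
    J = np.zeros((6, 6))
    base = project_points(pose, model_pts, f, c)
    for i in range(6):
        d = np.zeros(6)
        d[i] = eps
        J[:, i] = (project_points(pose + d, model_pts, f, c) - base) / eps
    C = img_var * np.eye(6)          # i.i.d. image noise (Sec. 4 simulates a 4-pixel variance)
    Jinv = np.linalg.inv(J)
    return Jinv @ C @ Jinv.T
```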
3.3. Tracker

The Kalman filter algorithm estimates the pose recursively by repeating two stages: prediction and correction [11]. In the first stage, the pose and the locations of the features at the next time instant are predicted based on the previous pose estimate and the dynamical model. In the correction stage, the predicted pose is adjusted using the measured pose. The feature measurements can occasionally contain errors that make tracking fail. If a feature position is not within the uncertainty limits derived from the error covariance matrix, or a feature is lost, then the predicted pose is used instead of the measurement. If this condition persists for more than three frames, the initialization is started again.
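A minimal sketch of one tracking iteration (Sections 3.2-3.3) is shown below, reusing the matrices from the earlier sketches; R could be the per-frame covariance R_k computed above. The chi-square gate and the Euclidean pose distance used to pick among the P3P solutions are simplified stand-ins for the criteria described in the text.

```python
import numpy as np

def track_step(x, P, Phi, Gamma, Q, H, R, p3p_solutions, misses):
    """One Kalman prediction/correction cycle for the 12-D head state.

    p3p_solutions : list of candidate 6-D pose measurements (up to four P3P
                    solutions); may be empty if the features were lost.
    Returns the updated state, covariance and consecutive-miss counter.
    """
    # Prediction (Eq. 5).
    x_pred = Phi @ x
    P_pred = Phi @ P @ Phi.T + Gamma @ Q @ Gamma.T

    accept = False
    if p3p_solutions:
        # Choose the P3P solution nearest to the predicted pose as the measurement.
        z = min(p3p_solutions, key=lambda s: np.linalg.norm(s - H @ x_pred))
        # Gate the measurement against the predicted measurement covariance.
        S = H @ P_pred @ H.T + R
        innov = z - H @ x_pred
        accept = innov @ np.linalg.solve(S, innov) < 12.59   # chi^2 gate, 6 dof, ~95%

    if accept:
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_new = x_pred + K @ innov
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        misses = 0
    else:
        # Feature lost or outside the uncertainty limits: coast on the prediction.
        x_new, P_new = x_pred, P_pred
        misses += 1

    # The caller restarts the initialization stage once misses exceeds three frames.
    return x_new, P_new, misses
```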

4. EXPERIMENTAL RESULTS

We evaluated the system's ability to track the facial features with several real image sequences containing large head motions and pose changes. A digital video camera was placed above the display of the computer. The length of the recorded videos was 200 frames. The user in the image tries to control the cursor on the computer display. The pixel resolution was 320 x 240 and the frame rate 15 fps. Because the ground truth was not known, we assessed the operation visually. The tracking was successful with most of the sequences. Fig. 6 shows an example of the feature locations during tracking; the tracked features, the mouth and the eyes, are marked with red dots. Although the sequence contains a man with a beard, the system still works properly. During testing we met major problems with only one sequence, which contains a person with glasses. Certain types of spectacle rims and bad illumination can cause the system to fail in tracking.

Figure 6: Facial feature tracking example.

We evaluated the theoretical pose estimation accuracy with simulated data, due to the lack of ground truth for the head pose. The generated data contains head poses corresponding to a real situation where the user is trying to control the cursor on the monitor display. The initial pose was [19.11, 8.65, 685.9, 2.34, 0.2, 1.59]. The focal length was f = 15 mm and the conversion factor from mm to pixels was 5. The measurements were generated by projecting the head model points to the image plane using the ground truth pose. We added random noise with a variance of 4 pixels to the measurements in order to simulate the true error. Fig. 7 shows a sample result, where the ground truth is drawn with a solid line and the estimated pose with a dashed line. It can be clearly seen that the error in the z-translation is the biggest. The obvious reason is that the depth information is more uncertain due to the perspective camera. In order to get more reliable results, the errors were averaged over repeated simulations. The root mean square (RMS) errors obtained for the translations along the x-, y- and z-axes and the rotations about the x-, y- and z-axes were 1.61 mm, 1.3 mm, 6.23 mm, 0.08 rad, 0.04 rad and 0.03 rad, respectively.

Figure 7: Head pose estimation experiment: translations along the x-, y- and z-axes [mm] and rotations about the x-, y- and z-axes [rad] as a function of frame number.

The system was implemented in C++ to validate the real-time performance. The processing speed was measured on a PC with a 2.2 GHz CPU and 512 MB of memory. The times are averages over repeated executions in typical situations. The total time for initialization was about 100 ms, and tracking took 20 ms per frame. This gives a theoretical maximum frame rate of 50 fps in tracking. However, due to the limitations of the video acquisition, we used a frame rate of 15 fps. It should be noted that the processing speed also depends on the size of the face and the features. Based on the experiments, the system clearly fulfills the real-time demands.

5. APPLICATION: USER INTERFACE CONTROL

The proposed approach was applied to cursor control on a computer display. The rough gaze direction is estimated from the orientation of the head. The direction of the gaze is taken as the normal of the plane defined by the three extracted facial features: the mouth (P_1) and the centers of the eyes (P_2 and P_3), as shown in Fig. 4. In order to solve the gaze position on the display, the gaze vector has to be transformed from the head coordinate system to the display coordinate system. First, the gaze vector is transformed to the camera coordinate system using the estimated 3-D pose. After that, the transformation to the display coordinate system is performed; it is assumed that the position and orientation of the display in the camera coordinate system are known in advance. Then, the intersection of the gaze vector with the display can be solved. Once the position P_D on the display is obtained, the user can control the cursor in the graphical user interface.

The operation of our gaze direction estimation method was verified with several real image sequences, using a setup similar to the facial feature tracking tests. In these experiments the user makes some dedicated movements. Fig. 8 shows the trajectory obtained when the user alternates his attention between the right and left sides of the computer display. Occasionally, noisy feature measurements cause erroneous gaze position estimates; in such cases more filtering is needed. The experiments show that the system is capable of detecting the gaze position on the display. The variation in the trajectory is quite large, but it should be remembered that the exact gaze position is not known. However, the movement appears to be roughly correct.

Figure 8: An example trajectory obtained with a real image sequence.
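A sketch of the gaze-to-display mapping described above is given below. It assumes the display pose in the camera frame is given as a rotation matrix and origin, that the display plane is spanned by the display's x- and y-axes, and that the head z-axis is used as the gaze direction; none of these conventions are stated explicitly in the text.

```python
import numpy as np

def gaze_on_display(R_head, t_head, R_disp, t_disp):
    """Intersect the head's gaze ray with the display plane.

    R_head, t_head : head pose in the camera frame (from the Kalman filter).
    R_disp, t_disp : display pose in the camera frame, assumed known in advance;
                     the display plane is spanned by its x- and y-axes.
    Returns the intersection point in display coordinates (x, y), or None if
    the gaze ray is parallel to or points away from the display plane.
    """
    origin = t_head                     # gaze ray starts at the head origin
    gaze = R_head[:, 2]                 # head z-axis expressed in camera coordinates

    normal = R_disp[:, 2]               # display plane normal in camera coordinates
    denom = normal @ gaze
    if abs(denom) < 1e-9:
        return None                     # ray parallel to the display plane
    s = normal @ (t_disp - origin) / denom
    if s <= 0:
        return None                     # display is behind the gaze direction
    p_cam = origin + s * gaze           # intersection point in camera coordinates

    # Express the intersection in the display coordinate system.
    p_disp = R_disp.T @ (p_cam - t_disp)
    return p_disp[0], p_disp[1]         # display-plane coordinates (z is ~0)
```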

6. CONCLUSIONS

We have presented a new facial feature based head tracking approach using skin detection, gray-scale morphology and a geometrical face model. The 3-D pose of the moving head is estimated using Kalman filtering. The advantage of the system is its low computational cost, which makes it usable on platforms such as mobile devices where computational resources are limited. Although very good preliminary pose estimation results were achieved, it should be remembered that the accuracy values obtained are more or less theoretical and apply only under certain constraints. The main factors causing problems for the system are users with glasses, occlusions and shadows. In future work, the feature extraction methods will be further investigated in order to find more stable features, which do not necessarily have to be predefined facial features.

7. ACKNOWLEDGEMENTS

The financial support of the Academy of Finland and the National Technology Agency of Finland is gratefully acknowledged.

8. REFERENCES

[1] A. Azarbayejani, T. Starner, B. Horowitz, and A. Pentland, "Visually controlled graphics," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 15, no. 6, June 1993.

[2] M. La Cascia, S. Sclaroff, and V. Athitsos, "Fast, reliable head tracking under varying illumination: An approach based on registration of texture-mapped 3D models," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 22, no. 4, Apr. 2000.

[3] T. S. Jebara and A. Pentland, "Parametrized structure from motion for 3D adaptive feedback tracking of faces," in Proc. of IEEE Conf. on Computer Vision and Pattern Recognition, June 1997.

[4] A. Gee and R. Cipolla, "Fast visual tracking by temporal consensus," Image and Vision Computing, vol. 14, 1996.

[5] M. H. Yang, D. Kriegman, and N. Ahuja, "Detecting faces in images: A survey," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 24, no. 1, Jan. 2002.

[6] B. Martinkauppi, M. Soriano, and M. Pietikäinen, "Detection of skin color under changing illumination: A comparative study," in Proc. of the 12th International Conf. on Image Analysis and Processing, Mantova, Italy, Sept. 2003.

[7] K. W. Wong, K. M. Lam, and W. C. Siu, "A robust scheme for live detection of human faces in color images," Signal Processing: Image Communication, vol. 18, no. 2, Feb. 2003.

[8] S. H. Jeng, H. Y. M. Liao, C. C. Han, M. Y. Chern, and Y. T. Liu, "Facial feature detection using geometrical face model: An efficient approach," Pattern Recognition, vol. 31, no. 3, Mar. 1998.

[9] M. A. Fischler and R. C. Bolles, "Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography," Comm. of the ACM, vol. 24, no. 6, June 1981.

[10] S. Linnainmaa, D. Harwood, and L. S. Davis, "Pose determination of a three-dimensional object using triangle pairs," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 10, no. 5, Sept. 1988.

[11] R. E. Kalman, "A new approach to linear filtering and prediction problems," Trans. of the ASME - Journal of Basic Engineering, vol. 82 (Series D), 1960.
