FACIAL MOVEMENT BASED PERSON AUTHENTICATION

FACIAL MOVEMENT BASED PERSON AUTHENTICATION. Pengqing Xie, Yang Liu (Presenter), Yong Guan. Iowa State University, Department of Electrical and Computer Engineering

OUTLINE Introduction Literature Review Methodology Experiment Conclusion and Future Work

INTRODUCTION Face recognition has received significant attention in recent years, especially in the field of authentication. Most face recognition techniques work by extracting static facial features such as vertical eye position, mouth width, and nose height (Figure 1). Can we exploit dynamic features during recognition? Figure 1. Facebook's DeepFace project.

INTRODUCTION Brain activity studies have revealed that face perception involves not only face-specific brain areas but also coherent neural activity in regions devoted to motion perception and gaze control. In addition to basic static features related to shape and color, a dynamic signature can therefore be used to augment face recognition. We believe that dynamic features capture facial behaviors, which makes them very difficult to replicate and thus more distinctive.

INTRODUCTION Since most face recognition algorithms analyze only static features, they cannot distinguish between a real face and a counterfeit one. Researchers have shown that the face recognition software offered on laptops from Lenovo, Toshiba, and Asus can be fooled with a manipulated digital image. This is known as a playback attack, which can be prevented if 3D dynamic features are employed. Figure 2. Face recognition software detecting a face displayed on a computer screen.

LITERATURE REVIEW Dynamic face recognition: clusters facial expressions into Hidden Markov Models (HMMs) and estimates the emission probabilities of each state. Not applicable to our approach, since we are interested in consecutive facial movements. Face liveness detection against playback attacks: considers movements of the eyes, eyebrows, nose, mouth, and lips; a 2D-based method will still suffer from video-based spoofing attacks. Our approach utilizes universal facial movements, eliminating the need for a specialized liveness detection unit.

METHODOLOGY A 3D face model (CANDIDE-3) consisting of shape units (SUs) is constructed to characterize facial features (Figure 3). A facial expression is decomposed into action units (AUs) as defined by the Facial Action Coding System (FACS), which encodes individual facial muscle movements from changes in facial appearance. Each AU activation lies in the range -1 to +1, so face recognition reduces to matching two time series. Figure 3. The CANDIDE-3 3D face model.
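As a concrete illustration of this representation, each recorded sample can be stored as a short multivariate time series with one channel per AU and activations clipped to [-1, +1]. This is a minimal sketch; the specific AU names and the frame layout are illustrative assumptions, not taken from the slides.

```python
import numpy as np

# Hypothetical set of six CANDIDE-3-style action units (assumed for illustration).
AU_NAMES = ["jaw_drop", "lip_stretcher", "brow_lowerer",
            "upper_lip_raiser", "lip_corner_depressor", "outer_brow_raiser"]

def make_sample(frames):
    """Turn raw per-frame AU activations into a (num_frames, 6) array in [-1, +1]."""
    sample = np.asarray(frames, dtype=float)
    assert sample.shape[1] == len(AU_NAMES)
    return np.clip(sample, -1.0, 1.0)

def au_trajectory(sample, au_name):
    """Extract the time series of a single AU, e.g. the Lip Stretcher."""
    return sample[:, AU_NAMES.index(au_name)]
```

With this layout, matching two facial-movement samples amounts to comparing the six per-AU trajectories pairwise.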

METHODOLOGY Longest Common Subsequence (LCSS) is selected for matching time series. Compared with other methods (Euclidean distance and dynamic time warping), which are sensitive to noise and time variance, LCSS achieves more robust results and offers flexibility in matching through its time and space thresholds. The authentication scheme can be described as follows. Enrollment: based on LCSS similarity measurements, a model is computed from a number of AU trajectories to represent the identity; time and space thresholds are adaptively computed and stored along with the model to guarantee robustness. Validation: the incoming AU trajectories are collected and compared with the existing legitimate models, and the similarities produced by each AU trajectory match are combined to determine the final authentication decision.
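The thresholded LCSS matching described above can be sketched with the standard dynamic-programming formulation. The threshold names eps (space) and delta (time) and the normalization by the shorter series are assumptions, since the slides do not spell out the exact variant used.

```python
def lcss_similarity(a, b, eps, delta):
    """Normalized LCSS similarity between two AU trajectories.

    Two points match when their values differ by less than eps (space
    threshold) and their time indices differ by at most delta (time
    threshold). The LCSS length is normalized by the length of the
    shorter series, giving a score in [0, 1].
    """
    n, m = len(a), len(b)
    table = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if abs(a[i - 1] - b[j - 1]) < eps and abs(i - j) <= delta:
                table[i][j] = table[i - 1][j - 1] + 1
            else:
                table[i][j] = max(table[i - 1][j], table[i][j - 1])
    return table[n][m] / min(n, m)
```

Unlike Euclidean distance, points that fall outside the thresholds are simply skipped rather than penalized, which is what makes LCSS robust to noise and small timing differences.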

EXPERIMENT Five subjects were invited to participate in our experiment. Each subject was instructed to pronounce "WE" 30 times in front of a Kinect camera, within 4 seconds per utterance. For each sample, six action units were extracted, each forming a time series; 20 samples were used for enrollment and 10 for cross-validation. For each AU, a time series model was constructed and stored. For authentication, the six LCSS similarities obtained from the AUs were measured and fused to give a combined result.
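The fusion step can be sketched as a weighted combination of the six per-AU similarities followed by a single accept/reject threshold. The equal weights and the threshold value here are illustrative assumptions; the slides do not specify the fusion rule.

```python
def fuse_similarities(similarities, weights=None):
    """Combine per-AU LCSS similarities into one score (equal weights assumed)."""
    sims = list(similarities)
    if weights is None:
        weights = [1.0 / len(sims)] * len(sims)
    return sum(w * s for w, s in zip(weights, sims))

def authenticate(similarities, threshold=0.7):
    """Accept the claimed identity when the fused score clears the threshold.

    The threshold value is illustrative; in practice it would be tuned on
    the enrollment data, e.g. via the cross-validation split.
    """
    return fuse_similarities(similarities) >= threshold
```

Per-AU weights could also be learned during enrollment so that the most discriminative movements contribute more to the decision.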

EXPERIMENT Preliminary experimental results: 30 samples of six AUs were acquired from each subject. Figure 4 shows the Lip Stretcher (LS) trajectories obtained from one subject. Figure 5 shows the combined similarity result for each subject: subject 1 was chosen as the probe model and was compared against itself using the 10 cross-validation samples (Figure 5). Figure 4. 30 samples of LS obtained from one subject. Figure 5. 10 combined similarity measurements for each subject.

CONCLUSION AND FUTURE WORK We proposed an authentication method based on spatio-temporal facial movements and showed how it can prevent playback attacks by employing 3D dynamic features. Our preliminary experiment showed promising results. Since the facial expressions in our experiment were produced deliberately, we believe the up slopes (neutral to maximal) of the facial movements may be similar across subjects, whereas the down slopes (maximal to neutral) are distinctive. Our future work involves developing an automated facial authentication system suitable for use in most security systems.

QUESTIONS?