Gesture Recognition Technique: A Review


Gesture Recognition Technique: A Review

Nishi Shah 1, Jignesh Patel 2
1 Student, Indus University, Ahmedabad
2 Assistant Professor, Indus University, Ahmedabad

Abstract: Gesture recognition means the identification and recognition of gestures originating from any type of bodily motion, most commonly from the face or hands. It is a technology that aims to interpret human gestures via mathematical algorithms, and is also referred to as a perceptual user interface (PUI). Gesture recognition enables humans to interact with a machine (human-machine interaction, HMI) naturally, without any physical contact with the machine. This paper focuses on the concept of gesture recognition and the different ways of performing it.

Keywords: Gesture Recognition, Wired Gloves, Stereo Camera, Depth-aware Cameras, Thermal Cameras, Appearance Based, 3D Model, Volumetric, Skeletal

I. INTRODUCTION
In gesture recognition technology, a camera reads the movements of the human body and communicates the data to a computer, which uses the gestures as input to control devices.

Figure 1: Working of Gesture Recognition

This technology has changed the way users interact with computers by eliminating input devices such as joysticks, mice and keyboards, allowing the body to give signals to the computer through gestures. The gestures of the body are read by a camera or by sensors attached to a device. In addition to hand and body movement, gesture recognition technology can also be used to read facial expressions, speech articulation (i.e., lip reading) and eye movements. Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. Current focus in this field includes emotion recognition from the face and hand gesture recognition. Many approaches use cameras and computer vision algorithms to interpret sign language.
However, the identification and recognition of posture, proxemics and other human behaviours are also subjects of gesture recognition techniques. DOI: 10.23883/IJRTER.2017.3184.YKLQF

II. GESTURE RECOGNITION
2.1 Block Diagram

Figure 2: Block Diagram of Gesture Recognition

A gesture recognition system is composed of several stages; the stages vary with the application, but a unified outline can be settled on as shown in the figure.

A. Data Acquisition: This step is responsible for collecting the input data, which are the hand, face or body gestures [1].
B. Gesture Modeling: This step fits and fuses the input gesture into the model used; it may require some pre-processing to ensure successful unification of the gesture [1].
C. Feature Extraction: After successful modeling of the input gesture, feature extraction should be straightforward, since the fitting is considered the most difficult part. The features can be hand/palm/fingertip locations, joint angles, or any emotional expression or body movement. The extracted features may be stored in the system as templates at the training stage; the system's limited memory should not be exceeded when storing the training data.
D. Recognition Stage: This is the final stage of a gesture system, in which the classifier assigns the input gesture to one of the feasible classes and the command/meaning of the gesture is declared [1].

2.2 Ways of Gesture Recognition
Gestures can be taken as input in the following ways:
A. Wired gloves
B. Stereo cameras
C. Depth-aware cameras
D. Thermal cameras
E. Radar

A. Wired Gloves
A data glove is an interactive device, resembling a glove worn on the hand, which facilitates tactile sensing and fine-motion control in robotics. Gloves can provide input to the computer about the position and rotation of the hands using magnetic or inertial tracking devices [6]. One design uses thin fiber-optic cables running down the back of each hand, each with a small crack in it; light is shone down the cable, so when the fingers are bent, light leaks out through the cracks.
Measuring the light loss gives an accurate reading of the hand pose. The data glove captures the position and movement of the fingers and wrist. It has up to 22 sensors, including three bend sensors on each finger and four abduction sensors, plus sensors measuring thumb crossover, palm arch, wrist flexion and wrist abduction. Once hand pose data has been captured by the glove, gestures can be recognized using a number of different techniques [7]. @IJRTER-2017, All Rights Reserved
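As a rough sketch of how a glove's raw bend-sensor readings could be mapped to joint angles, assume each sensor responds roughly linearly between a flat-hand reference reading and a fully-bent reference reading. All numbers below are hypothetical; real gloves ship with their own calibration routines.

```python
def calibrate(raw, raw_flat, raw_bent, angle_bent=90.0):
    """Map a raw bend-sensor reading to a joint angle in degrees,
    assuming a linear response between a flat-hand reference (0 deg)
    and a fully-bent reference (angle_bent degrees)."""
    span = raw_bent - raw_flat
    if span == 0:
        raise ValueError("flat and bent reference readings must differ")
    return (raw - raw_flat) / span * angle_bent

# Hypothetical sensor: reads 120 with a flat finger, 620 fully bent.
readings = [120, 370, 620]                      # raw optical-loss values
angles = [calibrate(r, 120, 620) for r in readings]
print(angles)                                   # [0.0, 45.0, 90.0]
```

A per-sensor calibration of this kind is typically collected once per user, since hand sizes and glove fit differ.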

Figure 3: Wired Gloves

B. Stereo Cameras
A stereo camera has two lenses spaced about the same distance apart as human eyes and takes two pictures at the same time. This allows the camera to simulate human binocular vision and therefore gives it the ability to capture three-dimensional images, a process known as stereo photography. Stereo cameras may be used for making stereo views and 3D pictures for movies, or for range imaging.

Figure 4: View from a Stereo Camera

C. Depth-aware Cameras
Using specialized cameras such as time-of-flight cameras, one can generate a depth map of what is being seen through the camera at short range, and use this data to approximate a 3D representation of the scene. These sensors supplement today's monocular RGB images with per-pixel depth information (often derived by projecting a pattern of infrared light into the scene). The technology enables enhanced object, body, facial and gesture recognition.

Figure 5: View from a normal camera (left); processed image from a depth-aware camera (right)
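Range imaging from a stereo pair rests on triangulation: the depth Z of a point follows from its disparity d (the pixel shift of the point between the two views) as Z = f * B / d, where f is the focal length in pixels and B the baseline between the two lenses. A minimal sketch, with illustrative camera values:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulate depth from a rectified stereo pair: Z = f * B / d.
    focal_px: focal length in pixels; baseline_m: lens separation in metres;
    disparity_px: pixel shift of the point between the two views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A hand seen with 40 px of disparity by a camera with an 800 px focal
# length and a 6.5 cm baseline (roughly the human inter-ocular distance):
z = depth_from_disparity(40, 800, 0.065)
print(round(z, 3))   # 1.3 (metres)
```

Note the inverse relationship: nearby hands produce large disparities and are measured precisely, while depth resolution degrades quadratically with distance.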

D. Thermal Cameras
An infrared camera is a device that detects infrared radiation (temperature) from the target object and converts it into an electronic signal to generate a thermal picture on a monitor or to make temperature calculations.

Figure 6: Thermal Camera
Figure 7: Image captured by a thermal camera

E. Radar
Radar accurately detects minute hand and finger movements, allowing devices to be controlled without any physical contact.

Figure 8: Watch using radar

2.2.1 Comparison between the different input cameras

Figure 9: Comparison between the outputs of different cameras

Table 1: Comparison of Input Methods

    Criterion                  Glove-Based Model   Vision-Based Model
    Cost                       Higher              Lower
    User comfort               Lower               Higher
    Hand anatomy restriction   High                Less
    Calibration                Critical            Not critical
    Portability                Lower               High

III. GESTURE RECOGNITION ALGORITHMS
Algorithms are mainly divided into two classes: 1) appearance based and 2) 3D model based.

3.1 Appearance-based Gesture Recognition
Appearance-based approaches are simpler and easier than 3D-model-based approaches because features are easier to extract from a 2D image. In appearance-based approaches, the visual appearance of the input hand image is modeled using features extracted from the image, which are then compared with features extracted from stored images. Appearance-based hand posture recognition systems aim to detect the hand shape configuration without explicit knowledge of a hand model [2]. Most appearance-based methods train over a large data set in order to find a mapping between their database and the inputs; they classify the hand shape according to similarities in image features [2]. Appearance-based methods treat the raw image as a single feature in a high-dimensional space. The most common detection method in this approach is to detect skin-colored regions in the image; however, this method is affected by changing illumination conditions and by background objects with skin-like color [4]. Recently, much research effort has gone into approaches that apply invariant features, such as the AdaBoost learning algorithm. Invariant features enable the identification of regions or points on a particular object, rather than modeling the entire object; with this method the problem of partial occlusion is overcome [4]. The major disadvantage of appearance-based methods is their need for large sets of training data.
However, it is very difficult to cover all possible configurations of the hand due to its highly articulated structure. This makes these methods dependent on specialized devices such as hand gloves [2].
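The skin-color detection step mentioned above can be illustrated with a simple explicit RGB rule. The thresholds below are one commonly used daylight heuristic, not the method of any particular paper surveyed here, and the sketch shares exactly the illumination sensitivity just described:

```python
def is_skin(r, g, b):
    """Classify an 8-bit RGB pixel as skin using a fixed threshold rule
    (a common daylight heuristic; fragile under changing illumination)."""
    return bool(r > 95 and g > 40 and b > 20 and
                max(r, g, b) - min(r, g, b) > 15 and
                abs(r - g) > 15 and r > g and r > b)

def skin_mask(image):
    """Binary mask for an image given as a row-major list of RGB tuples."""
    return [[is_skin(*px) for px in row] for row in image]

row = [(220, 170, 140), (60, 120, 200)]   # skin-toned pixel vs. blue pixel
print(skin_mask([row]))                    # [[True, False]]
```

Practical systems usually follow a mask like this with morphological clean-up and connected-component analysis to isolate the hand region, or replace the fixed thresholds with an adaptive color model.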

Figure 10: Binary silhouette (left) and contour (right) images represent typical input for appearance-based algorithms. They are compared with different hand templates and, if they match, the corresponding gesture is inferred.

3.2 3D Model-based Detection
The common characteristic of model-based methods is that they all try to minimize the difference between a predefined model and the features extracted from the input images. In order to measure the difference between the pose of the user's hand and the estimated configuration of the model, descriptive primitives must first be defined; examples of such primitives are fingertip locations, differences between contours, and silhouette features. Model-based approaches use a 3D model to describe and analyse the hand shape. These approaches search for the kinematic parameters that make a 2D projection of the 3D hand model correspond to edge images of the hand, but many hand features may be lost in the 2D projection. 3D models can be classified into volumetric and skeletal models. Volumetric models deal with the 3D visual appearance of the human hand and are widely used; the main problem with this modeling technique is that it deals with all the parameters of the hand, which have huge dimensionality. Skeletal models overcome this problem by limiting the set of parameters used to model the hand shape from the 3D structure.

3.2.1 Volumetric
Volumetric approaches have been heavily used in the computer animation industry and for computer vision purposes. The models are generally created from complicated 3D surfaces, such as NURBS or polygon meshes [6]. The drawback of this method is that it is very computationally intensive. For the moment, a more practical approach is to map simple primitive objects to the person's most important body parts (for example, cylinders for the arms and neck, a sphere for the head) and analyze the way these interact with each other.
Furthermore, some abstract structures such as super-quadrics and generalised cylinders may be even more suitable for approximating the body parts. The appeal of this approach is that the parameters for these objects are quite simple [6].

Figure 11: A real hand (left) is interpreted as a collection of vertices and lines in the 3D mesh version (right), and the software uses their relative positions and interactions in order to infer the gesture.
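The primitive-mapping idea above (cylinders for limbs, a sphere for the head) can be sketched with simple containment tests against such primitives; tests like these are the building blocks for reasoning about how the parts interact. All dimensions below are made up for illustration:

```python
import math

def in_sphere(p, centre, radius):
    """True if 3D point p lies inside the sphere (e.g. a head primitive)."""
    return math.dist(p, centre) <= radius

def in_cylinder(p, a, b, radius):
    """True if 3D point p lies inside the finite cylinder with axis a->b
    (e.g. an arm primitive)."""
    ax = [bi - ai for ai, bi in zip(a, b)]          # axis vector
    ap = [pi - ai for ai, pi in zip(a, p)]          # a -> p
    t = sum(c1 * c2 for c1, c2 in zip(ap, ax)) / sum(c * c for c in ax)
    if not 0.0 <= t <= 1.0:                         # beyond either end cap
        return False
    closest = [ai + t * c for ai, c in zip(a, ax)]  # projection onto axis
    return math.dist(p, closest) <= radius

# Head as a 12 cm sphere, forearm as a 6 cm cylinder (metres, hypothetical):
print(in_sphere((0, 1.7, 0), (0, 1.75, 0), 0.12))                    # True
print(in_cylinder((0.05, 1.0, 0), (0, 0.8, 0), (0, 1.2, 0), 0.06))   # True
```

With primitives in place, a "hand touches head" event, for instance, reduces to checking fingertip points against the head sphere.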

3.2.2 Skeletal
Instead of intensive processing of the 3D models and dealing with a lot of parameters, one can use a simplified set of joint-angle parameters along with segment lengths. This is known as a skeletal representation of the body: a virtual skeleton of the person is computed and parts of the body are mapped to a few segments. The analysis is done using the position and orientation of these segments and the relations between them (for example, the angle between joints and their relative positions or orientations) [6]. This three-dimensional hand model is based on a 3D kinematic hand model with considerable DOFs (degrees of freedom), and estimates the hand parameters by comparing the input images with the possible 2D appearances projected by the 3D hand model. This approach is ideal for realistic interactions in virtual environments [5]. A category of approaches utilizes 3D hand models for the detection of hands in images; one of the advantages of these methods is that they can achieve view-independent detection.

Advantages of using skeletal models:
- Algorithms are faster because only key parameters are analyzed.
- Pattern matching against a template database is possible.
- Using key points allows the detection program to focus on the significant parts of the body.

Figure 12: The skeletal version (right) effectively models the hand (left). It has fewer parameters than the volumetric version and is easier to compute, making it suitable for real-time gesture analysis systems.

IV. ALGORITHM TECHNIQUES FOR RECOGNIZING HAND POSTURES AND GESTURES
The raw data collected from a vision-based or glove-based data collection system must be analyzed to determine whether any postures or gestures have been recognized. The main algorithms are as follows.

A. Template Matching: The simplest methods for recognizing hand postures use template matching.
Template matching checks whether a given data record can be classified as a member of a set of stored data records. Recognizing hand postures by template matching has two parts [3]. The first is to create the templates by collecting data values for each posture in the training data set. The second is to find the posture template most closely matching the current data record by comparing the current sensor readings with the stored set.
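The two-part scheme just described can be sketched as a nearest-template classifier over raw sensor vectors. The posture names and readings below are hypothetical five-sensor bend values, averaged per posture at training time:

```python
import math

def classify(sample, templates, threshold=None):
    """Return the name of the stored posture template closest to the
    sample (Euclidean distance over the raw sensor vector), or None
    if no template lies within the optional distance threshold."""
    best_name, best_dist = None, float("inf")
    for name, template in templates.items():
        d = math.dist(sample, template)
        if d < best_dist:
            best_name, best_dist = name, d
    if threshold is not None and best_dist > threshold:
        return None          # reject: no stored posture is close enough
    return best_name

# Part 1: templates built from training data (hypothetical values).
templates = {
    "fist":  (0.9, 0.9, 0.9, 0.9, 0.9),
    "open":  (0.1, 0.1, 0.1, 0.1, 0.1),
    "point": (0.1, 0.9, 0.9, 0.9, 0.9),
}

# Part 2: match a live sensor reading against the stored set.
print(classify((0.15, 0.85, 0.92, 0.88, 0.9), templates))   # point
```

The threshold parameter matters in practice: without it the classifier always returns some posture, even for readings unlike anything seen in training.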

B. Feature Extraction and Analysis: Analysing the low-level information in the raw data to produce higher-level semantic information used to recognize postures and gestures is known as feature extraction and analysis. It is a robust way to recognize hand postures and gestures, and can be used to recognize both simple and complex postures and gestures [3].
C. Active Shape Models: Active shape models, or "smart snakes", are a technique for locating a feature within a still image [3]. A contour that is roughly the shape of the feature to be tracked is placed on the image and manipulated by moving it iteratively toward nearby edges, which deform the contour to fit the feature. The active shape model is applied to each frame, and the position of the feature in one frame is used as the initial approximation for the next.
D. Principal Component Analysis: Principal component analysis is a statistical technique for reducing the dimensionality of a data set with many interrelated variables while retaining the variation in the data. The data set is reduced by transforming the old data into a new set of variables ordered so that the first few contain most of the variation present in the original variables [3].
E. Linear Fingertip Models: This model assumes that most finger movements are linear and involve very little rotation. It uses only the fingertips as input data and represents each fingertip trajectory through space as a simple vector. Once the fingertips are detected, their trajectories are calculated. The postures themselves are modelled from a small training set by storing a motion code, the gesture name, and direction and magnitude vectors for each fingertip. A posture is recognized if all the direction and magnitude vectors match (within some threshold) a gesture record in the training set [3].

V. CONCLUSION
Hand gestures provide an interesting interaction paradigm for a variety of computer applications, and the importance of gesture recognition lies in building efficient human-machine interaction. A key issue in these recognition techniques is which technology to use for collecting raw data from bodily motion. Generally, two types of technology are available. The first is a glove input device, which measures a number of joint angles in the hand; the accuracy of a glove input device depends on the type of bend-sensor technology used, and usually the more accurate the glove, the more expensive it is. The second way of collecting raw data is computer vision: one or more cameras placed in the environment record the hand movement. With a posture- or gesture-based interface, the user does not need to wear a device or be physically attached to the computer. If vision-based solutions can overcome some of their difficulties and disadvantages, they appear to be the best choice for raw data collection. A number of recognition techniques are available, such as template matching, feature extraction and active shape models.

REFERENCES
[1] Kawade Sonam P, V. S. Ubale, "Gesture Recognition - A Review", IOSR Journal of Electronics and Communication Engineering (IOSR-JECE), ISSN: 2278-2834, ISBN: 2278-8735.
[2] Ayse Naz Erkan, "Model Based Three Dimensional Hand Posture Recognition for Hand Tracking", Graduate Program in Computer Engineering, Bogazici University, 2004.
[3] R. Pradipa, S. Kavitha, "Hand Gesture Recognition - Analysis of Various Techniques, Methods and Their Algorithms", International Journal of Innovative Research in Science, Engineering and Technology, Volume 3, Special Issue 3, March 2014.

[4] Noor Adnan Ibraheem, Rafiqul Zaman Khan, "Survey on Various Gesture Recognition Technologies and Techniques", International Journal of Computer Applications (0975-8887), Volume 50, No. 7, July 2012.
[5] Rama B. Dan, P. S. Mohod, "Survey on Hand Gesture Recognition Approaches", International Journal of Computer Science and Information Technologies (IJCSIT), Vol. 5 (2), 2014, 2050-2052.
[6] https://en.wikipedia.org/wiki/gesture_recognition
[7] http://whatis.techtarget.com/definition/gesture-recognition
[8] http://www.cyberglovesystems.com/