A Two-stage Scheme for Dynamic Hand Gesture Recognition
James P. Mammen, Subhasis Chaudhuri and Tushar Agrawal
Department of Electrical Engg., Indian Institute of Technology, Bombay, India-400076

Abstract

In this paper a scheme is presented for recognizing hand gestures using the output of a hand tracker which tracks a rectangular window bounding the hand region. A hierarchical scheme for dynamic hand gesture recognition is proposed, based on a state representation of the dominant feature trajectories that uses a priori knowledge of the way in which each gesture is performed.

1. Introduction

Hand gesture recognition is essential in a host of applications like haptic interfaces for large-screen multimedia and virtual reality environments [1], robot programming by demonstration, sign language recognition, human-computer interaction, telerobotic applications, etc. Previous attempts at recognizing similar gestures have used a variety of methods. Davis and Shah [2] used markers for tracking fingertips and used the fingertip trajectories for recognizing seven gestures. In [3], the hand's position in the image, its velocity, values obtained by eigen-analysis, etc. are used as features, and words from the ASL are recognized using an HMM-based scheme. In [4], the concept of motion energy is used to estimate the dominant motion of the hand, and the gestures are recognized by fitting finite state models of gestures. In [5], a gesture is classified as a sequence of postures using Principal Component Analysis and recognized using Finite State Machines. Multiple cameras are used in [6] to extract the 3D pose of the human body. Instead of using all the parameters describing the pose as features, the trajectory obtained by a projection into a 2D eigenspace is used for gesture recognition. In [7], each feature trajectory is split into sub-trajectories, and recognition is achieved by maximizing the probability of it being a particular gesture in the eigenspaces of each of these sub-trajectories.
An incremental recognition strategy that is an extension of the Condensation algorithm is proposed in [8] to recognize gestures based on the 2D hand trajectory. Gestures are modeled as velocity trajectories, and the Condensation algorithm is used to incrementally match the gesture models to the input data. A robust hand tracker proposed in [9] is used for tracking the hand in order to extract features which can be used for recognizing the gestures. At the first level of classification, the gesture to be recognized is assigned to one of five classes based on its dominant feature trajectories. At the next level, the gesture is recognized using a sequence of states obtained from the dominant feature trajectories. The representation of gestures as a sequence of states overcomes the problem of variation in the speed at which a gesture is performed, thus avoiding time warping of the data sequence.

2. Selection of Features

The proposed gesture recognition system is intended to be used as an interface for a telerobotic system. Dynamic manipulative hand gestures being most suited to such an application, we select the ten gestures listed in Table 1 to form our gesture vocabulary. The hand tracker mentioned in the previous section tracks the change in position and shape of a rectangular window bounding the palm region of the hand performing the gesture. Hence the coordinates of the centroid of the hand region should serve as good features. The change in area of the hand region can capture the variation in the shape of the palm region; moreover, it is also indicative of the variation in hand position along the axis of the camera.
Thus the feature vector for the n-th frame in the video sequence is selected to be

f(n) = [X(n) Y(n) A(n)]^T.

In order to have values which are meaningful from one video sequence to another, we scale the x- and y-coordinates of the centroid by the width of the hand region in the start position, and the area by the initial area, i.e. A(1) = 1. This provides invariance to magnification due to the distance from the camera at which the gesture is performed, and also with respect to changes in hand size from person to person. Note that the features are not made invariant to the starting position. This is not required in our scheme, as the representation of trajectories as a sequence of states does not depend on it. Thus we have

X(n) = [m_10(n)/m_00(n)] / w(1)
Y(n) = [m_01(n)/m_00(n)] / w(1)      (1)
A(n) = m_00(n)/m_00(1)

where w(1) is the width of the hand region in the first frame,
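The moment-based features above can be sketched as follows. The code operates on binary hand-region masks; the helper names and the particular definition of the initial hand width are our own assumptions, not from the paper:

```python
import numpy as np

def raw_moments(mask):
    """Zeroth and first spatial moments of a binary hand-region mask."""
    ys, xs = np.nonzero(mask)
    m00 = float(len(xs))            # area of the region
    m10 = float(xs.sum())           # sum of x-coordinates
    m01 = float(ys.sum())           # sum of y-coordinates
    return m00, m10, m01

def features(masks):
    """Feature trajectory f(n) = [X(n), Y(n), A(n)]^T for a mask sequence.

    Centroid coordinates are scaled by the initial hand width and the
    area by the initial area, so that A(1) = 1, as in Eq. (1)."""
    m00_1, _, _ = raw_moments(masks[0])
    xs0 = np.nonzero(masks[0])[1]
    w1 = float(xs0.max() - xs0.min() + 1)   # initial hand width (assumed definition)
    out = []
    for m in masks:
        m00, m10, m01 = raw_moments(m)
        out.append([(m10 / m00) / w1,       # X(n)
                    (m01 / m00) / w1,       # Y(n)
                    m00 / m00_1])           # A(n)
    return np.array(out)
```

Because every coordinate is divided by a quantity measured in the first frame, the resulting trajectories are comparable across camera distances and hand sizes, as the text argues.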
Figure 1: Smoothing of the feature trajectories for the Move Right gesture: (a) features as obtained from the tracker, and (b) feature trajectories after smoothing.

Figure 2: Smoothed feature trajectories of various gestures: (a) Move Up, (b) Move Down, (c) Go Away, (d) Come Closer.

where m_pq(n) is the pq-th moment of the hand region obtained by tracking in the n-th frame. The raw data obtained by tracking may be noisy, and hence we smooth the data using an averaging filter. Fig. 1(a) shows the raw features as obtained from the tracker for the Move Right (RIGHT) gesture, clearly depicting their noisy nature. The result of smoothing with an averaging filter of length 7 is given in Fig. 1(b). The feature trajectories of each unknown gesture are smoothed in this manner before any further stages of processing in order to eliminate noise. Smoothed feature trajectories of gestures performed by a single user are shown in Fig. 2.

3. Initial Classification of Gestures based on the Dominant Features

We observe that although three features have been selected, the information conveyed by a gesture is not captured by all the features simultaneously. For example, in the case of the Move Right (RIGHT) gesture, the information is contained in the horizontal motion, which is captured by the Y(n) feature in our case. The other two features do not show meaningful variations in this case. We call the features which capture the information conveyed by a gesture the dominant features corresponding to that gesture. By defining S_X, S_Y and S_A to be the sets of gestures whose dominant features are X(n), Y(n) and A(n), respectively, we obtain the Venn diagram representation of Fig. 3. It shows the relationship between the gestures and their dominant features. For example, the Clockwise (CW) and Counterclockwise (CCW) gestures, where both X(n) and Y(n) convey information, belong to both sets S_X and S_Y. The seven non-overlapping regions in Fig. 3
show that with three features we may have at most 7 classes, based on which of the features are dominant. Our set of gestures does not cover all seven classes; the subset (S_X ∩ S_A) ∪ (S_X ∩ S_Y ∩ S_A) is empty in our case. For the Move Left (LEFT), Move Right (RIGHT), Move Up (UP), Move Down (DOWN), Move Counterclockwise (CCW) and Move Clockwise (CW) gestures, either X(n) or Y(n) or both are the dominant features. In the Push (PUSH) and Pull (PULL) gestures, A(n) is the dominant feature, as the change in the area of the hand conveys the information. The remaining two gestures, viz. Go Away (AWAY) and Come Closer (CLOSER), show very little motion and small variation in area. Hence, at the first level, both are included in the same class, where the range of variation in X(n) and Y(n) is small and the range of variation of A(n) is comparable to that of X(n) and/or Y(n). Thus in the first stage of classification we have 5 classes, each containing 2 gestures, as shown in Fig. 4. As the dominant feature or features can be determined from the range of variation of each feature trajectory, we obtain the decision-tree based scheme shown in Fig. 5 for the first level of classification. In terms of the features, this assumes the following form. First we smooth each feature trajectory as mentioned earlier in order to suppress tracking noise. Thereafter we obtain the range of variation of feature i as Δ_i = max(i(n)) − min(i(n)), where i = X, Y, A'. Here A'(n) = C·A(n), where C is a factor used to make the range of change in area comparable to that of change in position; in this study C is set to a fixed empirical value. Now we assign the gesture to be recognized to one of the five classes
in the first stage as follows. We define f_max = arg max_i Δ_i, and the classification proceeds as follows.

- Class I: if (f_max = X) & (Δ_X ≥ τ1) & (Δ_Y < Δ_X/2)
- Class II: if (f_max = Y) & (Δ_Y ≥ τ1) & (Δ_X < Δ_Y/2)
- Class III: if ((f_max = X) & (Δ_X > τ2) & (Δ_Y > Δ_X/2)) OR ((f_max = Y) & (Δ_Y > τ2) & (Δ_X > Δ_Y/2))
- Class IV: if (f_max = A') & (max(A'(n))/min(A'(n)) exceeds a preset ratio)
- Class V: if none of the above

where & is the logical AND operator and τ1 and τ2 are appropriate thresholds, chosen empirically in this study, for determining whether the motion is significant. A threshold of τ1 means that we consider a motion greater than τ1 times the initial hand width to be large motion. The threshold τ2 for Class III is smaller than τ1 because, when there is significant motion in both the horizontal and vertical directions, the range of variation in each direction is naturally reduced compared to the case in which there is large motion in one direction alone.

Figure 3: Gesture sets based on dominant features.

Figure 4: Initial classification of the gesture set based on dominant features (Class I: UP, DOWN; Class II: LEFT, RIGHT; Class III: CCW, CW; Class IV: PUSH, PULL; Class V: CLOSER, AWAY).

Figure 5: Decision tree for the first stage of classification (Is the variation in X large and the variation in Y small? → Class I; Is the variation in Y large and the variation in X small? → Class II; Are the variations in X and Y both large? → Class III; Is A the feature with maximum variation, with a large max-to-min ratio? → Class IV; otherwise → Class V).

4. Trajectory Based Classification

We need to analyse the dominant feature trajectories within each class in order to recognize the gestures. We observe that each gesture consists of portions of motion in which the feature dynamics are similar and distinct. As the information characterizing the gesture is contained in the sequence of these portions of similar dynamics, we propose a definition of a gesture which incorporates this information.
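The first-stage decision rule can be sketched as below. The threshold values are illustrative placeholders (the paper selects its own empirically), and the function name is ours:

```python
import numpy as np

def first_stage_class(X, Y, A, C=1.0, tau1=0.5, tau2=0.25, ratio=1.5):
    """Assign a smoothed feature trajectory to one of classes I-V.

    C, tau1, tau2 and ratio are placeholder values; the actual
    thresholds in the paper are chosen empirically."""
    Ap = C * np.asarray(A)               # area scaled to be comparable to position
    dX, dY, dA = np.ptp(X), np.ptp(Y), np.ptp(Ap)   # ranges of variation, max - min
    ranges = {'X': dX, 'Y': dY, 'A': dA}
    fmax = max(ranges, key=ranges.get)   # feature with the maximum variation
    if fmax == 'X' and dX >= tau1 and dY < dX / 2:
        return 'I'                       # UP / DOWN
    if fmax == 'Y' and dY >= tau1 and dX < dY / 2:
        return 'II'                      # LEFT / RIGHT
    if (fmax == 'X' and dX > tau2 and dY > dX / 2) or \
       (fmax == 'Y' and dY > tau2 and dX > dY / 2):
        return 'III'                     # CCW / CW
    if fmax == 'A' and Ap.max() / Ap.min() > ratio:
        return 'IV'                      # PUSH / PULL
    return 'V'                           # CLOSER / AWAY
```

A trajectory that fails the Class I test because both directions vary falls through to the Class III test, mirroring the decision tree of Fig. 5.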
We define a gesture to be a sequence of states {s_1, ..., s_N}, where the states correspond to portions of the dominant feature trajectory having similar dynamics. For each class, a sequence of states can be defined such that it characterizes and distinguishes between the gestures in that class. The resulting sequence of states obtained from the trajectory can be used for gesture recognition as explained below.

Class I: This class consists of the UP and DOWN gestures. Due to the presence of only one dominant feature, viz. X(n), the states are scalars in this case. For the UP gesture, initially the hand shows a small downward movement followed by a long upward motion, as shown in Fig. 6. Hence, ideally, X(n) would initially increase from time index (frame) 1 up to some index k and then decrease all the way until the end of the sequence, say m. Hence the trajectory should be split into two states representing this. We select the states to be the change in the feature over portions of uniform dynamics, i.e. portions which consist of motion in the same direction. Thus, ideally, this gesture would result
Figure 6: Obtaining states from the dominant trajectory.

in the states s_1 = X(k) − X(1) and s_2 = X(m) − X(k). The frame k corresponds to a maximum, where the direction of motion changes and hence the sign of the derivative of X(n) changes. In order to determine whether it is the UP gesture, we use its most dominant characteristic, i.e. the upward motion, resulting in the recognition criterion Σ_i s_i < 0. The DOWN gesture is similar except that the direction is reversed. Thus, we may summarize the criterion for classification as follows: if Σ_i s_i < 0: UP; else: DOWN.

Class II: The gestures in this class are LEFT and RIGHT. This class and its gestures are analogous to those of Class I; the only difference is that instead of X(n), Y(n) is the dominant feature here. As a result, the same strategy as in Class I can be used: if Σ_i s_i < 0: RIGHT; else: LEFT.

Class III: The gestures in this class are CCW and CW. As this class has two dominant features, X(n) and Y(n), the states are constructed using both of them. The method of selecting the states is depicted in Fig. 7(a). Both trajectories are split individually at points of maxima or minima, and then both are merged together to obtain states for the combined 2D trajectory. Since our purpose is to differentiate between the directions of rotation of the hand, we select the directions of change of X(n) and Y(n) as the elements of the 2D state vectors. An increase is denoted by 1 and a decrease by −1. Thus, for the trajectories shown in Fig. 7, the states would be s_1 = [1 −1]^T, s_2 = [−1 −1]^T, s_3 = [−1 1]^T and s_4 = [1 1]^T. In the case of the CCW and CW gestures, ideally we would expect a sequence of 4 states due to the hand moving in a circle and returning back to the same position.
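The splitting of a dominant trajectory into states at changes in the direction of motion, and the Class I decision rule, can be sketched as follows (function names and the direction convention in the comment are ours):

```python
def trajectory_states(x):
    """Split a 1-D trajectory into monotone runs and return one scalar
    state per run: the net change of the feature over that run."""
    x = list(map(float, x))
    states, start, cur = [], 0, 0
    for i in range(1, len(x)):
        s = (x[i] > x[i - 1]) - (x[i] < x[i - 1])   # sign of the step
        if s == 0:
            continue                                # flat step: same run
        if cur == 0:
            cur = s                                 # first observed direction
        elif s != cur:                              # direction of motion changed
            states.append(x[i - 1] - x[start])      # state = net change over run
            start, cur = i - 1, s
    states.append(x[-1] - x[start])                 # final run
    return states

def classify_class_I(x):
    """UP if the sum of the states is negative (upward motion decreases
    X(n) under the text's image-coordinate convention), else DOWN."""
    return 'UP' if sum(trajectory_states(x)) < 0 else 'DOWN'
```

For an ideal UP trajectory (short rise, long fall) this yields exactly the two states s_1 = X(k) − X(1) and s_2 = X(m) − X(k) described above.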
Figure 7: Recognition of gestures in Class III: (a) merging for 2D states, and (b) mapping from states to directions.

However, the hand may rotate a bit more or less, resulting in the number of states N_s not being exactly equal to 4 in some cases. Hence, for recognizing the gesture, we use a scheme based on determining the direction of rotation. We define a mapping f: S → D from the 2D states to directions, as shown in Fig. 7(b), which assigns the direction labels 1, 2, 3 and 4 to the four possible states in the order in which they occur during a clockwise rotation. For a clockwise rotation of the hand, the resultant sequence f(s_1), f(s_2), ..., f(s_{N_s}) would then be a subsequence of the periodic sequence 1, 2, 3, 4, 1, 2, 3, 4, ... The direction of rotation is given by the sign of the sum of the cyclic differences α_N = Σ_{i=2}^{N_s} f(s_i) ⊖ f(s_{i−1}), where ⊖ denotes a K-cyclic difference mapping defined as follows: for all a, b ∈ {1, ..., K},

a ⊖ b = a − b          if −K/2 < a − b < K/2,
a ⊖ b = a − b + K      if a − b ≤ −K/2,
a ⊖ b = a − b − K      if a − b ≥ K/2.

In our case K = 4. Thus we recognize the gesture as follows: if α_N > 0: CW; else: CCW.

Class IV: This class has two gestures, PUSH and PULL, which have only one dominant feature, A'(n). Both would ideally result in a single state in which the area either increases or decreases monotonically. Thus, similar to Class I, we define the state to be s_1 = A'(m) − A'(1). Hence, to recognize the gestures, we check whether the number of states N_s is 1 and differentiate between them based on the direction of change, i.e. increase or decrease of the area. The resultant scheme is summarized below.
If N_s = 1: if s_1 > 0: PUSH; else: PULL.

Class V: Both the gestures of this class, AWAY and CLOSER, are similar in the sense that both have A'(n) as a dominant feature showing similar variation. Hence this feature cannot be used to discriminate between them. However, X(n) shows very different characteristics for the two. For the CLOSER gesture, since there is no horizontal motion of the hand, X(n) remains almost constant, whereas for the AWAY gesture there is a distinct horizontal movement, first towards the left and then towards the right. Y(n) shows similar behavior for both gestures. Hence we use only X(n) to form the states for the AWAY gesture, using the same method as used for Classes I and II, which should ideally result in 2 states. The recognition criteria are: if Δ_X < Δ_Y/2: CLOSER; else if N_s = 2 and s_1 > 0: AWAY. A gesture which does not fall into any of these categories is reported as unrecognized.

5. Experimental Results

The hierarchical gesture recognition scheme based on dominant features was tested on a data set of gestures performed by different users. Each gesture sequence is of a different length, depending on the time taken to perform the gesture. The gestures were captured in a natural office environment with a cluttered background. Of the test gestures, two were falsely recognized; this was due to the manner in which those gestures were performed, with significant motion in both the x- and y-directions. Six gestures were unrecognized, which was found to be due to false skin-color detection resulting in faulty tracking; this has nothing to do with the accuracy of the proposed recognition scheme. All the remaining gestures were correctly recognized. Table 1 summarizes the results.

6. Conclusions

In this paper, a technique for gesture recognition using the data available from a hand tracker developed earlier has been proposed.
We use a hierarchical scheme in which the first stage classifies the gesture based on its dominant features and the second stage recognizes it based on states describing the dominant feature trajectories. Owing to the state-based approach, our recognition technique is unaffected by the rate at which gestures are performed and by the initial position of the hand in the image. The representation of gestures as a sequence of states, obtained by splitting the dominant feature trajectories into perceptually important segments, results in a fast and simple recognition scheme.

Table 1: Summary of experimental results (instances and true, false and unrecognized counts for the LEFT, RIGHT, UP, DOWN, CCW, CW, PUSH, PULL, AWAY and CLOSER gestures).

References

[1] V. Pavlovic, R. Sharma and T. S. Huang, "Visual interpretation of hand gestures for human-computer interaction: A review," IEEE Trans. on PAMI, vol. 19, no. 7, 1997.
[2] J. Davis and M. Shah, "Visual gesture recognition," IEE Proc. - Vision, Image, Signal Processing, April 1994.
[3] T. Starner, J. Weaver and A. Pentland, "Real-time American sign language recognition using desk and wearable computer based video," IEEE Trans. on PAMI, Dec 1998.
[4] M. Yeasin and S. Chaudhuri, "Visual understanding of dynamic hand gestures," Pattern Recognition, vol. 33, pp. 1805-1817, 2000.
[5] J. Martin and J. L. Crowley, "An appearance-based approach to gesture recognition," in Proc. of ICIAP, Sep 1997.
[6] H. Ohno and M. Yamamoto, "Gesture recognition using character recognition technique on 2D eigenspace," in Proc. of IEEE ICCV, Sep 1999.
[7] D. Hall, J. Martin and J. L. Crowley, "Statistical recognition of parameter trajectories for hand gestures and face expressions," in Proc. of ECCV, 1998.
[8] M. J. Black and A. D. Jepson, "Recognizing temporal trajectories using the condensation algorithm," in Proc. of IEEE Int. Conf. on Automatic Face and Gesture Recognition, 1998.
[9] James P. Mammen, S.
Chaudhuri and Tushar Agrawal, "Simultaneous tracking of both hands by estimation of erroneous observations," in Proc. of BMVC, Manchester, Sep. 2001.
More informationRECOGNITION OF ISOLATED COMPLEX MONO- AND BI-MANUAL 3D HAND GESTURES USING DISCRETE IOHMM
RECOGNITION OF ISOLATED COMPLEX MONO- AND BI-MANUAL 3D HAND GESTURES USING DISCRETE IOHMM Agnès Just, Sébastien Marcel Institut Dalle Molle d Intelligence Artificielle Perceptive (IDIAP) CH-1920 Martigny
More informationCOMPARATIVE STUDY OF IMAGE EDGE DETECTION ALGORITHMS
COMPARATIVE STUDY OF IMAGE EDGE DETECTION ALGORITHMS Shubham Saini 1, Bhavesh Kasliwal 2, Shraey Bhatia 3 1 Student, School of Computing Science and Engineering, Vellore Institute of Technology, India,
More informationKeyVideoObjectPlaneSelectionbyMPEG-7VisualShapeDescriptor for Summarization and Recognition of Hand Gestures
KeyVideoObjectPlaneSelectionbyMPEG-7VisualShapeDescriptor for Summarization and Recognition of Hand Gestures M.K. Bhuyan D. Ghosh P.K. Bora ECEDepartment ECEDepartment ECEDepartment IIT Guwahati IIT Guwahati
More informationA Robust Method for Circle / Ellipse Extraction Based Canny Edge Detection
International Journal of Research Studies in Science, Engineering and Technology Volume 2, Issue 5, May 2015, PP 49-57 ISSN 2349-4751 (Print) & ISSN 2349-476X (Online) A Robust Method for Circle / Ellipse
More informationFine Classification of Unconstrained Handwritten Persian/Arabic Numerals by Removing Confusion amongst Similar Classes
2009 10th International Conference on Document Analysis and Recognition Fine Classification of Unconstrained Handwritten Persian/Arabic Numerals by Removing Confusion amongst Similar Classes Alireza Alaei
More informationObject Detection in a Fixed Frame
Object Detection in a Fixed Frame Debjyoti Bagchi 1, Dipaneeta Roy Chowdhury 2 and Satarupa Das 2 1 Assistant Professor, Calcutta Institute Of Engineering And Management Kolkata, India, 2 Calcutta Institute
More informationIndian Multi-Script Full Pin-code String Recognition for Postal Automation
2009 10th International Conference on Document Analysis and Recognition Indian Multi-Script Full Pin-code String Recognition for Postal Automation U. Pal 1, R. K. Roy 1, K. Roy 2 and F. Kimura 3 1 Computer
More informationRecognition of Finger Pointing Direction Using Color Clustering and Image Segmentation
September 14-17, 2013, Nagoya University, Nagoya, Japan Recognition of Finger Pointing Direction Using Color Clustering and Image Segmentation Hidetsugu Asano 1, Takeshi Nagayasu 2, Tatsuya Orimo 1, Kenji
More informationCOMPARATIVE ANALYSIS OF EYE DETECTION AND TRACKING ALGORITHMS FOR SURVEILLANCE
Volume 7 No. 22 207, 7-75 ISSN: 3-8080 (printed version); ISSN: 34-3395 (on-line version) url: http://www.ijpam.eu ijpam.eu COMPARATIVE ANALYSIS OF EYE DETECTION AND TRACKING ALGORITHMS FOR SURVEILLANCE
More informationA Novel Criterion Function in Feature Evaluation. Application to the Classification of Corks.
A Novel Criterion Function in Feature Evaluation. Application to the Classification of Corks. X. Lladó, J. Martí, J. Freixenet, Ll. Pacheco Computer Vision and Robotics Group Institute of Informatics and
More informationA System for Joining and Recognition of Broken Bangla Numerals for Indian Postal Automation
A System for Joining and Recognition of Broken Bangla Numerals for Indian Postal Automation K. Roy, U. Pal and B. B. Chaudhuri CVPR Unit; Indian Statistical Institute, Kolkata-108; India umapada@isical.ac.in
More informationDynamic skin detection in color images for sign language recognition
Dynamic skin detection in color images for sign language recognition Michal Kawulok Institute of Computer Science, Silesian University of Technology, Akademicka 16, 44-100 Gliwice, Poland michal.kawulok@polsl.pl
More informationEffects Of Shadow On Canny Edge Detection through a camera
1523 Effects Of Shadow On Canny Edge Detection through a camera Srajit Mehrotra Shadow causes errors in computer vision as it is difficult to detect objects that are under the influence of shadows. Shadow
More informationThe SIFT (Scale Invariant Feature
The SIFT (Scale Invariant Feature Transform) Detector and Descriptor developed by David Lowe University of British Columbia Initial paper ICCV 1999 Newer journal paper IJCV 2004 Review: Matt Brown s Canonical
More informationRESTORATION OF DEGRADED DOCUMENTS USING IMAGE BINARIZATION TECHNIQUE
RESTORATION OF DEGRADED DOCUMENTS USING IMAGE BINARIZATION TECHNIQUE K. Kaviya Selvi 1 and R. S. Sabeenian 2 1 Department of Electronics and Communication Engineering, Communication Systems, Sona College
More informationHand shape based gesture recognition in hardware
Available online at www.scholarsresearchlibrary.com Archives of Applied Science Research, 2013, 5 (3):261-269 (http://scholarsresearchlibrary.com/archive.html) ISSN 0975-508X CODEN (USA) AASRC9 Hand shape
More informationAN EXAMINING FACE RECOGNITION BY LOCAL DIRECTIONAL NUMBER PATTERN (Image Processing)
AN EXAMINING FACE RECOGNITION BY LOCAL DIRECTIONAL NUMBER PATTERN (Image Processing) J.Nithya 1, P.Sathyasutha2 1,2 Assistant Professor,Gnanamani College of Engineering, Namakkal, Tamil Nadu, India ABSTRACT
More informationA Real-time Application of Hand Gesture Recognition in Video Games
A Real-time Application of Hand Gesture Recognition in Video Games Ying Li yil026@cs.ucsd.edu March 10, 2011 Abstract Hand gesture recognition has many useful applications, and one of those is video game.
More informationN.Priya. Keywords Compass mask, Threshold, Morphological Operators, Statistical Measures, Text extraction
Volume, Issue 8, August ISSN: 77 8X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com A Combined Edge-Based Text
More informationHMM and IOHMM for the Recognition of Mono- and Bi-Manual 3D Hand Gestures
HMM and IOHMM for the Recognition of Mono- and Bi-Manual 3D Hand Gestures Agnès Just 1, Olivier Bernier 2, Sébastien Marcel 1 Institut Dalle Molle d Intelligence Artificielle Perceptive (IDIAP) 1 CH-1920
More informationDetermination of 3-D Image Viewpoint Using Modified Nearest Feature Line Method in Its Eigenspace Domain
Determination of 3-D Image Viewpoint Using Modified Nearest Feature Line Method in Its Eigenspace Domain LINA +, BENYAMIN KUSUMOPUTRO ++ + Faculty of Information Technology Tarumanagara University Jl.
More informationFace Detection Using Color Based Segmentation and Morphological Processing A Case Study
Face Detection Using Color Based Segmentation and Morphological Processing A Case Study Dr. Arti Khaparde*, Sowmya Reddy.Y Swetha Ravipudi *Professor of ECE, Bharath Institute of Science and Technology
More informationQMUL-ACTIVA: Person Runs detection for the TRECVID Surveillance Event Detection task
QMUL-ACTIVA: Person Runs detection for the TRECVID Surveillance Event Detection task Fahad Daniyal and Andrea Cavallaro Queen Mary University of London Mile End Road, London E1 4NS (United Kingdom) {fahad.daniyal,andrea.cavallaro}@eecs.qmul.ac.uk
More informationScale Invariant Segment Detection and Tracking
Scale Invariant Segment Detection and Tracking Amaury Nègre 1, James L. Crowley 1, and Christian Laugier 1 INRIA, Grenoble, France firstname.lastname@inrialpes.fr Abstract. This paper presents a new feature
More informationIndian Institute of Technology Kanpur District : Kanpur Team number: 2. Prof. A. K. Chaturvedi SANKET
CSIDC 2003 Interim Report Country : India University : Indian Institute of Technology Kanpur District : Kanpur 208016 Team number: 2 Mentor: Prof. A. K. Chaturvedi akc@iitk.ac.in +91-512-2597613 SANKET:
More informationFault-Tolerant Wormhole Routing Algorithms in Meshes in the Presence of Concave Faults
Fault-Tolerant Wormhole Routing Algorithms in Meshes in the Presence of Concave Faults Seungjin Park Jong-Hoon Youn Bella Bose Dept. of Computer Science Dept. of Computer Science Dept. of Computer Science
More informationAn Edge-Based Approach to Motion Detection*
An Edge-Based Approach to Motion Detection* Angel D. Sappa and Fadi Dornaika Computer Vison Center Edifici O Campus UAB 08193 Barcelona, Spain {sappa, dornaika}@cvc.uab.es Abstract. This paper presents
More informationLinear Discriminant Analysis in Ottoman Alphabet Character Recognition
Linear Discriminant Analysis in Ottoman Alphabet Character Recognition ZEYNEB KURT, H. IREM TURKMEN, M. ELIF KARSLIGIL Department of Computer Engineering, Yildiz Technical University, 34349 Besiktas /
More informationDETECTION OF CHANGES IN SURVEILLANCE VIDEOS. Longin Jan Latecki, Xiangdong Wen, and Nilesh Ghubade
DETECTION OF CHANGES IN SURVEILLANCE VIDEOS Longin Jan Latecki, Xiangdong Wen, and Nilesh Ghubade CIS Dept. Dept. of Mathematics CIS Dept. Temple University Temple University Temple University Philadelphia,
More informationAnalysis of Image and Video Using Color, Texture and Shape Features for Object Identification
IOSR Journal of Computer Engineering (IOSR-JCE) e-issn: 2278-0661,p-ISSN: 2278-8727, Volume 16, Issue 6, Ver. VI (Nov Dec. 2014), PP 29-33 Analysis of Image and Video Using Color, Texture and Shape Features
More informationMATLAB Based Interactive Music Player using XBOX Kinect
1 MATLAB Based Interactive Music Player using XBOX Kinect EN.600.461 Final Project MATLAB Based Interactive Music Player using XBOX Kinect Gowtham G. Piyush R. Ashish K. (ggarime1, proutra1, akumar34)@jhu.edu
More informationAn Efficient Character Segmentation Based on VNP Algorithm
Research Journal of Applied Sciences, Engineering and Technology 4(24): 5438-5442, 2012 ISSN: 2040-7467 Maxwell Scientific organization, 2012 Submitted: March 18, 2012 Accepted: April 14, 2012 Published:
More informationImage Matching Using Run-Length Feature
Image Matching Using Run-Length Feature Yung-Kuan Chan and Chin-Chen Chang Department of Computer Science and Information Engineering National Chung Cheng University, Chiayi, Taiwan, 621, R.O.C. E-mail:{chan,
More informationLayout Segmentation of Scanned Newspaper Documents
, pp-05-10 Layout Segmentation of Scanned Newspaper Documents A.Bandyopadhyay, A. Ganguly and U.Pal CVPR Unit, Indian Statistical Institute 203 B T Road, Kolkata, India. Abstract: Layout segmentation algorithms
More informationEye Detection by Haar wavelets and cascaded Support Vector Machine
Eye Detection by Haar wavelets and cascaded Support Vector Machine Vishal Agrawal B.Tech 4th Year Guide: Simant Dubey / Amitabha Mukherjee Dept of Computer Science and Engineering IIT Kanpur - 208 016
More informationLogical Templates for Feature Extraction in Fingerprint Images
Logical Templates for Feature Extraction in Fingerprint Images Bir Bhanu, Michael Boshra and Xuejun Tan Center for Research in Intelligent Systems University of Califomia, Riverside, CA 9252 1, USA Email:
More informationClustering and Visualisation of Data
Clustering and Visualisation of Data Hiroshi Shimodaira January-March 28 Cluster analysis aims to partition a data set into meaningful or useful groups, based on distances between data points. In some
More informationExploring Curve Fitting for Fingers in Egocentric Images
Exploring Curve Fitting for Fingers in Egocentric Images Akanksha Saran Robotics Institute, Carnegie Mellon University 16-811: Math Fundamentals for Robotics Final Project Report Email: asaran@andrew.cmu.edu
More informationHuman Action Recognition Using Independent Component Analysis
Human Action Recognition Using Independent Component Analysis Masaki Yamazaki, Yen-Wei Chen and Gang Xu Department of Media echnology Ritsumeikan University 1-1-1 Nojihigashi, Kusatsu, Shiga, 525-8577,
More informationGesture Recognition using Temporal Templates with disparity information
8- MVA7 IAPR Conference on Machine Vision Applications, May 6-8, 7, Tokyo, JAPAN Gesture Recognition using Temporal Templates with disparity information Kazunori Onoguchi and Masaaki Sato Hirosaki University
More informationAircraft Tracking Based on KLT Feature Tracker and Image Modeling
Aircraft Tracking Based on KLT Feature Tracker and Image Modeling Khawar Ali, Shoab A. Khan, and Usman Akram Computer Engineering Department, College of Electrical & Mechanical Engineering, National University
More informationKinematics of Machines Prof. A. K. Mallik Department of Mechanical Engineering Indian Institute of Technology, Kanpur. Module 10 Lecture 1
Kinematics of Machines Prof. A. K. Mallik Department of Mechanical Engineering Indian Institute of Technology, Kanpur Module 10 Lecture 1 So far, in this course we have discussed planar linkages, which
More informationObject Recognition 1
Object Recognition 1 The Margaret Thatcher Illusion by Peter Thompson Lighting affects appearance The Margaret Thatcher Illusion by Peter Thompson 2 Recognition Problems Face Detection What is it? Object
More informationObject Recognition 1
Object Recognition 1 2 Lighting affects appearance The Margaret Thatcher Illusion by Peter Thompson 3 The Margaret Thatcher Illusion by Peter Thompson 4 Recognition Problems What is it? Object detection
More informationFeatures Points. Andrea Torsello DAIS Università Ca Foscari via Torino 155, Mestre (VE)
Features Points Andrea Torsello DAIS Università Ca Foscari via Torino 155, 30172 Mestre (VE) Finding Corners Edge detectors perform poorly at corners. Corners provide repeatable points for matching, so
More information3D Hand Pose Estimation by Finding Appearance-Based Matches in a Large Database of Training Views
3D Hand Pose Estimation by Finding Appearance-Based Matches in a Large Database of Training Views Vassilis Athitsos and Stan Sclaroff Computer Science Department Boston University 111 Cummington Street
More information