HUMAN-ROBOT INTERACTION WITH DEPTH-BASED GESTURE RECOGNITION


Gabriele Pozzato, Stefano Michieletto, Emanuele Menegatti
IAS-Lab, University of Padova, Padova, Italy
{michieletto,

Fabio Dominio, Giulio Marin, Ludovico Minto, Simone Milani, Pietro Zanuttigh
Multimedia Technology and Telecommunication Lab, University of Padova, Padova, Italy
{dominiof,maringiu,mintolud,

Abstract: Human-robot interaction is a very heterogeneous research field that is attracting growing interest. A key building block for proper interaction between humans and robots is the automatic recognition and interpretation of the gestures performed by the user. Consumer depth cameras (like the MS Kinect) have made an accurate and reliable interpretation of human gestures possible. In this paper a novel framework for gesture-based human-robot interaction is proposed. Both hand gestures and full-body gestures are recognized through the use of depth information, and a human-robot interaction scheme based on these gestures is proposed. In order to assess the feasibility of the proposed scheme, the paper presents a simple application based on the well-known rock-paper-scissors game.

Keywords: Human-Robot Interaction, Gesture Recognition, Depth, Kinect, SVM

I. INTRODUCTION

Human-Robot Interaction (HRI) is a field of investigation that has recently been gaining particular interest, involving different research areas from engineering to psychology [1]. The development of robotics, artificial intelligence and human-computer interaction systems has brought new relevant issues to the attention of researchers, such as understanding how people perceive the interaction with robots [2], what robots should look like [3] and in what manners they could be useful [4]. In this work we focus on the communication interface between humans and robots. The aim is to apply novel approaches for gesture recognition based on the analysis of depth information, in order to allow users to interact naturally with robots [5].
This is a significant and sometimes undervalued perspective on human-robot interaction. Hand gesture recognition using vision-based approaches is an intriguing topic that has attracted large interest in recent years [6]. It can be applied in many different fields, including robotics and in particular human-robot interaction. Human-to-machine communication can be established by using a predefined set of poses or gestures recognized by a vision system, and a precise meaning (or a command the robot will perform) can be associated with each sign. This makes the interplay with robots more intuitive and can raise human-robot interaction toward the level of a human-human one. In this scenario the robot also plays a different role: during social interactions it must take an active part in order to really engage the human user. For example, by recognizing body language, a robot can understand the mood of the human user and adapt its behavior accordingly [7]. Until a few years ago, most approaches were based on videos acquired by standard cameras [6], [8]; however, relying on a simple 2D representation is a great limitation, since such a representation is not always sufficient to capture the complex movements and inter-occlusions of the hand shape. The recent introduction of low-cost consumer depth cameras, e.g., Time-of-Flight (ToF) cameras and the MS Kinect [9], has opened the way to a new family of algorithms that exploit depth information for hand gesture recognition. The improved accuracy in modelling three-dimensional body poses has removed the need for physical devices such as gloves, keyboards or other controllers in human-robot interaction. HRI systems based on depth cameras apply machine-learning techniques to a set of relevant features extracted from the depth data. The approach in [10] computes a descriptor from silhouettes and cell occupancy features; this is then fed to a classifier based on action graphs.
Both the approaches in [11] and [12] use volumetric shape descriptors processed by a classifier based on Support Vector Machines (SVM). Another possibility is to exploit the histograms of the distances of hand edge points from the hand center in order to recognize the gestures [13], [14], [15]. Different types of features can also be combined together [16].

The paper is organized in the following way. Section II presents the proposed hand gesture recognition strategy, while Section III describes the body-pose detection algorithm. These tools are integrated in a human-robot interaction scheme, which is described in Section IV. A case study referring to a simple rock-paper-scissors game is presented in Section V. Finally, Section VI draws the conclusions.

II. HAND GESTURE RECOGNITION FROM DEPTH DATA

Fig. 1 shows a general overview of the employed hand gesture recognition approach, which is based on the method proposed in [16]. In the first step, the hand is extracted

from the depth data acquired by the Kinect. Then, a set of relevant features is extracted from the hand shape. Finally, a multi-class Support Vector Machine classifier is applied to the extracted features in order to recognize the performed gesture.

Fig. 1. Pipeline of the proposed approach: extraction of the hand region from color and depth data, circle fitting on the palm and PCA, subdivision into finger, palm and wrist (discarded) parts, distance and curvature feature extraction, and multi-class SVM classification against the training data, producing the recognized gesture.

A. Hand Recognition

In the first step the hand is extracted from the color and depth data provided by the Kinect. Calibration information is used to obtain a set of 3D points X_i from the depth data. The analysis starts by extracting the point in the depth map closest to the user (X_c). The points that have a depth value close to that of X_c and a 3D distance from it within a threshold of about 20 cm are extracted and used as a first estimate of the candidate hand region. A further check on hand color and size is then performed to avoid recognizing objects positioned at the same distance as the hand. At this point the highest-density region is found by applying a Gaussian filter with a large standard deviation to the mask representing the extracted points. Finally, starting from the highest-density point, a circle is fitted on the hand mask such that its area roughly approximates the area of the palm (see Fig. 2). Principal Component Analysis (PCA) is then applied to the detected points in order to estimate the hand orientation. Finally, the hand points are divided into the palm region (set P, i.e., the points inside the circle), the fingers (set F, i.e., the points outside the circle in the direction of the axis pointing to the fingertips found by the PCA), and the wrist region (set W, i.e., the points outside the circle in the opposite direction w.r.t. the fingertips).
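The first segmentation steps just described (closest point, depth thresholding, Gaussian filtering of the candidate mask) can be sketched as below. This is a minimal NumPy/SciPy illustration on a synthetic depth map, with hypothetical helper names; the color and size checks and the circle fitting are omitted:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def candidate_hand_mask(depth, thresh_m=0.20):
    """Rough hand segmentation: take the point closest to the sensor
    (X_c) and keep the points whose depth lies within ~20 cm of it."""
    valid = depth > 0                       # 0 marks missing depth samples
    d_min = depth[valid].min()              # depth of the closest point X_c
    mask = valid & (depth - d_min < thresh_m)
    return mask, d_min

def palm_center(mask, sigma=15.0):
    """Highest-density point of the mask after a wide Gaussian filter;
    used as the seed for the palm circle fitting."""
    density = gaussian_filter(mask.astype(float), sigma=sigma)
    return np.unravel_index(np.argmax(density), density.shape)
```

In a real pipeline the mask would be built from the calibrated 3D points rather than raw depth values, but the structure of the computation is the same.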
B. Feature Extraction

Two different types of features are computed from the data extracted in the previous step. The first set of features consists of a histogram of the distances of the hand points from the hand center. The center of the circle is used as reference point and a histogram is built, as described in [11], by considering for each angular direction the maximum of the point distances from the center, i.e.:

L(θ_q) = max_{X_i ∈ I(θ_q)} d_{X_i}   (1)

where I(θ_q) is the angular sector of the hand corresponding to the direction θ_q and d_{X_i} is the distance between point X_i and the hand center.

Fig. 2. Extraction of the hand: a) acquired color image; b) acquired depth map; c) extracted hand (the closest sample is depicted in green); d) output of the Gaussian filter applied on the mask corresponding to H, with the maximum highlighted in red; e) circle fitted on the hand; f) palm (blue), finger (red) and wrist (green) regions subdivision.

The computed histogram is compared with a set of reference histograms L^r_g(θ), one for each gesture g. The maximum of the correlation between the current histogram L(θ_q) and a shifted version of the reference histogram L^r_g(θ) is used to compute the precise hand orientation and refine the result of the PCA. A set of angular regions I(θ_{g,j}) associated with each finger j = 1, ..., 5 in gesture g is defined on the histogram, and the maximum inside each region, normalized by the middle finger's length, is selected as feature value:

f^d_{g,j} = max_{θ ∈ I(θ_{g,j})} L_g(θ) / L_max   (2)

where g = 1, ..., G for an alphabet of G gestures. Note that the computation is performed for each candidate gesture, thus obtaining a different feature value for each of them. The second feature set is based on the curvature of the hand contour. This descriptor is based on a multi-scale integral operator [17], [18] and is computed as described in [16] and [15].
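The per-sector maximum of Eq. (1) can be sketched as follows; this is a simplified planar version (the paper works on 3D points projected on the palm plane), with a hypothetical function name:

```python
import numpy as np

def distance_histogram(points, center, n_bins=180):
    """For each angular sector θ_q, keep the maximum distance of the
    hand samples from the palm center (Eq. 1). `points` is an (N, 2)
    array of 2D hand samples, `center` the palm circle center."""
    delta = points - center
    angles = np.arctan2(delta[:, 1], delta[:, 0])          # in (-pi, pi]
    dists = np.linalg.norm(delta, axis=1)
    bins = ((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.zeros(n_bins)
    np.maximum.at(hist, bins, dists)                       # max per sector
    return hist
```

The resulting histogram is what would then be correlated against the reference histograms L^r_g(θ) to refine the orientation and extract the f^d_{g,j} values.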
The algorithm takes as input the edges of the palm and finger regions and the binary mask representing the hand region on the depth map. A set of circular masks with increasing radius centered on each edge sample is built (we used S = 25 masks with radius varying from 0.5 cm to 5 cm; notice that the radius corresponds to the scale level at which the computation is performed). For each circular mask, the ratio between the number of samples falling inside the hand shape and the size of the mask is computed. The values of the ratios represent the local curvature of the edge and range from 0 for a convex section of the edge to 1 for a concave one, with 0.5 corresponding to a straight edge. The [0, 1] interval is then quantized into B bins and the feature values f^c_{b,s} are computed by counting the number of edge samples having a curvature value inside bin b at scale level s.
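A dense-grid approximation of this integral operator could look like the sketch below (hypothetical function name, for illustration only; radii are given in pixels rather than centimeters):

```python
import numpy as np

def curvature_features(mask, edge_points, radii, n_bins=10):
    """For every contour sample and every scale (radius), compute the
    fraction of the circular neighbourhood falling inside the hand mask
    (~0 convex, ~0.5 straight, ~1 concave) and accumulate the values
    into a joint (bins x scales) histogram, normalized by the number
    of edge samples."""
    h, w = mask.shape
    ys, xs = np.mgrid[0:h, 0:w]
    feats = np.zeros((n_bins, len(radii)))
    for s, r in enumerate(radii):
        for (py, px) in edge_points:
            circle = (ys - py) ** 2 + (xs - px) ** 2 <= r ** 2
            ratio = mask[circle].mean()       # fraction inside the hand
            b = min(int(ratio * n_bins), n_bins - 1)
            feats[b, s] += 1
    return feats / len(edge_points)
```

An efficient implementation would precompute the circular neighbourhoods or use integral images, but the descriptor it produces is the same.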

Fig. 3. Feature vectors extracted from the two devices: distance features for each gesture (Gestures 1-4) and curvature features for each scale level (levels 1-5).

d_O = [q^1_1 q^2_1 q^3_1 q^4_1 ... q^1_N q^2_N q^3_N q^4_N],   (4)

d_TOT = [d^1_P d^1_O ... d^N_P d^N_O].   (5)

The curvature values are finally normalized by the number of edge samples and the feature vector F^c with B × S entries is built. The multi-scale descriptor is made of B × S entries C_i, i = 1, ..., B × S, where B is the number of bins and S is the number of employed scale levels.

C. Hand Gesture Classification

The feature extraction approach provides two feature vectors, each describing relevant properties of the hand. In order to recognize the performed gestures, the two vectors are concatenated and sent to a multi-class Support Vector Machine classifier. The target is to assign the feature vectors to G classes (with G = 3 for the sample rock-paper-scissors game of the experimental results) corresponding to the various gestures of the considered database. A multi-class SVM classifier based on the one-against-one approach has been used, i.e., a set of G(G−1)/2 binary SVM classifiers test each class against each other and each output counts as a vote for a certain gesture. The gesture with the maximum number of votes is selected as the output of the classifier. In particular, we used the SVM implementation in the LIBSVM package [19], together with a non-linear Gaussian Radial Basis Function (RBF) kernel tuned by means of grid search and cross-validation on a sample training set.

III. BODY GESTURE RECOGNITION FROM SKELETAL TRACKING

In this work skeletal tracking capabilities come from a software development kit released by OpenNI. In particular, the skeletal tracking algorithm is implemented in a freeware middleware called NITE, built on top of the OpenNI SDK.
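The classification stage described above can be sketched as follows. The paper uses LIBSVM directly; scikit-learn's SVC wraps the same library and applies the same one-against-one multi-class scheme internally, so a training sketch on synthetic stand-ins for the concatenated distance + curvature vectors could be:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic, well-separated stand-ins for the concatenated feature vectors.
rng = np.random.default_rng(0)
G = 3                                     # rock, paper, scissors
y = np.repeat(np.arange(G), 30)           # 30 samples per gesture
X = rng.normal(scale=0.5, size=(90, 8)) + y[:, None]

# RBF kernel tuned by grid search with cross-validation, as in the paper.
# SVC trains the G(G-1)/2 one-against-one binary classifiers internally.
grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": [1, 10, 100], "gamma": ["scale", 0.1]}, cv=5)
grid.fit(X, y)
```

The real features, parameter grids and training set differ, of course; only the structure of the tuning and voting scheme is shown here.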
NITE uses the information provided by an RGB-D sensor to estimate the positions of several joints of the human body at a frame rate of 30 fps, a good trade-off between speed and precision. Unlike approaches based only on RGB images, these sensors also rely on depth data, which makes the tracker more robust to illumination changes. The skeleton information provided by the NITE middleware consists of N = 15 joints from head to feet. Each joint is described by its position (a point in 3D space) and its orientation (a quaternion). On these data we perform two kinds of normalization: first we scale the joint positions in order to normalize the skeleton size, thus achieving invariance among different people; then we normalize every feature to zero mean and unit variance. Starting from the normalized data, we extract three kinds of descriptors: a first skeleton descriptor (d_P) is made of the joint positions concatenated one after the other, the second one (d_O) contains the normalized joint orientations, and finally we also consider the concatenation of both positions and orientations (d_TOT) of each joint:

d_P = [x_1 y_1 z_1 ... x_N y_N z_N],   (3)

These descriptors are provided as features for the classification process. A Gaussian Mixture Model (GMM)-based classifier has been used in order to compare the set of predefined body gestures to the ones performed by the user. The described technique could be extended by considering a sequence of descriptors equally spaced in time, in order to apply the resulting features to human activity recognition [20], [21] and broaden the interaction capabilities of our framework.

IV. ROBOT CONTROL AND SYSTEM INTEGRATION

The information provided by both the hand and skeletal gesture recognition systems is analyzed in order to obtain a proper robot motion.
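A minimal sketch of how the three descriptors of Eqs. (3)-(5) could be assembled from the tracked joints (hypothetical function; only the size normalization is shown, while the dataset-wide zero-mean, unit-variance normalization is omitted):

```python
import numpy as np

def skeleton_descriptors(positions, orientations):
    """positions: (N, 3) joint coordinates; orientations: (N, 4)
    quaternions (N = 15 for the NITE skeleton). Positions are rescaled
    by the overall skeleton extent for invariance across people."""
    scale = np.linalg.norm(positions.max(axis=0) - positions.min(axis=0))
    d_P = (positions / scale).ravel()                     # Eq. (3)
    d_O = orientations.ravel()                            # Eq. (4)
    # Eq. (5): position and orientation concatenated joint by joint
    d_TOT = np.concatenate(
        [np.concatenate([p, q])
         for p, q in zip(positions / scale, orientations)])
    return d_P, d_O, d_TOT
```

For N = 15 joints this yields descriptors of length 45, 60 and 105 respectively, which are then fed to the GMM-based classifier.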
Robot behavior strongly depends on the application, but the complete framework is separated into different layers so that the basic system can easily be extended to new tasks. We created three independent layers: gesture recognition, decision making and robot control. These layers are connected to each other by means of a widely used robotics framework: the Robot Operating System. The Robot Operating System (ROS) [22] is an open-source meta-operating system that provides, together with standard services peculiar to an operating system (e.g., hardware abstraction, low-level device control, message passing between processes, package management), tools and libraries useful for typical robotics applications (e.g., navigation, motion planning, image and 3D data processing). The primary goal of ROS is to support the reuse of code in robotics research; therefore, it presents a simplified but lightweight design with clean functional interfaces. During a human-robot interaction task, the three layers play different roles. The gesture recognition layer deals with the acquisition of information from a camera in order to recognize the gestures performed by users. The decision making layer works at a higher level of abstraction: information is processed by ROS and classified by an AI algorithm in order to make the robot take its decision. Finally, the robot controller layer is responsible for motion: it physically modifies the behavior of the robot by sending commands to the servomotors. Even though the three layers are strictly independent, also in terms of implementation, they can interact by sending and receiving messages through ROS. This means that we can use different robots, AI algorithms or vision systems while maintaining the same interface.

V. CASE STUDY: A SIMPLE ROCK-PAPER-SCISSORS GAME

In order to test the developed framework a very simple yet popular game has been used, namely rock-paper-scissors. Users are asked to play against a robotic opponent over several rounds.
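The decoupling between the three layers can be illustrated with plain message passing. The sketch below mimics ROS topics with in-process queues; the topic names, message payloads and the trivial "play the winning counter-move" logic are purely illustrative (the actual system uses ROS topics and the GMM-based AI of Section V):

```python
from queue import Queue

# Two "topics" connecting the three layers.
gesture_topic, command_topic = Queue(), Queue()

def gesture_recognition(gesture):
    """Layer 1: perception. Publishes the recognized human gesture."""
    gesture_topic.put({"gesture": gesture})

def decision_making():
    """Layer 2: AI. Consumes a gesture, publishes the robot's move."""
    msg = gesture_topic.get()
    beats = {"rock": "paper", "paper": "scissors", "scissors": "rock"}
    command_topic.put({"move": beats[msg["gesture"]]})

def robot_controller():
    """Layer 3: control. Consumes a command and would drive the motors."""
    return command_topic.get()["move"]

gesture_recognition("rock")
decision_making()
move = robot_controller()   # the controller would now actuate this gesture
```

Because each layer only reads and writes messages, any of them can be swapped (different robot, different AI, different vision system) without touching the other two, which is exactly the property the ROS-based design provides.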
At each round both players (human and robot) have to perform one of the 3 gestures depicted in Fig. 4. The winner is chosen according to the well-known rules of the game. Game settings and moves can be selected by human users without any remote controls or keyboards: only the

Fig. 4. Sample depth images for the 3 gestures: rock, paper and scissors.

Fig. 5. Poses used to select the number of games: a) corresponds to 3 games, b) to 2 games and c) to 1 game. Each game is composed of three rounds.

Fig. 6. LEGO Mindstorms NXT performing the three gestures: a) rock, b) paper and c) scissors.

gesture recognition system is involved, so that the interaction is as natural and user-friendly as possible. Users select the number of games to play by assuming one of the poses shown in Fig. 5, which are recognized by the system; each pose corresponds to a number from 1 to 3. Each game is composed of three rounds. Once a round starts, the hand gesture recognition algorithm looks at the move played by the human. At the same time, the decision making system chooses the robot gesture by using an Artificial Intelligence (AI) algorithm we developed [23]. The algorithm is based on a Gaussian Mixture Model (GMM): the idea underlying this approach is to use the history of the three previous rounds in order to forecast the next human move. This is a very important aspect of natural human-robot interaction; indeed, people are stimulated by competing with a skilled opponent. Finally, the robot controller translates high-level commands into motor movements in order to make a LEGO Mindstorms NXT robotic platform perform the proper gesture (Fig. 6). Notice that the approach of Section II, which has been developed for more complex settings with a larger number of gestures, performs very well in this sample application. In order to evaluate its effectiveness we asked 14 different people to perform the 3 gestures 10 times each, for a total of 14 × 3 × 10 = 420 different executions of the gestures. Then we performed a set of tests, each time training the system on 13 people and leaving out one person for the testing.
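The leave-one-person-out protocol just described can be sketched as follows (hypothetical function on synthetic features; the real system uses the distance and curvature features of Section II):

```python
import numpy as np
from sklearn.svm import SVC

def leave_one_person_out(features, labels, person_ids):
    """Train on all people but one, test on the left-out person,
    and average the accuracies over all people."""
    accuracies = []
    for person in np.unique(person_ids):
        test = person_ids == person
        clf = SVC(kernel="rbf").fit(features[~test], labels[~test])
        accuracies.append(clf.score(features[test], labels[test]))
    return float(np.mean(accuracies))
```

This measures exactly the realistic scenario of the paper: the classifier never sees gestures from the person it is tested on.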
In this way (similar to a cross-validation approach) the performance is evaluated for each person without any training on gestures performed by the tested user, as will happen in a real setting when a new user starts interacting with the robot. The proposed approach achieved an average accuracy of 99.28% with this testing protocol.

VI. CONCLUSIONS

In this paper a framework for gesture-based human-robot interaction has been proposed. Two different approaches for the exploitation of depth data from low-cost cameras have been presented, for the recognition of both hand and full-body gestures. Depth-based gesture recognition schemes allow a more natural interaction with the robot, and a sample application based on the simple rock-paper-scissors game has been presented. Further research will be devoted to the application of the proposed framework in more complex scenarios.

ACKNOWLEDGEMENTS

Thanks to the Seed project Robotic 3D video (R3D) of the Department of Information Engineering and to the University project 3D Touch-less Interaction of the University of Padova for supporting this work.

REFERENCES

[1] Cory D. Kidd, Human-robot interaction: Recent experiments and future work, in Invited paper and talk at the University of Pennsylvania Department of Communications Digital Media Conference, 2003.
[2] Cory D. Kidd and Cynthia Breazeal, Effect of a robot on user perceptions, in Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2004), 2004, vol. 4.
[3] Cynthia Breazeal and Brian Scassellati, How to build robots that make friends and influence people, in Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 99), 1999, vol. 2.
[4] Rachel Gockley, Allison Bruce, Jodi Forlizzi, Marek Michalowski, Anne Mundell, Stephanie Rosenthal, Brennan Sellner, Reid Simmons, Kevin Snipes, Alan C. Schultz, et al., Designing robots for long-term social interaction, in Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2005), 2005.
[5] Juan Pablo Wachs, Mathias Kölsch, Helman Stern, and Yael Edan, Vision-based hand-gesture applications, Communications of the ACM, vol. 54, no. 2.
[6] Juan Pablo Wachs, Mathias Kölsch, Helman Stern, and Yael Edan, Vision-based hand-gesture applications, Commun. ACM, vol. 54, no. 2, Feb.
[7] Derek McColl and Goldie Nejat, Affect detection from body language during social HRI, in Proc. IEEE RO-MAN, 2012.
[8] D.I. Kosmopoulos, A. Doulamis, and N. Doulamis, Gesture-based video summarization, in Proc. IEEE International Conference on Image Processing (ICIP 2005), 2005, vol. 3.
[9] C. Dal Mutto, P. Zanuttigh, and G. M. Cortelazzo, Time-of-Flight Cameras and Microsoft Kinect, SpringerBriefs in Electrical and Computer Engineering, Springer.
[10] Alexey Kurakin, Zhengyou Zhang, and Zicheng Liu, A real-time system for dynamic hand gesture recognition with a depth sensor, in Proc. of EUSIPCO.
[11] P. Suryanarayan, A. Subramanian, and D. Mandalapu, Dynamic hand pose recognition using depth data, in Proc. of ICPR, Aug. 2010.
[12] Jiang Wang, Zicheng Liu, Jan Chorowski, Zhuoyuan Chen, and Ying Wu, Robust 3D action recognition with random occupancy patterns, in Proc. of ECCV, 2012.

[13] Zhou Ren, Junsong Yuan, and Zhengyou Zhang, Robust hand gesture recognition based on finger-earth mover's distance with a commodity depth camera, in Proc. of ACM Conference on Multimedia, 2011.
[14] Zhou Ren, Jingjing Meng, and Junsong Yuan, Depth camera based hand gesture recognition and its applications in human-computer interaction, in Proc. of ICICS, 2011.
[15] Fabio Dominio, Mauro Donadeo, Giulio Marin, Pietro Zanuttigh, and Guido Maria Cortelazzo, Hand gesture recognition with depth data, in Proceedings of the 4th ACM/IEEE International Workshop on Analysis and Retrieval of Tracked Events and Motion in Imagery Stream, ACM, 2013.
[16] Fabio Dominio, Mauro Donadeo, and Pietro Zanuttigh, Combining multiple depth-based descriptors for hand gesture recognition, Pattern Recognition Letters.
[17] S. Manay, D. Cremers, Byung-Woo Hong, A.J. Yezzi, and S. Soatto, Integral invariants for shape matching, IEEE Trans. on PAMI, vol. 28, no. 10.
[18] Neeraj Kumar, Peter N. Belhumeur, Arijit Biswas, David W. Jacobs, W. John Kress, Ida Lopez, and João V. B. Soares, Leafsnap: A computer vision system for automatic plant species identification, in Proc. of ECCV, October.
[19] Chih-Chung Chang and Chih-Jen Lin, LIBSVM: A library for support vector machines, ACM Trans. on Intelligent Systems and Technology, vol. 2, pp. 27:1-27:27.
[20] Matteo Munaro, Gioia Ballin, Stefano Michieletto, and Emanuele Menegatti, 3D flow estimation for human action recognition from colored point clouds, Biologically Inspired Cognitive Architectures.
[21] Matteo Munaro, Stefano Michieletto, and Emanuele Menegatti, An evaluation of 3D motion flow and 3D pose estimation for human action recognition, in RSS Workshops: RGB-D: Advanced Reasoning with Depth Cameras.
[22] Morgan Quigley, Ken Conley, Brian Gerkey, Josh Faust, Tully Foote, Jeremy Leibs, Rob Wheeler, and Andrew Y. Ng, ROS: an open-source Robot Operating System, in ICRA Workshop on Open Source Software, 2009, vol. 3.
[23] Gabriele Pozzato, Stefano Michieletto, and Emanuele Menegatti, Towards smart robots: rock-paper-scissors gaming versus human players, in Proceedings of the Workshop Popularize Artificial Intelligence (PAI2013), 2013.


More information

Color Local Texture Features Based Face Recognition

Color Local Texture Features Based Face Recognition Color Local Texture Features Based Face Recognition Priyanka V. Bankar Department of Electronics and Communication Engineering SKN Sinhgad College of Engineering, Korti, Pandharpur, Maharashtra, India

More information

Human Arm Simulation Using Kinect

Human Arm Simulation Using Kinect Human Arm Simulation Using Kinect Nikunj Agarwal 1, Priya Bajaj 2, Jayesh Pal 3, Piyush Kushwaha 4 1,2,3,4 Student, Computer Science & Engineering Department, IMS Engineering College, Ghaziabad, Uttar

More information

A Real-Time Hand Gesture Recognition for Dynamic Applications

A Real-Time Hand Gesture Recognition for Dynamic Applications e-issn 2455 1392 Volume 2 Issue 2, February 2016 pp. 41-45 Scientific Journal Impact Factor : 3.468 http://www.ijcter.com A Real-Time Hand Gesture Recognition for Dynamic Applications Aishwarya Mandlik

More information

Physical Privacy Interfaces for Remote Presence Systems

Physical Privacy Interfaces for Remote Presence Systems Physical Privacy Interfaces for Remote Presence Systems Student Suzanne Dazo Berea College Berea, KY, USA suzanne_dazo14@berea.edu Authors Mentor Dr. Cindy Grimm Oregon State University Corvallis, OR,

More information

Visual Perception Sensors

Visual Perception Sensors G. Glaser Visual Perception Sensors 1 / 27 MIN Faculty Department of Informatics Visual Perception Sensors Depth Determination Gerrit Glaser University of Hamburg Faculty of Mathematics, Informatics and

More information

Content based Image Retrieval Using Multichannel Feature Extraction Techniques

Content based Image Retrieval Using Multichannel Feature Extraction Techniques ISSN 2395-1621 Content based Image Retrieval Using Multichannel Feature Extraction Techniques #1 Pooja P. Patil1, #2 Prof. B.H. Thombare 1 patilpoojapandit@gmail.com #1 M.E. Student, Computer Engineering

More information

MIME: A Gesture-Driven Computer Interface

MIME: A Gesture-Driven Computer Interface MIME: A Gesture-Driven Computer Interface Daniel Heckenberg a and Brian C. Lovell b a Department of Computer Science and Electrical Engineering, The University of Queensland, Brisbane, Australia, 4072

More information

Accelerometer Gesture Recognition

Accelerometer Gesture Recognition Accelerometer Gesture Recognition Michael Xie xie@cs.stanford.edu David Pan napdivad@stanford.edu December 12, 2014 Abstract Our goal is to make gesture-based input for smartphones and smartwatches accurate

More information

Implementation of Kinetic Typography by Motion Recognition Sensor

Implementation of Kinetic Typography by Motion Recognition Sensor Implementation of Kinetic Typography by Motion Recognition Sensor Sooyeon Lim, Sangwook Kim Department of Digital Media Art, Kyungpook National University, Korea School of Computer Science and Engineering,

More information

Arm-hand Action Recognition Based on 3D Skeleton Joints Ling RUI 1, Shi-wei MA 1,a, *, Jia-rui WEN 1 and Li-na LIU 1,2

Arm-hand Action Recognition Based on 3D Skeleton Joints Ling RUI 1, Shi-wei MA 1,a, *, Jia-rui WEN 1 and Li-na LIU 1,2 1 International Conference on Control and Automation (ICCA 1) ISBN: 97-1-9-39- Arm-hand Action Recognition Based on 3D Skeleton Joints Ling RUI 1, Shi-wei MA 1,a, *, Jia-rui WEN 1 and Li-na LIU 1, 1 School

More information

Online Kinect Handwritten Digit Recognition Based on Dynamic Time Warping and Support Vector Machine

Online Kinect Handwritten Digit Recognition Based on Dynamic Time Warping and Support Vector Machine Journal of Information & Computational Science 12:1 (215) 413 422 January 1, 215 Available at http://www.joics.com Online Kinect Handwritten Digit Recognition Based on Dynamic Time Warping and Support

More information

Human Motion Detection and Tracking for Video Surveillance

Human Motion Detection and Tracking for Video Surveillance Human Motion Detection and Tracking for Video Surveillance Prithviraj Banerjee and Somnath Sengupta Department of Electronics and Electrical Communication Engineering Indian Institute of Technology, Kharagpur,

More information

Accurate 3D Face and Body Modeling from a Single Fixed Kinect

Accurate 3D Face and Body Modeling from a Single Fixed Kinect Accurate 3D Face and Body Modeling from a Single Fixed Kinect Ruizhe Wang*, Matthias Hernandez*, Jongmoo Choi, Gérard Medioni Computer Vision Lab, IRIS University of Southern California Abstract In this

More information

IMPROVED FACE RECOGNITION USING ICP TECHNIQUES INCAMERA SURVEILLANCE SYSTEMS. Kirthiga, M.E-Communication system, PREC, Thanjavur

IMPROVED FACE RECOGNITION USING ICP TECHNIQUES INCAMERA SURVEILLANCE SYSTEMS. Kirthiga, M.E-Communication system, PREC, Thanjavur IMPROVED FACE RECOGNITION USING ICP TECHNIQUES INCAMERA SURVEILLANCE SYSTEMS Kirthiga, M.E-Communication system, PREC, Thanjavur R.Kannan,Assistant professor,prec Abstract: Face Recognition is important

More information

Hand Gesture Extraction by Active Shape Models

Hand Gesture Extraction by Active Shape Models Hand Gesture Extraction by Active Shape Models Nianjun Liu, Brian C. Lovell School of Information Technology and Electrical Engineering The University of Queensland, Brisbane 4072, Australia National ICT

More information

Traffic Signs Recognition using HP and HOG Descriptors Combined to MLP and SVM Classifiers

Traffic Signs Recognition using HP and HOG Descriptors Combined to MLP and SVM Classifiers Traffic Signs Recognition using HP and HOG Descriptors Combined to MLP and SVM Classifiers A. Salhi, B. Minaoui, M. Fakir, H. Chakib, H. Grimech Faculty of science and Technology Sultan Moulay Slimane

More information

MULTI ORIENTATION PERFORMANCE OF FEATURE EXTRACTION FOR HUMAN HEAD RECOGNITION

MULTI ORIENTATION PERFORMANCE OF FEATURE EXTRACTION FOR HUMAN HEAD RECOGNITION MULTI ORIENTATION PERFORMANCE OF FEATURE EXTRACTION FOR HUMAN HEAD RECOGNITION Panca Mudjirahardjo, Rahmadwati, Nanang Sulistiyanto and R. Arief Setyawan Department of Electrical Engineering, Faculty of

More information

Cognitive Algorithms for 3D Video Acquisition and Transmission

Cognitive Algorithms for 3D Video Acquisition and Transmission Department of Information Engineering, University of Padova Cognitive Algorithms for 3D Video Acquisition and Transmission Simone Milani Politecnico di Milano / University of Padova milani@elet.polimi.it,

More information

Recognition of Finger Pointing Direction Using Color Clustering and Image Segmentation

Recognition of Finger Pointing Direction Using Color Clustering and Image Segmentation September 14-17, 2013, Nagoya University, Nagoya, Japan Recognition of Finger Pointing Direction Using Color Clustering and Image Segmentation Hidetsugu Asano 1, Takeshi Nagayasu 2, Tatsuya Orimo 1, Kenji

More information

Human Activity Recognition Based on Weighted Sum Method and Combination of Feature Extraction Methods

Human Activity Recognition Based on Weighted Sum Method and Combination of Feature Extraction Methods International Journal of Intelligent Information Systems 2018; 7(1): 9-14 http://www.sciencepublishinggroup.com/j/ijiis doi: 10.11648/j.ijiis.20180701.13 ISSN: 2328-7675 (Print); ISSN: 2328-7683 (Online)

More information

Feature Point Extraction using 3D Separability Filter for Finger Shape Recognition

Feature Point Extraction using 3D Separability Filter for Finger Shape Recognition Feature Point Extraction using 3D Separability Filter for Finger Shape Recognition Ryoma Yataka, Lincon Sales de Souza, and Kazuhiro Fukui Graduate School of Systems and Information Engineering, University

More information

Combined Shape Analysis of Human Poses and Motion Units for Action Segmentation and Recognition

Combined Shape Analysis of Human Poses and Motion Units for Action Segmentation and Recognition Combined Shape Analysis of Human Poses and Motion Units for Action Segmentation and Recognition Maxime Devanne 1,2, Hazem Wannous 1, Stefano Berretti 2, Pietro Pala 2, Mohamed Daoudi 1, and Alberto Del

More information

TRANSPARENT OBJECT DETECTION USING REGIONS WITH CONVOLUTIONAL NEURAL NETWORK

TRANSPARENT OBJECT DETECTION USING REGIONS WITH CONVOLUTIONAL NEURAL NETWORK TRANSPARENT OBJECT DETECTION USING REGIONS WITH CONVOLUTIONAL NEURAL NETWORK 1 Po-Jen Lai ( 賴柏任 ), 2 Chiou-Shann Fuh ( 傅楸善 ) 1 Dept. of Electrical Engineering, National Taiwan University, Taiwan 2 Dept.

More information

Preprocessing of a Fingerprint Image Captured with a Mobile Camera

Preprocessing of a Fingerprint Image Captured with a Mobile Camera Preprocessing of a Fingerprint Image Captured with a Mobile Camera Chulhan Lee 1, Sanghoon Lee 1,JaihieKim 1, and Sung-Jae Kim 2 1 Biometrics Engineering Research Center, Department of Electrical and Electronic

More information

Robot localization method based on visual features and their geometric relationship

Robot localization method based on visual features and their geometric relationship , pp.46-50 http://dx.doi.org/10.14257/astl.2015.85.11 Robot localization method based on visual features and their geometric relationship Sangyun Lee 1, Changkyung Eem 2, and Hyunki Hong 3 1 Department

More information

The Novel Approach for 3D Face Recognition Using Simple Preprocessing Method

The Novel Approach for 3D Face Recognition Using Simple Preprocessing Method The Novel Approach for 3D Face Recognition Using Simple Preprocessing Method Parvin Aminnejad 1, Ahmad Ayatollahi 2, Siamak Aminnejad 3, Reihaneh Asghari Abstract In this work, we presented a novel approach

More information

BendIT An Interactive Game with two Robots

BendIT An Interactive Game with two Robots BendIT An Interactive Game with two Robots Tim Niemueller, Stefan Schiffer, Albert Helligrath, Safoura Rezapour Lakani, and Gerhard Lakemeyer Knowledge-based Systems Group RWTH Aachen University, Aachen,

More information

Edge Enhanced Depth Motion Map for Dynamic Hand Gesture Recognition

Edge Enhanced Depth Motion Map for Dynamic Hand Gesture Recognition 2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops Edge Enhanced Depth Motion Map for Dynamic Hand Gesture Recognition Chenyang Zhang and Yingli Tian Department of Electrical Engineering

More information

INTERACTIVE 3D ANIMATION SYSTEM BASED ON TOUCH INTERFACE AND EFFICIENT CREATION TOOLS. Anonymous ICME submission

INTERACTIVE 3D ANIMATION SYSTEM BASED ON TOUCH INTERFACE AND EFFICIENT CREATION TOOLS. Anonymous ICME submission INTERACTIVE 3D ANIMATION SYSTEM BASED ON TOUCH INTERFACE AND EFFICIENT CREATION TOOLS Anonymous ICME submission ABSTRACT Recently importance of tablet devices with touch interface increases significantly,

More information

FOREGROUND DETECTION ON DEPTH MAPS USING SKELETAL REPRESENTATION OF OBJECT SILHOUETTES

FOREGROUND DETECTION ON DEPTH MAPS USING SKELETAL REPRESENTATION OF OBJECT SILHOUETTES FOREGROUND DETECTION ON DEPTH MAPS USING SKELETAL REPRESENTATION OF OBJECT SILHOUETTES D. Beloborodov a, L. Mestetskiy a a Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University,

More information

Masked Face Detection based on Micro-Texture and Frequency Analysis

Masked Face Detection based on Micro-Texture and Frequency Analysis International Journal of Current Engineering and Technology E-ISSN 2277 4106, P-ISSN 2347 5161 2015 INPRESSCO, All Rights Reserved Available at http://inpressco.com/category/ijcet Research Article Masked

More information

Preliminary Local Feature Selection by Support Vector Machine for Bag of Features

Preliminary Local Feature Selection by Support Vector Machine for Bag of Features Preliminary Local Feature Selection by Support Vector Machine for Bag of Features Tetsu Matsukawa Koji Suzuki Takio Kurita :University of Tsukuba :National Institute of Advanced Industrial Science and

More information

A Novel Hand Posture Recognition System Based on Sparse Representation Using Color and Depth Images

A Novel Hand Posture Recognition System Based on Sparse Representation Using Color and Depth Images 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) November 3-7, 2013. Tokyo, Japan A Novel Hand Posture Recognition System Based on Sparse Representation Using Color and Depth

More information

Skeleton based Human Action Recognition using Kinect

Skeleton based Human Action Recognition using Kinect Skeleton based Human Action Recognition using Kinect Ayushi Gahlot Purvi Agarwal Akshya Agarwal Vijai Singh IMS Engineering college, Amit Kumar Gautam ABSTRACT This paper covers the aspects of action recognition

More information

Color Content Based Image Classification

Color Content Based Image Classification Color Content Based Image Classification Szabolcs Sergyán Budapest Tech sergyan.szabolcs@nik.bmf.hu Abstract: In content based image retrieval systems the most efficient and simple searches are the color

More information

6. Applications - Text recognition in videos - Semantic video analysis

6. Applications - Text recognition in videos - Semantic video analysis 6. Applications - Text recognition in videos - Semantic video analysis Stephan Kopf 1 Motivation Goal: Segmentation and classification of characters Only few significant features are visible in these simple

More information

Model-based segmentation and recognition from range data

Model-based segmentation and recognition from range data Model-based segmentation and recognition from range data Jan Boehm Institute for Photogrammetry Universität Stuttgart Germany Keywords: range image, segmentation, object recognition, CAD ABSTRACT This

More information

Robust Fingertip Tracking with Improved Kalman Filter

Robust Fingertip Tracking with Improved Kalman Filter Robust Fingertip Tracking with Improved Kalman Filter Chunyang Wang and Bo Yuan Intelligent Computing Lab, Division of Informatics Graduate School at Shenzhen, Tsinghua University Shenzhen 518055, P.R.

More information

Face Recognition Using Vector Quantization Histogram and Support Vector Machine Classifier Rong-sheng LI, Fei-fei LEE *, Yan YAN and Qiu CHEN

Face Recognition Using Vector Quantization Histogram and Support Vector Machine Classifier Rong-sheng LI, Fei-fei LEE *, Yan YAN and Qiu CHEN 2016 International Conference on Artificial Intelligence: Techniques and Applications (AITA 2016) ISBN: 978-1-60595-389-2 Face Recognition Using Vector Quantization Histogram and Support Vector Machine

More information

Human Gait Recognition Using Bezier Curves

Human Gait Recognition Using Bezier Curves Human Gait Recognition Using Bezier Curves Pratibha Mishra Samrat Ashok Technology Institute Vidisha, (M.P.) India Shweta Ezra Dhar Polytechnic College Dhar, (M.P.) India Abstract-- Gait recognition refers

More information

Medical images, segmentation and analysis

Medical images, segmentation and analysis Medical images, segmentation and analysis ImageLab group http://imagelab.ing.unimo.it Università degli Studi di Modena e Reggio Emilia Medical Images Macroscopic Dermoscopic ELM enhance the features of

More information

Real-Time Model-Based Hand Localization for Unsupervised Palmar Image Acquisition

Real-Time Model-Based Hand Localization for Unsupervised Palmar Image Acquisition Real-Time Model-Based Hand Localization for Unsupervised Palmar Image Acquisition Ivan Fratric 1, Slobodan Ribaric 1 1 University of Zagreb, Faculty of Electrical Engineering and Computing, Unska 3, 10000

More information

Gait analysis for person recognition using principal component analysis and support vector machines

Gait analysis for person recognition using principal component analysis and support vector machines Gait analysis for person recognition using principal component analysis and support vector machines O V Strukova 1, LV Shiripova 1 and E V Myasnikov 1 1 Samara National Research University, Moskovskoe

More information

Automatic Colorization of Grayscale Images

Automatic Colorization of Grayscale Images Automatic Colorization of Grayscale Images Austin Sousa Rasoul Kabirzadeh Patrick Blaes Department of Electrical Engineering, Stanford University 1 Introduction ere exists a wealth of photographic images,

More information

AN EFFICIENT METHOD FOR HUMAN POINTING ESTIMATION FOR ROBOT INTERACTION. Satoshi Ueno, Sei Naito, and Tsuhan Chen

AN EFFICIENT METHOD FOR HUMAN POINTING ESTIMATION FOR ROBOT INTERACTION. Satoshi Ueno, Sei Naito, and Tsuhan Chen AN EFFICIENT METHOD FOR HUMAN POINTING ESTIMATION FOR ROBOT INTERACTION Satoshi Ueno, Sei Naito, and Tsuhan Chen KDDI R&D Laboratories, Inc. Cornell University ABSTRACT In this paper, we propose an efficient

More information

A Novel Smoke Detection Method Using Support Vector Machine

A Novel Smoke Detection Method Using Support Vector Machine A Novel Smoke Detection Method Using Support Vector Machine Hidenori Maruta Information Media Center Nagasaki University, Japan 1-14 Bunkyo-machi, Nagasaki-shi Nagasaki, Japan Email: hmaruta@nagasaki-u.ac.jp

More information

Learning 3D Part Detection from Sparsely Labeled Data: Supplemental Material

Learning 3D Part Detection from Sparsely Labeled Data: Supplemental Material Learning 3D Part Detection from Sparsely Labeled Data: Supplemental Material Ameesh Makadia Google New York, NY 10011 makadia@google.com Mehmet Ersin Yumer Carnegie Mellon University Pittsburgh, PA 15213

More information

International Journal of Modern Engineering and Research Technology

International Journal of Modern Engineering and Research Technology Volume 4, Issue 3, July 2017 ISSN: 2348-8565 (Online) International Journal of Modern Engineering and Research Technology Website: http://www.ijmert.org Email: editor.ijmert@gmail.com A Novel Approach

More information

Hand Posture Recognition Using Adaboost with SIFT for Human Robot Interaction

Hand Posture Recognition Using Adaboost with SIFT for Human Robot Interaction Hand Posture Recognition Using Adaboost with SIFT for Human Robot Interaction Chieh-Chih Wang and Ko-Chih Wang Department of Computer Science and Information Engineering Graduate Institute of Networking

More information

Learning to Recognize Faces in Realistic Conditions

Learning to Recognize Faces in Realistic Conditions 000 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 034 035 036 037 038 039 040 041 042 043 044 045 046 047 048 049 050

More information

Task analysis based on observing hands and objects by vision

Task analysis based on observing hands and objects by vision Task analysis based on observing hands and objects by vision Yoshihiro SATO Keni Bernardin Hiroshi KIMURA Katsushi IKEUCHI Univ. of Electro-Communications Univ. of Karlsruhe Univ. of Tokyo Abstract In

More information

Illumination-Robust Face Recognition based on Gabor Feature Face Intrinsic Identity PCA Model

Illumination-Robust Face Recognition based on Gabor Feature Face Intrinsic Identity PCA Model Illumination-Robust Face Recognition based on Gabor Feature Face Intrinsic Identity PCA Model TAE IN SEOL*, SUN-TAE CHUNG*, SUNHO KI**, SEONGWON CHO**, YUN-KWANG HONG*** *School of Electronic Engineering

More information

Sign Language Recognition using Dynamic Time Warping and Hand Shape Distance Based on Histogram of Oriented Gradient Features

Sign Language Recognition using Dynamic Time Warping and Hand Shape Distance Based on Histogram of Oriented Gradient Features Sign Language Recognition using Dynamic Time Warping and Hand Shape Distance Based on Histogram of Oriented Gradient Features Pat Jangyodsuk Department of Computer Science and Engineering The University

More information

Real-Time Continuous Action Detection and Recognition Using Depth Images and Inertial Signals

Real-Time Continuous Action Detection and Recognition Using Depth Images and Inertial Signals Real-Time Continuous Action Detection and Recognition Using Depth Images and Inertial Signals Neha Dawar 1, Chen Chen 2, Roozbeh Jafari 3, Nasser Kehtarnavaz 1 1 Department of Electrical and Computer Engineering,

More information

Construction Progress Management and Interior Work Analysis Using Kinect 3D Image Sensors

Construction Progress Management and Interior Work Analysis Using Kinect 3D Image Sensors 33 rd International Symposium on Automation and Robotics in Construction (ISARC 2016) Construction Progress Management and Interior Work Analysis Using Kinect 3D Image Sensors Kosei Ishida 1 1 School of

More information

Multi-projector-type immersive light field display

Multi-projector-type immersive light field display Multi-projector-type immersive light field display Qing Zhong ( é) 1, Beishi Chen (í ì) 1, Haifeng Li (Ó ô) 1, Xu Liu ( Ê) 1, Jun Xia ( ) 2, Baoping Wang ( ) 2, and Haisong Xu (Å Ø) 1 1 State Key Laboratory

More information

An Interactive Technique for Robot Control by Using Image Processing Method

An Interactive Technique for Robot Control by Using Image Processing Method An Interactive Technique for Robot Control by Using Image Processing Method Mr. Raskar D. S 1., Prof. Mrs. Belagali P. P 2 1, E&TC Dept. Dr. JJMCOE., Jaysingpur. Maharashtra., India. 2 Associate Prof.

More information

Human Gait Recognition using All Pair Shortest Path

Human Gait Recognition using All Pair Shortest Path 2011 International Conference on Software and Computer Applications IPCSIT vol.9 (2011) (2011) IACSIT Press, Singapore Human Gait Recognition using All Pair Shortest Path Jyoti Bharti 1+, M.K Gupta 2 1

More information

Document downloaded from: The final publication is available at: Copyright.

Document downloaded from: The final publication is available at: Copyright. Document downloaded from: http://hdl.handle.net/1459.1/62843 The final publication is available at: https://doi.org/1.17/978-3-319-563-8_4 Copyright Springer International Publishing Switzerland, 213 Evaluation

More information

Real Time Motion Authoring of a 3D Avatar

Real Time Motion Authoring of a 3D Avatar Vol.46 (Games and Graphics and 2014), pp.170-174 http://dx.doi.org/10.14257/astl.2014.46.38 Real Time Motion Authoring of a 3D Avatar Harinadha Reddy Chintalapalli and Young-Ho Chai Graduate School of

More information

Robust and Accurate Detection of Object Orientation and ID without Color Segmentation

Robust and Accurate Detection of Object Orientation and ID without Color Segmentation 0 Robust and Accurate Detection of Object Orientation and ID without Color Segmentation Hironobu Fujiyoshi, Tomoyuki Nagahashi and Shoichi Shimizu Chubu University Japan Open Access Database www.i-techonline.com

More information

SCIENCE & TECHNOLOGY

SCIENCE & TECHNOLOGY Pertanika J. Sci. & Technol. 25 (S): 107-114 (2017) SCIENCE & TECHNOLOGY Journal homepage: http://www.pertanika.upm.edu.my/ Invariant Feature Descriptor based on Harmonic Image Transform for Plant Leaf

More information

Collecting outdoor datasets for benchmarking vision based robot localization

Collecting outdoor datasets for benchmarking vision based robot localization Collecting outdoor datasets for benchmarking vision based robot localization Emanuele Frontoni*, Andrea Ascani, Adriano Mancini, Primo Zingaretti Department of Ingegneria Infromatica, Gestionale e dell

More information

Handheld scanning with ToF sensors and cameras

Handheld scanning with ToF sensors and cameras Handheld scanning with ToF sensors and cameras Enrico Cappelletto, Pietro Zanuttigh, Guido Maria Cortelazzo Dept. of Information Engineering, University of Padova enrico.cappelletto,zanuttigh,corte@dei.unipd.it

More information

Large-Scale Traffic Sign Recognition based on Local Features and Color Segmentation

Large-Scale Traffic Sign Recognition based on Local Features and Color Segmentation Large-Scale Traffic Sign Recognition based on Local Features and Color Segmentation M. Blauth, E. Kraft, F. Hirschenberger, M. Böhm Fraunhofer Institute for Industrial Mathematics, Fraunhofer-Platz 1,

More information

Gesture Recognition Technique:A Review

Gesture Recognition Technique:A Review Gesture Recognition Technique:A Review Nishi Shah 1, Jignesh Patel 2 1 Student, Indus University, Ahmedabad 2 Assistant Professor,Indus University,Ahmadabad Abstract Gesture Recognition means identification

More information

3D Face and Hand Tracking for American Sign Language Recognition

3D Face and Hand Tracking for American Sign Language Recognition 3D Face and Hand Tracking for American Sign Language Recognition NSF-ITR (2004-2008) D. Metaxas, A. Elgammal, V. Pavlovic (Rutgers Univ.) C. Neidle (Boston Univ.) C. Vogler (Gallaudet) The need for automated

More information

Recognition of Gurmukhi Text from Sign Board Images Captured from Mobile Camera

Recognition of Gurmukhi Text from Sign Board Images Captured from Mobile Camera International Journal of Information & Computation Technology. ISSN 0974-2239 Volume 4, Number 17 (2014), pp. 1839-1845 International Research Publications House http://www. irphouse.com Recognition of

More information