Upper Body Pose Recognition with Labeled Depth Body Parts via Random Forests and Support Vector Machines


Myeong-Jun Lim, Jin-Ho Cho, Hee-Sok Han, and Tae-Seong Kim

Abstract: Human pose recognition has become an active research topic lately in the field of human-computer interface (HCI). However, it presents technical challenges due to the complexity of human motion. In this paper, we propose a novel methodology for human upper body pose recognition using labeled (i.e., recognized) human body parts in depth silhouettes. Our proposed method performs human upper body parts labeling using trained random forests (RFs) and utilizes support vector machines (SVMs) to recognize various upper body poses. To train the RFs, we create a database of synthetic depth silhouettes of the upper body and their corresponding labeled body parts maps using a commercial computer graphics package. Once the body parts are labeled with the trained RFs, a skeletal upper body model is generated from the labeled body parts. Then, SVMs are trained with a set of joint angle features to recognize seven upper body poses. The experimental results show a mean recognition rate of 97.62%. Our proposed method should be useful as a near-field HCI technique in applications such as smart computer interfaces.

Index Terms: Upper body pose recognition, body parts labeling, random forests, support vector machines.

Manuscript received October 30, 2012; revised December 13. Myeong-Jun Lim, Jin-Ho Cho, Hee-Sok Han, and Tae-Seong Kim are with the Department of Biomedical Engineering, Kyung Hee University, Yong In, Republic of Korea (e-mail: tskim@khu.ac.kr).

I. INTRODUCTION

Human pose recognition has become an active research topic lately in the field of human-computer interface (HCI). However, it presents technical challenges due to the complexity of human motion. Most previous studies on human pose recognition are based on color RGB images from which body parts are detected with respect to skin color or shape information. For instance, Lee et al. detected head and shoulder contours using maximum a posteriori probability from RGB images and estimated the pose using a body outline model [1]. Oh et al. proposed upper body pose estimation using a distance transform from human silhouettes in RGB images [2]. Their proposed method worked under a restricted environment with sufficient light. In general, these RGB image based methods are sensitive to light and background conditions. For improved recognition of body poses, stereo cameras have been tested. For instance, J. Mulligan estimated the upper body pose from 3D stereo images [3]. Chu et al. also used the disparity maps from a stereo camera and detected the head and hand using Haar features in the pre-populated space [4]. Song et al. also proposed a technique for upper body pose estimation in which they detected the hand using the skin color and estimated the upper body poses using depth maps from a stereo camera [5]. Cavin et al. segmented the upper body parts into eight regions and tracked a set of joints using likelihood-based classification in a Bayesian network [6]. Recently, a new type of depth camera has been introduced which utilizes an optical source and a depth imaging sensor. This new camera is less sensitive to lighting conditions. With this camera, Jain et al. proposed a method to estimate upper body pose via a weighted distance transformation [7].
However, their method could not overcome a merging problem because it used only 2D information from the weighted distance transformation. Zhu et al. defined eight points as upper body joints, and fitted the torso and head using a likelihood function with initial poses [8]. Then, they estimated the arm pose using the regions connected with the torso. These studies suggest that if body parts could be identified directly in depth silhouettes, improved pose recognition would become possible. Recently, Shotton et al. introduced a real-time body parts labeling methodology using random forests (RFs) [9]. They showed the feasibility of recognizing 31 human body parts from a whole-body depth silhouette. Their methodology worked only in a relatively far field (approximately 2-3 meters) due to the limitation of the depth camera, and required a database created with optical markers and a complicated motion capture setup to train the RFs. No work on pose recognition was performed.

In this work, we propose a methodology of upper body pose recognition with a depth camera that supports the near field, and we propose a new way of creating a training database. First, we have created a purely synthetic training database without optical markers or motion capture settings. This database is used to train RFs to recognize upper human body parts. Then, a skeletal model is generated from the labeled body parts. Second, support vector machines (SVMs) are trained to recognize seven upper body poses with joint angle features from the skeletal model. We have achieved a mean recognition rate of 97.62%. Our proposed method works fast and robustly, and should be applicable to human-computer interfaces in the near field.

II. METHODS

To recognize upper body parts, we first create a database of depth silhouettes of various upper body poses and their corresponding labeled body parts maps using a commercial computer graphics package, 3ds Max [10]. This database is used to train random forests (RFs). From the body parts labeled by the trained RFs, we generate a skeleton model to obtain joint angle feature vectors and apply linear discriminant analysis (LDA) to reduce the feature dimensions and separate the feature vectors more clearly.

Then, we recognize the upper body poses using support vector machines (SVMs). Fig. 1 shows the overall flow of our algorithms: Fig. 1(a) shows the flow of body parts labeling via training RFs, Fig. 1(b) the flow of training SVMs, and Fig. 1(c) the testing process using the trained RFs and SVMs.

Fig. 1. Overall flow of our recognition system: (a) processes of body parts labeling via RFs, (b) processes of training SVMs for upper body pose recognition, and (c) processes of real-time upper body pose recognition.

A. Depth Imaging

In this study, we utilize a Z-cam which can capture depth images in the near field [11]. The imaging parameters were set to an image size of 240x320, a field of view of 60 degrees, and a frame rate of 30 fps. The distance from the camera to a subject is in a range of 0.5 m to 1.5 m, in which a subject's upper body can be captured in the field of view of the camera.

B. 3D Upper Body Modeling and Database Generation

To train the RFs, we create a database (DB) using 3ds Max [10]. According to human body ratio information, we create an upper body skin model, which is a set of polygons, and do the same for a bone model. Our bone model consists of twelve bones (i.e., head, neck, spine, spine1, right shoulder, right upper arm, right forearm, right hand, left shoulder, left upper arm, left forearm, and left hand). Then, we modify the bone model to match the skin model in size and synchronize the bone and skin models. To create a body parts labeled model, the upper body is divided into twelve body parts and each body part is assigned a different color. Our upper body model has the following parts: head, torso, right shoulder, right upper arm, right elbow, right forearm, right hand, left shoulder, left upper arm, left elbow, left forearm, and left hand. To render images of the body models, we adjust the camera direction and distance to the upper body model. Finally, we generate images of the depth and labeled body parts. Figs. 2(a) and (b) show our 3-D upper body model, Fig. 2(c) a depth image, and Fig. 2(d) the color labeled map of the body parts corresponding to Fig. 2(c).

Fig. 2. Synthetic database generation: (a) front view of our 3D upper body model, (b) front view of our 3D upper body model with color labeled body parts, (c) depth image of our model, and (d) color labeled body parts map corresponding to (c).

Fig. 3. Samples from our synthetic upper body database used in training RFs.
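To make the use of this synthetic DB concrete, the short Python sketch below shows one way the color-labeled body parts maps could be converted into integer class labels for training. The twelve RGB values in the color table and the file names are purely illustrative assumptions; the actual colors assigned in 3ds Max are not specified here.

```python
# Sketch: turn a color-coded body parts map from the synthetic DB into an
# integer label image (0-11, background = -1). The RGB values below are
# hypothetical; the colors actually assigned in 3ds Max are not given here.
import numpy as np
from PIL import Image

PART_COLORS = {  # hypothetical RGB -> part index
    (255, 0, 0): 0,      # head
    (0, 255, 0): 1,      # torso
    (0, 0, 255): 2,      # right shoulder
    (255, 255, 0): 3,    # right upper arm
    (255, 0, 255): 4,    # right elbow
    (0, 255, 255): 5,    # right forearm
    (128, 0, 0): 6,      # right hand
    (0, 128, 0): 7,      # left shoulder
    (0, 0, 128): 8,      # left upper arm
    (128, 128, 0): 9,    # left elbow
    (128, 0, 128): 10,   # left forearm
    (0, 128, 128): 11,   # left hand
}

def label_map_from_colors(label_image_path):
    """Return an HxW array of body part indices (-1 where unlabeled)."""
    rgb = np.asarray(Image.open(label_image_path).convert("RGB"))
    labels = np.full(rgb.shape[:2], -1, dtype=np.int32)
    for color, part in PART_COLORS.items():
        labels[np.all(rgb == np.array(color, dtype=np.uint8), axis=-1)] = part
    return labels

# Hypothetical file pair from the synthetic DB:
# depth = np.asarray(Image.open("pose_000_depth.png"), dtype=np.float32)
# parts = label_map_from_colors("pose_000_parts.png")
```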

C. Body Parts Labeling via Random Forests

To recognize the body parts, we use RFs as a classifier [9]-[12]. The RFs consist of a number of decision trees, and the ensemble of trees votes to reach a decision [12]. To generate features for the RFs, a set of 2,000 pixels is randomly selected from each upper body depth silhouette (as shown in Fig. 2(c)) in the DB. A window with a size of 160x160 is utilized to select the 2,000 random vector pairs. The features are generated by taking differences in depth values as given below,

f_d(I_p, n) = D(y + u_{n1}, x + u_{n2}) - D(y + v_{n1}, x + v_{n2})    (1)

where D(y, x) is the depth value at the location (y, x) and (u_{n1}, u_{n2}), (v_{n1}, v_{n2}) are the random vector pairs. At the same time, the label of the selected pixel is stored as the ground truth of that feature vector. The feature vectors and ground truths are used as the training set of the RFs. Random subsamples are selected from the training set to train each decision tree via bootstrap sampling [12]. Then, each tree is grown to the fullest extent possible without pruning. At each internal node, the best split is determined using the Gini index among a randomized selection of features [12]. Classification is performed with the majority vote over all individually trained trees.
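A minimal sketch of this per-pixel feature extraction and forest training is given below. It assumes a depth image and its part-label map are already available as NumPy arrays (e.g., from the synthetic DB above) and uses scikit-learn's RandomForestClassifier as a stand-in for the random forests of [12]; the number of random vector pairs per pixel and the number of trees are assumptions not specified here.

```python
# Sketch: depth-difference features of Eq. (1) for randomly sampled pixels,
# followed by random forest training. scikit-learn is used as a stand-in for
# the paper's own RF implementation, which follows Breiman [12].
import numpy as np
from sklearn.ensemble import RandomForestClassifier

N_PIXELS, N_OFFSETS, WINDOW = 2000, 100, 160   # N_OFFSETS is an assumption

def sample_pixel_features(depth, labels, offsets, rng):
    """Sample up to N_PIXELS silhouette pixels and compute Eq. (1) features."""
    h, w = depth.shape
    ys, xs = np.nonzero(labels >= 0)                        # silhouette pixels
    idx = rng.choice(len(ys), size=min(N_PIXELS, len(ys)), replace=False)
    feats, gts = [], []
    for y, x in zip(ys[idx], xs[idx]):
        f = []
        for (u1, u2), (v1, v2) in offsets:                  # random vector pairs
            yu, xu = np.clip(y + u1, 0, h - 1), np.clip(x + u2, 0, w - 1)
            yv, xv = np.clip(y + v1, 0, h - 1), np.clip(x + v2, 0, w - 1)
            f.append(float(depth[yu, xu]) - float(depth[yv, xv]))   # Eq. (1)
        feats.append(f)
        gts.append(labels[y, x])                            # ground-truth part
    return np.array(feats), np.array(gts)

rng = np.random.default_rng(0)
# Random vector pairs drawn within a 160x160 window around each pixel.
offsets = rng.integers(-WINDOW // 2, WINDOW // 2, size=(N_OFFSETS, 2, 2))

# X, y would be accumulated over all depth/label pairs in the synthetic DB:
# X, y = sample_pixel_features(depth, parts, offsets, rng)
forest = RandomForestClassifier(n_estimators=50, criterion="gini", bootstrap=True)
# forest.fit(X, y)                                # per-pixel body-part classifier
# predicted_parts = forest.predict(X_test_pixels) # majority vote over the trees
```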
D. Upper Body Pose Recognition via Support Vector Machines

To recognize the upper body pose, we generate a body skeleton model as shown in Fig. 4. The body skeleton model is created by connecting the centroids of the labeled regions [13]. From the skeleton model, the body joint (i.e., shoulder, elbow, and wrist) points are derived. Then, from the body joint points, we derive directional unit vectors as features to be used in the upper body pose recognition. The equations for these features are shown below,

d(x, y, z) = J_a(x, y, z) - J_b(x, y, z)    (2)

f(i, v) = d(x, y, z) / \sqrt{x^2 + y^2 + z^2}    (3)

where d is the difference between two joint points, f(i, v) is a feature vector, and J_a and J_b are joint points of the upper body, respectively.

Then, we apply linear discriminant analysis (LDA) to the feature vectors to reduce the dimension of the feature space and make our features more compact and robust. It can be expressed as

D_{opt} = \arg\max_D \frac{|D^T S_b D|}{|D^T S_w D|} = [d_1, d_2, ..., d_t]    (4)

where D_opt is the optimal discriminant projection matrix, and S_w and S_b are the within-class and between-class scatter matrices, respectively [14].

To recognize the upper body poses, we use SVMs. LIBSVM [15], [16], an open SVM library available on the web, is utilized.
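The steps of this subsection can be sketched as follows, with scikit-learn's LinearDiscriminantAnalysis and SVC (which wraps LIBSVM [15]) standing in for Eq. (4) and the SVM classifier. The joint names, joint pairs, and the RBF kernel choice are illustrative assumptions, not values taken from the paper.

```python
# Sketch: directional unit-vector features (Eqs. 2-3) from 3D joint points,
# followed by LDA dimensionality reduction (Eq. 4) and SVM pose classification.
# SVC wraps LIBSVM [15]; joint names/pairs and kernel choice are assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

JOINT_PAIRS = [                      # assumed adjacent-joint pairs (J_a, J_b)
    ("neck", "head"),
    ("right_shoulder", "right_elbow"), ("right_elbow", "right_wrist"),
    ("left_shoulder", "left_elbow"),   ("left_elbow", "left_wrist"),
]

def pose_features(joints):
    """joints: dict name -> (x, y, z). Returns concatenated unit vectors."""
    feats = []
    for a, b in JOINT_PAIRS:
        d = np.asarray(joints[a], float) - np.asarray(joints[b], float)  # Eq. (2)
        feats.extend(d / np.linalg.norm(d))                              # Eq. (3)
    return np.array(feats)

# X: one feature row per training frame, y: pose labels for the seven poses.
# X = np.vstack([pose_features(j) for j in training_joint_sets]); y = pose_labels
lda = LinearDiscriminantAnalysis()   # solves the Fisher criterion of Eq. (4)
svm = SVC(kernel="rbf")              # multi-class SVM (one-vs-one, as in LIBSVM)

# X_lda = lda.fit_transform(X, y)    # reduce to at most (n_classes - 1) dims
# svm.fit(X_lda, y)
# predicted_pose = svm.predict(lda.transform(pose_features(test_joints).reshape(1, -1)))
```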

III. RESULTS

In our experiments, we created 350 pairs of synthetic depth and labeled images in our DB using the upper body model. Fig. 3 shows samples of the synthetic database (i.e., depth and labeled images) used in training the RFs. With the trained RFs, every incoming upper body depth silhouette gets labeled into its body parts. Then, we performed the experiments with ten subjects in their twenties. The subjects were asked to make a pose in front of our system, and our system recognized their poses. We divided the subjects into two groups: a group of five subjects providing 100 images per pose to train the SVMs and another group of five subjects providing 100 images per pose for testing. In the experiment, we recognized seven poses: namely stand, right hand lift (RHL), left hand lift (LHL), both hands lift (BHL), upward stretch (US), side stretch (SS), and love sign (LS). Fig. 4(a) shows a depth silhouette of both hands lift from the depth camera and Fig. 4(b) the upper body parts labeled image obtained using the trained RFs. After labeling, we found the centroid of each labeled body part. Fig. 4(c) shows the centroids of the labeled body parts and the skeleton body model superimposed on the labeled silhouette. Fig. 4(d) shows the joint proposals. Fig. 5 shows the features after LDA. Fig. 4(d) and Fig. 6 show the labeling results and the skeletal models. The mean recognition rate of 97.62% was achieved. Table I shows the recognition results of the seven poses in a confusion matrix.

Fig. 4. Features for upper body pose recognition: (a) a depth silhouette of both hands lift, (b) labeled body parts using the trained random forests, (c) a skeletal model superimposed with the centroids, and (d) a skeletal model superimposed with the joints in red boxes.

Fig. 5. A 3D feature plot from the joint points after LDA.

Fig. 6. Skeletal models of six different poses superimposed on their labeled body parts: (a) stand, (b) right hand lift, (c) left hand lift, (d) upward stretch, (e) side stretch, and (f) love sign.

TABLE I: A confusion matrix of upper body pose recognition results over the seven poses (Stand, RHL, LHL, BHL, US, SS, LS), with the mean recognition rate in [%].

IV. CONCLUSION

In this paper, we have implemented an upper body pose recognition system via labeling upper body parts of depth silhouettes using random forests and support vector machines. Our system recognizes seven different upper body poses with a mean recognition rate of 97.62%. We expect that the proposed upper body pose recognition system, which works in a near view field, should be useful for HCI applications such as smart TVs, PCs, and smart homes.

ACKNOWLEDGMENT

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (No. ). This research was supported by the MKE (The Ministry of Knowledge Economy), Korea, under the ITRC (Information Technology Research Center) support program supervised by the NIPA (National IT Industry Promotion Agency) (NIPA-2012-(H )).

REFERENCES

[1] M. Lee and R. Nevatia, Body part detection for human pose estimation and tracking, Workshop on Motion and Video Computing, DC, USA, Feb.
[2] C. M. Oh, M. Z. Islam, and C. W. Lee, A Gesture Recognition Interface with Upper Body Model-based Pose Tracking, International Conference on Computer Engineering and Technology, vol. 7, pp. .
[3] J. Mulligan, Upper body pose estimation from stereo and hand-face tracking, Canadian Conference on Computer and Robot Vision, British Columbia, Canada, pp. 9-11, May.
[4] C. T. Chu and R. Green, Robust Upper Body Pose Recognition in Unconstrained Environments Using Haar-Disparity, in Proceedings of Image and Vision Computing New Zealand 2007, Hamilton, New Zealand, pp. , Dec.
[5] Y. Song, D. Demirdjian, and R. Davis, Tracking Body and Hands for Gesture Recognition: NATOPS Aircraft Handling Signals Database, IEEE International Conference on Automatic Face & Gesture Recognition and Workshops, MA, USA, pp. , March 2011.
[6] R. D. Cavin, A. T. Nefian, and N. Goef, A Bayesian Formulation for 3D Articulated Upper Body Segmentation and Tracking from Dense Disparity Maps, International Conference on Image Processing, Catalonia, Spain, pp. , Sep.
[7] H. Jain and A. Subramanian, Real-time upper-body human pose estimation using a depth camera, HP Technical Reports.
[8] Y. Zhu, B. Dariush, and K. Fujimura, Controlled human pose estimation from depth image streams, CVPR Workshop on Time of Flight Computer Vision, Anchorage, Alaska.
[9] J. Shotton, A. Fitzgibbon, M. Cook, T. Sharp, M. Finocchio, R. Moore, A. Kipman, and A. Blake, Real-Time Human Pose Recognition in Parts from Single Depth Images, Computer Vision and Pattern Recognition, pp. , June.
[10] Autodesk 3ds Max.
[11] Z-Cam, 3DV Systems. [Online]. Available:
[12] L. Breiman, Random Forests, Machine Learning, vol. 45, pp. 5-32.
[13] H. J. A. M. Heijmans, Connected Morphological Operators for Binary Images, Computer Vision and Image Understanding, vol. 73, pp. .
[14] G. McLachlan, Discriminant Analysis and Statistical Pattern Recognition, New York: John Wiley & Sons.
[15] C. C. Chang and C. J. Lin, LIBSVM: a library for support vector machines, ACM Transactions on Intelligent Systems and Technology, vol. 2, no. 3, pp. 1-27, April.
[16] E. Mayoraz and E. Alpaydin, Support vector machines for multi-class classification, in Proc. of International Work-Conference on Artificial Neural Networks, vol. 2, pp. , 1999.

Myeong-Jun Lim received his B.S. degree in Biomedical Engineering from Kyung Hee University, Republic of Korea. He is currently working toward his M.S. degree in the Department of Biomedical Engineering at Kyung Hee University, Republic of Korea. His research interests include image processing, pattern recognition, artificial intelligence, and computer vision.

Jin-Ho Cho received his B.S. degree in Biomedical Engineering from Kyung Hee University, Republic of Korea. He is currently working toward his M.S. degree in the Department of Computer Engineering at Kyung Hee University, Republic of Korea. His research interests include image processing, pattern recognition, artificial intelligence, and machine learning.

Hee-Sok Han received his B.S. degree in Biomedical Engineering from Kyung Hee University, Republic of Korea. He is currently working toward his M.S. degree in the Department of Biomedical Engineering at Kyung Hee University, Republic of Korea. His research interests include image processing, pattern recognition, and ultrasound signal processing.

Tae-Seong Kim received the B.S. degree in Biomedical Engineering from the University of Southern California (USC) in 1991, M.S. degrees in Biomedical and Electrical Engineering from USC in 1993 and 1998 respectively, and a Ph.D. in Biomedical Engineering from USC. After his postdoctoral work in cognitive sciences at the University of California, Irvine in 2000, he joined the Alfred E. Mann Institute for Biomedical Engineering and the Dept. of Biomedical Engineering at USC as a Research Scientist and Research Assistant Professor. In 2004, he moved to Kyung Hee University in South Korea, where he is currently an Associate Professor in the Biomedical Engineering Department. His research interests have spanned various areas of biomedical imaging including Magnetic Resonance Imaging (MRI), functional MRI, E/MEG imaging, DT-MRI, transmission ultrasonic CT, and Magnetic Resonance Electrical Impedance Imaging. Lately he has started research work in proactive computing at the u-lifecare Research Center, where he serves as Vice Director. Dr. Kim has been developing advanced signal and image processing methods, pattern classification and machine learning methods, and novel medical imaging and rehabilitation instruments and technologies. Dr. Kim has published more than 60 peer-reviewed papers and 100 proceedings, and holds 3 international patents. He is a member of IEEE, KOSOMBE, and Tau Beta Pi, and is listed in Who's Who in the World ('09-12).
