Real Time Motion Authoring of a 3D Avatar


Advanced Science and Technology Letters (ASTL) Vol. 46 (Games and Graphics 2014), pp. 170-174
http://dx.doi.org/10.14257/astl.2014.46.38

Harinadha Reddy Chintalapalli and Young-Ho Chai
Graduate School of Advanced Imaging Science, Multimedia & Film, Chung-Ang University
{harinath, yhchai}@cau.ac.kr

Abstract. A wide variety of avatars that express human motions are produced in the fields of film, computer games, and virtual reality. In order to simulate human motions realistically, data are collected through a sensor-based method such as motion capture and organized into a database; motions are then generated on the basis of this database. Human body motions can be reproduced naturally from a vast amount of data, but complicated calculations are necessary to generate these motions, and thus it is not easy to generate an interactive three-dimensional (3D) avatar in real time. In order to generate the motions of a 3D mobile avatar in real time, in this study we implement a dataset with an intuitive classification through pattern recognition and propose the mapping of various avatar motions using a flexible and adaptive hierarchical algorithm. For real-time motion authoring, we use a support vector machine (SVM) to classify the types of motion pattern data input from a magnetic, angular rate, gravity (MARG) sensor.

Keywords: Mobile avatar, Motion authoring, Pose classification, MARG sensor, Support vector machine

1 Introduction

Creating a 3D avatar with natural motions is an issue frequently dealt with in the fields of film, computer games, and virtual reality. To address this issue, software (such as Poser and Motion Builder) or various sensors are used for extracting and processing data to generate various forms of motions. To generate motions by using this method, the following requirements must be fulfilled: first, a large amount of data is required for generating the natural motions of an avatar; second, the processing and interpolation of data are necessary to connect the motions smoothly; and finally, the user must be able to generate motions intuitively [8].

In this study, we use a gyro sensor so that users can train motion patterns [4] of their choice with the gyro and then generate new motions of the avatar in real time from a pre-existing motion dataset through the learned patterns. For the motion data of the avatar, we can use files of various formats or a database management system (DBMS) containing motion data. For example, the BVH file format includes the skeleton information and the motions of the skeleton in each frame, and the Wavefront format includes vertex and texture information. The FBX format is a file format that includes many types of information such as actual motion capture data and skeleton structure, and the C3D format is a form of data extracted from the motion capture system. This form of data can be used directly, managed by a DBMS, or searched to generate motions using a query. Various other formats exist, and many studies are being conducted to generate efficient data formats.

In this paper, we propose a hierarchical structure in which real-time motion generation is also possible on mobile devices [5-7], using formats selected arbitrarily from among the various motion data formats. The basic approach is as follows: the user uses the gyro to make a series of rotary motions, and the motion data in one-to-one correspondence are then accessed to form the motion of the avatar. An SVM [2, 3] is applied to the rotation data of the gyro moved by the user to learn, predict, and estimate the rotary motions. Because the target is a mobile device, we need to ensure that appropriately sampled data are used in training the SVM in order to reduce the load.

To generate the avatar's motions using the gyro, three processes are followed: the start process, the order process, and the end process. The start process selects joint groups to facilitate significant motion authoring; the order process sets motions in the selected joint groups; and the end process completes the motion authoring or sets additional options for the motions. This series of processes minimizes the number of patterns created from the 3-axis data, because the number of digits or letters that humans can immediately remember is generally 7±2 [7].

In this study, we aim to show the utility of the proposed method by presenting real-time authoring examples for avatar motions on mobile devices. For the experiment, various motion data are used, and the user can use the touch sensor and the gyro sensor to directly and immediately control the avatar. The method proposed in this paper can be implemented relatively simply and used in a wide range of applications.

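The start/order/end workflow described above can be read as a small state machine. The following is a minimal Python sketch of one possible organization; the class and method names (AuthoringPhase, select_joint_group, and so on) are illustrative assumptions, not part of the paper.

```python
from enum import Enum, auto


class AuthoringPhase(Enum):
    START = auto()  # select the joint groups to author
    ORDER = auto()  # assign motions to the selected joint groups
    END = auto()    # finish authoring or set additional options


class AuthoringSession:
    """Hypothetical container for one start/order/end authoring pass."""

    def __init__(self):
        self.phase = AuthoringPhase.START
        self.joint_groups = []
        self.assigned_motions = []

    def select_joint_group(self, group_name):
        # start process: choose which joint groups will be animated
        assert self.phase is AuthoringPhase.START
        self.joint_groups.append(group_name)

    def assign_motion(self, group_name, motion_label):
        # order process: map a recognized motion pattern onto a joint group
        self.phase = AuthoringPhase.ORDER
        self.assigned_motions.append((group_name, motion_label))

    def finish(self, options=None):
        # end process: close the authoring pass, optionally with extra options
        self.phase = AuthoringPhase.END
        return {"groups": self.joint_groups,
                "motions": self.assigned_motions,
                "options": options or {}}
```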
2 Real time motion pattern recognition

Fig. 1. (a) Motion pattern recognition. (b) Context-aware action classification.

Equations 1 and 2 describe how to compute the attitude quaternion ${}^{e}Q_t$ by numerical integration [1], where $\Delta t$ is the sampling period, ${}^{e}Q_{t-1}$ is the previous normalized attitude quaternion, and $q_0$, $q_1$, $q_2$, and $q_3$ represent the elements of the quaternion ${}^{e}Q_t$:

${}^{e}\dot{Q}_t = \tfrac{1}{2}\,{}^{e}Q_{t-1} \otimes \omega_t$   (1)

${}^{e}Q_t = [\,q_0\;\;q_1\;\;q_2\;\;q_3\,] = {}^{e}Q_{t-1} + {}^{e}\dot{Q}_t\,\Delta t$   (2)
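The following is a minimal sketch of this update step (not the authors' implementation), assuming the gyroscope output $\omega_t$ is an angular rate in rad/s treated as the pure quaternion $[0, \omega_x, \omega_y, \omega_z]$ and that the result is re-normalized after each step:

```python
import numpy as np


def quat_multiply(a, b):
    """Hamilton product of quaternions given as [w, x, y, z]."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([aw * bw - ax * bx - ay * by - az * bz,
                     aw * bx + ax * bw + ay * bz - az * by,
                     aw * by - ax * bz + ay * bw + az * bx,
                     aw * bz + ax * by - ay * bx + az * bw])


def update_attitude(q_prev, gyro_rad_s, dt):
    """One step of Eqs. (1)-(2): integrate the gyro rate into the attitude quaternion."""
    omega = np.array([0.0, *gyro_rad_s])        # angular rate as a pure quaternion
    q_dot = 0.5 * quat_multiply(q_prev, omega)  # Eq. (1)
    q = q_prev + q_dot * dt                     # Eq. (2)
    return q / np.linalg.norm(q)                # keep the attitude quaternion normalized


# Example: a constant 90 deg/s rotation about z, sampled at 100 Hz
q = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(100):
    q = update_attitude(q, (0.0, 0.0, np.deg2rad(90.0)), dt=0.01)
```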
The implemented motion pattern recognition block diagram is shown in Figure 1(a). Training is a one-time procedure that must be completed first in order to recognize motion patterns in real time. The training data is a database of recorded sensor data for the different motion patterns; this database contains several samples of each motion pattern for training the algorithm. The training data is then quantized to reduce the amount of data to be trained, because the sensor system provides data at high rates (> 100 Hz) and using all of this data for training is unnecessary. After training is finished, the resulting model from the training phase is used for real-time prediction of the motion pattern made with the sensor data. The feature extraction module extracts features, such as the absolute magnitude of the recognized motion pattern, which are used later. We used a radial basis function (RBF) kernel for the SVM classifier because inertial sensor data contain zero-mean Gaussian noise. The classifier outputs the predicted motion pattern label along with an accuracy measure. Depending on this accuracy and the computed features, the predicted motion pattern is accepted as a success.
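As an illustration of this pipeline, the sketch below uses scikit-learn's SVC (which wraps LIBSVM [2]) with an RBF kernel. The downsampling step standing in for quantization, the window shape, the magnitude feature, and the acceptance threshold are assumptions made for the example, not values from the paper.

```python
import numpy as np
from sklearn.svm import SVC


def make_feature_vector(window, step=4):
    """Turn one (N x 3) gyro recording into a feature vector: keep every
    `step`-th sample (a stand-in for the quantization step) and append the
    mean absolute magnitude. Feature choices here are illustrative."""
    reduced = window[::step].ravel()
    magnitude = np.linalg.norm(window, axis=1).mean()
    return np.append(reduced, magnitude)


def train(recordings, labels):
    """One-time training phase: several recordings per motion pattern."""
    X = np.stack([make_feature_vector(w) for w in recordings])  # equal-length windows assumed
    clf = SVC(kernel="rbf", probability=True)                   # RBF kernel, as in the paper
    clf.fit(X, np.asarray(labels))
    return clf


def predict(clf, window, accept_threshold=0.6):
    """Real-time phase: predict a pattern label with an accuracy measure and
    accept the prediction only if the measure is high enough."""
    probs = clf.predict_proba(make_feature_vector(window).reshape(1, -1))[0]
    best = int(np.argmax(probs))
    label, score = clf.classes_[best], float(probs[best])
    return (label, score) if score >= accept_threshold else (None, score)
```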

3 Motion dataset generation and results

The avatar's actions are classified according to the context, which is connected to the movements of the avatar. For example, the action of giving and receiving an object can occur in various contexts, as shown in Figure 1(b). This study uses motion data of around 2,000 processed frames, prepared with the Poser software based on motion capture data. For the motion data format, the BVH format is used.

Fig. 2. Posture generation process.

In this study, we use BVH because human motions can be generated from the central axis (hip) position and each skeleton segment's rotation data, both of which BVH provides. Figure 2 shows the motion generation process using BVH: the skeleton information is extracted from the BVH file and structured, and the motion information suitable for the generated skeleton is then mapped in frame units to generate motions. For example, the position of joint x is predicted by adding all offsets of its parent joints to the ROOT position in the present frame using Equation 3, where k is the parent joint count:

$P_{Joint} = P_{Root} + \sum_{i=1}^{k} P_{ParentJoint,i}$   (3)

All joints have a position property and a rotation property. The rotation of joint x is predicted by multiplying the rotation quaternion of ROOT by the rotation quaternions of all parent joints, and finally by the joint's own local rotation, as shown in Equation 4, where Q and k represent a quaternion and the parent joint count, respectively:

$Q_{Joint} = Q_{Root} \otimes \left(\prod_{i=1}^{k} Q_{ParentJoint,i}\right) \otimes Q_{joint}$   (4)
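A minimal sketch of Equations 3 and 4 (not the authors' implementation), assuming each joint stores its OFFSET from the BVH hierarchy, its per-frame local rotation quaternion, and a reference to its parent, while the ROOT's translation and rotation are taken from the frame channels:

```python
import numpy as np


def quat_multiply(a, b):
    """Hamilton product of quaternions given as [w, x, y, z]."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([aw * bw - ax * bx - ay * by - az * bz,
                     aw * bx + ax * bw + ay * bz - az * by,
                     aw * by - ax * bz + ay * bw + az * bx,
                     aw * bz + ax * by - ay * bx + az * bw])


class Joint:
    """Hypothetical BVH joint node."""

    def __init__(self, name, offset, parent=None):
        self.name = name
        self.offset = np.asarray(offset, dtype=float)          # OFFSET from the BVH hierarchy
        self.parent = parent
        self.local_rotation = np.array([1.0, 0.0, 0.0, 0.0])   # set from the current frame


def global_position(joint, root_position):
    """Eq. (3): ROOT position in the current frame plus the offsets of the parent joints."""
    pos = np.asarray(root_position, dtype=float)
    p = joint.parent
    while p is not None and p.parent is not None:  # walk the parent joints below ROOT
        pos = pos + p.offset
        p = p.parent
    return pos


def global_rotation(joint, root_rotation):
    """Eq. (4): ROOT rotation, composed with each parent rotation from the ROOT
    downwards, and finally with the joint's own local rotation."""
    chain = []
    p = joint.parent
    while p is not None and p.parent is not None:
        chain.append(p)
        p = p.parent
    q = np.asarray(root_rotation, dtype=float)
    for parent in reversed(chain):
        q = quat_multiply(q, parent.local_rotation)
    return quat_multiply(q, joint.local_rotation)
```

The sketch follows the equations as written; a full BVH forward-kinematics pass would additionally rotate each offset into its parent's frame before summing.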

4 Conclusion

In this study, we have successfully generated various forms of motions from limited motion data by using an SVM classifier for MARG sensor motion pattern recognition within a context-aware adaptive hierarchical structure. The context-aware structure proposed in this paper would be the optimal method for real-time interactive authoring of an avatar's motions through efficient bifurcation with limited resources, and it enables relatively easy motion authoring by classifying the joint groups.

References

1. Sabatini, A.: Quaternion-based extended Kalman filter for determining orientation by inertial and magnetic sensing. IEEE Transactions on Biomedical Engineering, vol. 53, no. 7, pp. 1346-1356 (2006).
2. Chang, C.-C., Lin, C.-J.: LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol., vol. 2, no. 3, pp. 27:1-27:27 (2011).
3. Cristianini, N., Shawe-Taylor, J.: An Introduction to Support Vector Machines and Other Kernel-based Learning Methods. Cambridge University Press, New York, NY, USA (2000).
4. Liu, J., Zhong, L., Wickramasuriya, J., Vasudevan, V.: uWave: Accelerometer-based personalized gesture recognition and its applications. Pervasive Mob. Comput., vol. 5, no. 6, pp. 657-675 (2009).
5. Madgwick, S., Harrison, A. J. L., Vaidyanathan, R.: Estimation of IMU and MARG orientation using a gradient descent algorithm. In: 2011 IEEE International Conference on Rehabilitation Robotics (ICORR), pp. 1-7 (2011).
6. Purkayastha, S. N., Eckenstein, N., Byrne, M. D., O'Malley, M.: Analysis and comparison of low cost gaming controllers for motion analysis. IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM) (2010).
7. Miller, G. A.: The magical number seven, plus or minus two: Some limits on our capacity for processing information. The Psychological Review, vol. 63, no. 2, pp. 81-97, March (1956).
8. Serizawa, T., Yanagida, Y.: Poster: Authoring tool for intuitive editing of avatar pose using a virtual puppet. In: Proceedings of the 2008 IEEE Symposium on 3D User Interfaces, pp. 153-154. IEEE Computer Society (2008).