Automatic Generation of Animatable 3D Personalized Model Based on Multi-view Images


Seong-Jae Lim, Ho-Won Kim, Jin Sung Choi
CG Team, Contents Division, ETRI, Daejeon, South Korea
sjlim@etri.re.kr

Bon-Ki Koo
Contents Division, ETRI, Daejeon, South Korea

Abstract: We propose a fully-automatic method for generating 3D models of individual people from multi-view images. The automatically constructed 3D model can be deformed and animated by controlling its joints. An animatable 3D personalized model is generated by transferring the joint-skeleton structure and appearance of a generic 3D human model to an individual 3D volumetric model reconstructed from multi-view images. We automatically estimate the joint positions of an individual by computing a weight function that combines the kinematic joint-skeleton structure of the generic 3D human model with anthropometric information. Our generic 3D human model approximates the whole body using sweep surfaces. The vertices on the object boundary are bound to the sweep surfaces and follow their deformation. Thus, an animatable 3D individual model can be generated by transferring the sweep surfaces of the generic 3D model to the individual 3D volumetric model.

Keywords: personalized modeling; animatable model; joint-skeleton; sweep surface; transferring

I. INTRODUCTION

A. Motivation

There has been increasing demand to reconstruct the moving 3D shapes and motions of individual people. Properly estimated 3D human models are therefore useful for a variety of applications, including augmented reality, free-viewpoint video [1], media production for 3D television, the sports industry, surveillance, virtual education, virtual shopping, and gaming. Current methods for human modeling and motion capture rely on active 3D sensing to build the shape of the body surface and on an optical motion capture system to capture the movements of the body.
Such systems are prohibitively expensive and require interactive manual work to build high-quality models. In addition, the meshes of a 3D model are often animated by hand using keyframing and procedural deformations. Procedural approaches can generate mesh deformations with ease and efficiency, but they are difficult to control when the goal is to match a particular motion or a real performance. Current systems for whole-body shape and motion capture are based on shape-from-silhouette approaches and model-based approaches. For generating artificial renditions of a scene from arbitrary novel viewpoints, and for reusing the reconstructed shape and motion, a model-based approach is more powerful. To achieve reliable motion capture of individual people, an articulated 3D human model that closely resembles the moving subject is essential. 3D modeling is becoming much easier than before, but it is still not simple and requires tedious work because the user must rig the model manually. In this paper, we present a fully-automated method for constructing an animatable 3D mesh model of individual people. A 3D human model of an individual is generated by transferring the joint-skeleton structure and appearance of a generic 3D model to an individual 3D volumetric model reconstructed from multi-view images.

B. Background

Most prior related research deals mainly with constructing a surface skin model. 3D laser-scanner systems [2] capture the entire surface of a whole body, but such systems are expensive and require interactive manual work. In contrast, systems that reconstruct whole-body models from captured multi-view images are much cheaper and more readily available. Shape-from-silhouette approaches [3-4] are a popular method for generating a 3D volumetric model from multi-view silhouette images.
However, shape-from-silhouette approaches depend on the quality of the segmentation results and suffer from blocky artifacts when the number of views is insufficient. Some studies have attempted to estimate joint locations [5] in order to extract a skeleton from a sequence of volume data of rigid bodies. However, the resulting skeleton is an estimated stick-figure-like structure that is not suitable for realistic character animation or skinning. There is also a 3D modeling method that uses shape feature points, limb outlines, and a generic 3D model to yield a final customized 3D model [6]. However, user interaction is needed to specify the feature points, and skinning and animation using the anatomical skeleton of the generic 3D model is neither easy nor simple. Automatic rigging [7] presents a method for animating characters automatically.

Figure 1. Overview of our method.

This algorithm adapts a skeleton to a character by minimizing a penalty function and attaches it to the surface, allowing skeletal motion data to animate the character. However, this method assumes that 3D mesh models are already available for rigging and animation. Park and Hodgins [8] used skin motion-capture data to transfer a template mesh model to individual people and captured the skin deformation of those individuals. This approach is limited in that it requires a motion-capture system and a large number of markers (approximately 350) on the subject's skin.

C. Outline of Our Approach

In this paper, we address the automatic construction of an articulated 3D human model that can be deformed and animated. An animatable 3D human model of an individual can be generated by transferring the joint-skeleton structure and appearance of the generic 3D human model to an individual 3D volumetric model reconstructed from multi-view images. Our algorithm consists of two main steps: joint positioning and appearance transfer. We automatically estimate the joint positions of an individual in multi-view images by computing a weight function that combines the kinematic joint-skeleton structure of the generic 3D human model with anthropometric information. The generic 3D human model is called the template model. To create a deformable and animatable 3D model of an individual, we present a transfer method for the sweep-based 3D human model. The sweep-based approach [9-10] represents the appearance of a 3D model as sweep surfaces with elliptic cross-sections. A sweep surface is generated by interpolating elliptic 2D cross-sections that approximate the cross-sectional shapes of various human body parts, and the vertices of a human model are bound to the sweep surfaces and follow their transformations.
By controlling the size and orientation of the one key ellipse assigned to each joint, we can deform the sweep surfaces, and the vertices of the human model bound to the sweep surfaces follow the change. In [12], a star-shaped cross-sectional closed curve is used instead of elliptic cross-sections; this is further extended to freeform deformation. An important advantage of a sweep-based approach for the deformation and animation of a 3D model is volume preservation. We can generate an animatable 3D human model of an individual by transferring the appearance and sweep surfaces of the sweep-based 3D human model to a 3D volumetric model reconstructed from 2D silhouettes. An overview of the automatic generation of a 3D human model is illustrated in Fig. 1.

The rest of this paper is organized as follows. Section II introduces the pre-processing for 3D modeling, including silhouette extraction, feature point extraction, and reconstruction of a 3D volumetric model. Section III describes the joint positioning. The appearance transfer of a sweep-based 3D model is presented in Section IV. Section V describes the post-processing for 3D modeling, including refinement and texturing. Finally, Section VI presents the experimental results of the automatic generation of a 3D human model.

II. PRE-PROCESSING OF 3D MODELING

A. Silhouette Extraction and Feature Extraction

Multi-view images are captured from several angles using multiple calibrated cameras. General background subtraction in the CIELAB color space and a standard chroma-key technique are used to identify the background pixels. To maintain a consistent environment, we use controlled lighting conditions and a static background. This consistent environment allows the general background subtraction technique to provide a good segmentation result.
In addition, appropriate threshold values and the characteristics of the Lab color space help distinguish between foreground and background. A silhouette curve is extracted by following an 8-connected pixel chain along the border of the foreground. The objective of feature extraction is to locate each human part, such as the head, neck, arms, legs, and torso, of the individual in the multi-view images. The extracted feature points are used to estimate the joint positions of the 3D model. To achieve robust feature extraction over a wide range of body shapes, sizes, and clothing, we constrain the initial pose and the clothing worn: we use a pre-specified pose, as shown in Fig. 2, and clothing that leaves both the armpits and the crotch visible.
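The background subtraction described above can be sketched as a simple per-pixel distance test in Lab space. This is a minimal sketch, not the authors' implementation: it assumes the images have already been converted to CIELAB and models the static background as one Lab image; the threshold value is an illustrative assumption.

```python
import numpy as np

def segment_foreground(lab_image, lab_background, threshold=12.0):
    """Classify pixels as foreground by their CIELAB distance from a
    static background model (a simplified chroma-key sketch).

    lab_image, lab_background: (H, W, 3) float arrays of L*, a*, b* values.
    Returns a boolean (H, W) foreground mask.
    """
    # Euclidean distance in Lab space roughly tracks perceptual difference,
    # which is why the paper segments in CIELAB rather than RGB.
    dist = np.linalg.norm(lab_image - lab_background, axis=2)
    return dist > threshold
```

The resulting mask would then be traced along its 8-connected border to obtain the silhouette curve used in the following steps.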

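One way to locate the extremum feature points (head, hands, feet) mentioned above is to scan the ordered boundary pixel list for points whose distance from the silhouette centroid is a local maximum. This is a hedged sketch under simplifying assumptions (a closed, ordered contour and a T-pose-like shape), not the exact procedure of [11]:

```python
import numpy as np

def find_extremum_points(contour, window=5):
    """Locate candidate extremum points (e.g. head, hands, feet tips) on a
    closed silhouette contour, as local maxima of centroid distance.

    contour: (N, 2) array of boundary pixels, ordered along the border.
    Returns the indices of the local-maximum points.
    """
    pts = np.asarray(contour, dtype=float)
    centroid = pts.mean(axis=0)
    dist = np.linalg.norm(pts - centroid, axis=1)
    n = len(pts)
    peaks = []
    for i in range(n):
        # Compare against neighbours in a circular window along the contour;
        # the small tolerance rejects flat regions.
        neighbours = [dist[(i + k) % n] for k in range(-window, window + 1) if k != 0]
        if dist[i] > max(neighbours) + 1e-9:
            peaks.append(i)
    return peaks
```

Concave features such as the crotch and armpits would be found analogously as local minima, or as points of large turning angle along the contour.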
The algorithm for extracting feature points [11] from the frontal upper, frontal lower, and side binary images is presented in Fig. 2. Given a foreground mask, an edge pixel list is extracted along the foreground boundary in a clockwise direction. We traverse the edge pixel list to locate extremum points that correspond to the head, hands, and feet, as shown in Fig. 2(a), and to identify feature points located at large changes in shape and pose; those points correspond to the crotch, armpits, and elbows, as shown in Fig. 2(c).

Figure 2. Segmented foreground images: (a) front of T-pose, (b) side, (c) front of N-pose.

B. 3D Volumetric Reconstruction

We reconstruct a 3D volumetric model of the individual both to search for the 3D joint positions and as a reference for transferring the appearance of the generic model to the shape of the individual through a 3D-to-3D appearance mapping. We use the photo-consistent scene recovery method [13] for reconstruction of the 3D volumetric model. This method solves many problems of previous stereo-based and volumetric approaches by introducing a self-constrained greedy-style optimization technique based on a probabilistic shape photo-consistency measure.

III. JOINT POSITIONING

We use a kinematic joint structure with 21 joints, including the head, neck, spine, root, shoulders, elbows, wrists, hips, knees, ankles, head-end, hand-ends, and foot-ends. Figure 3 depicts the sweep-based 3D human model used as the generic 3D model in this paper, and the joint-skeleton structure of the generic humanoid model is illustrated in Fig. 4.

Figure 3. Sweep-based 3D human model.

Figure 4. Joint-skeleton structure of the 3D generic model.

To estimate each joint position, landmarks (approximate 3D coordinates of each joint) are defined by estimating the initial joint positions from the feature points described in Section II; an evaluation process is then applied to the initial joint positions using a weight function that combines the configuration ratios of the template's joint structure with anthropometric information. For the initial joint positions, we obtain 3D feature points corresponding to the 2D feature points extracted in Section II through a 2D-to-3D linear mapping between the silhouette images and the reconstructed 3D volumetric model. Most of the 3D feature points give only approximate x-y coordinates of the joints, because the joints lie inside the 3D model. For the z coordinate of the 3D feature points, we use 3D cut-planes on the reconstructed 3D volumetric model. First, we set a 3D cut-plane on each feature point. The orientation of each cut-plane is determined by the orthogonal directions between two neighboring feature points and the corresponding body part of the 3D volumetric model. We then find the voxels intersected by each cut-plane, and the centroid of the intersected voxels is defined as the initial joint position. Figure 5 depicts the 3D cut-planes on the 3D feature points and the initial joint positions obtained from the centroids of the cut-planes.

Figure 5. Cut-planes and initial joint positions on the reconstructed 3D volume model.

Anthropometric information, the measurement of the human body, is available as statistical data on the distribution of body dimensions in a population [14]. We use this anthropometric information in our evaluation function. The goal of the evaluation function is to evaluate and refine the initial joint positions using the kinematic joint-skeleton features of the generic 3D model and the anthropometric information. The evaluation function is a weighted combination of three terms: the initial joint position, the distortion ratio between the template model and the initial joint position, and the distortion ratio between the anthropometric information and the initial position, with one weight each for the initial joint position, the template model, and the anthropometric feature at the joint. The weight of the initial joint position is defined as the reliability level of the 3D feature points, and the weights must sum to 1.

After evaluating the initial joint positions, we solve the forward kinematics with the newly evaluated joint positions by computing the link lengths, the global and local positions, and the rotation of each joint from the estimated joint positions. Solving the forward kinematics provides estimates of the joint positions and rotations for the several poses of the individual acquired from the multi-view cameras.

IV. TRANSFER OF APPEARANCE AND SWEEP SURFACE

We use a sweep-based 3D human model that can be deformed simply by changing joint angles and animated with motion-capture data; its sweep surfaces represent time-variant star-shaped cross-sections using a scalar radius function. To create an animatable 3D personalized model of an individual captured from multi-view cameras, we transfer the appearance of the template model to the reconstructed 3D volumetric model of Section II, based on the joint-skeleton structure extracted in Section III. The reconstructed 3D volumetric model is called the target model. Since the mesh of the template model is bound to its sweep surfaces, we fit the appearance of the template model to the shape of the target model by controlling the parameters of the sweep surfaces.

A. Global Fitting

Global fitting of the template model starts from the joint-skeleton structure described in Section III, called the target joint-skeleton structure. Position fitting of each template joint to the corresponding target joint is performed first, as shown in Fig. 6. After fitting the positions, the forward kinematic structure resolves the connectivity, the link length between each pair of joints, and each joint's orientation. At this point, each sweep surface of the body corresponding to a joint is transferred by fitting that joint.

Figure 6. Joint-skeleton structure of the template and target model.
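The bookkeeping part of global fitting (snapping template joints onto target joints, then recovering link lengths and local offsets so that forward kinematics reproduces the pose) can be sketched as follows. The parent table covers only part of the 21-joint skeleton and all names are illustrative assumptions, not the authors' data structures:

```python
import numpy as np

# Hypothetical, simplified parent table for part of the 21-joint skeleton.
PARENTS = {
    "root": None,
    "spine": "root",
    "neck": "spine",
    "head": "neck",
    "l_shoulder": "neck",
    "l_elbow": "l_shoulder",
    "l_wrist": "l_elbow",
}

def fit_skeleton(target_joints, parents=PARENTS):
    """Derive link lengths and local offsets from target joint positions.

    target_joints: dict joint -> (3,) global position.
    Returns (link_lengths, local_offsets) dicts.
    """
    lengths, offsets = {}, {}
    for joint, parent in parents.items():
        if parent is None:
            offsets[joint] = np.asarray(target_joints[joint], float)
            continue
        offset = (np.asarray(target_joints[joint], float)
                  - np.asarray(target_joints[parent], float))
        offsets[joint] = offset                      # local offset (rest pose)
        lengths[joint] = float(np.linalg.norm(offset))  # link length
    return lengths, offsets

def forward_kinematics(offsets, parents=PARENTS):
    """Recompute global joint positions from local offsets (identity pose)."""
    out = {}
    def solve(joint):
        if joint in out:
            return out[joint]
        parent = parents[joint]
        pos = offsets[joint] if parent is None else solve(parent) + offsets[joint]
        out[joint] = pos
        return pos
    for joint in parents:
        solve(joint)
    return out
```

Running `forward_kinematics` on the fitted offsets reproduces the target joint positions, which is the consistency the global fitting step relies on before the sweep surfaces are transferred.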
In this paper, we use star-shaped 2D cross-sections that approximate the cross-sectional shapes of various human body parts [12]. Figure 7 shows the cross-sections of the whole body and of the left arm of the template model. When a star-shaped cross-sectional closed curve of variable size moves under rotation and translation, it generates a sweep surface by interpolating the cross-sections (detailed in [12]).

Figure 7. Star-shaped cross-sections of the template model.

Joint fitting, which fits the position, orientation, and link length of a joint, changes the rotation and orientation of the related cross-sections, which are called keyframes. Changing the keyframes deforms the shape of the sweep surface; thus, sweep fitting follows joint fitting. The results of joint and sweep fitting are shown in Fig. 8.

Figure 8. Joint/sweep fitting.

B. Local Fitting

For local fitting, we fit the cross-sectional shape of a sweep surface to the corresponding cross-section of the reconstructed 3D volumetric model by modifying the scalar radius function. After joint fitting, each cross-section of the target model is computed by cutting the 3D volumetric model of the corresponding body part with planes. The center of each cross-section is computed, and the corresponding radii from the center to the boundary voxels of the cross-section are sampled. The new radius function for local fitting is computed from the ratio between the corresponding radii of the template and target models. We assign a per-feature weight to the new radius function to account for the characteristics of each body part of the template model and to compensate for drawbacks of the reconstructed 3D volumetric model, such as occlusions and specular artifacts.
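The radius-function update for local fitting can be sketched as a weighted blend of the template radii toward the per-angle target/template ratio. This is a minimal sketch under the assumption that template and target radii are sampled at matching angles; the exact weighting scheme of the paper is not specified, so the blend below is illustrative:

```python
import numpy as np

def fit_radius_function(template_radii, target_radii, weight=1.0):
    """Blend template cross-section radii toward the target radii.

    template_radii, target_radii: (K,) radii sampled at matching angles
    around the cross-section centre.
    weight: per-body-part confidence in the reconstructed target
    (0 keeps the template shape, 1 adopts the target ratio fully);
    lower weights damp unreliable regions, e.g. occluded or specular voxels.
    Returns the new (K,) radius samples of the fitted cross-section.
    """
    template_radii = np.asarray(template_radii, float)
    target_radii = np.asarray(target_radii, float)
    ratio = target_radii / template_radii        # per-angle difference ratio
    blended = 1.0 + weight * (ratio - 1.0)       # weighted toward the target
    return template_radii * blended
```

With `weight = 1` the fitted cross-section matches the target exactly; with `weight = 0` it keeps the template shape unchanged.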

After fitting the joints and cross-sections to the target model, a 3D geometric mesh model of the individual is constructed. Figure 9 shows the results of the cross-section fitting: Fig. 9(a) shows the cross-sections of the template model, Fig. 9(b) shows the cross-sections of the template model overlaid on the 3D volumetric (target) model, and Fig. 9(c) shows the fitted cross-sections after local fitting.

Figure 9. Cross-sections of (a) the template model, (b) the 3D volumetric model, and (c) the 3D volumetric model after local fitting.

V. POST-PROCESSING OF 3D MODELING

The 3D model transferred from the template model in Sections III and IV is an articulated 3D mesh model that is deformable and animatable. However, displacements exist between the bound vertices and the sweep surfaces. Since the displacement of the constructed 3D mesh model comes from the template model, we can adjust those vertices to the surface of the target model in detail. For the whole body, we obtain a single texture map by back-projecting and integrating all overlapping images from the multi-view cameras. Integration of the texture map is based on the approximate 3D shape information of the reconstructed model. The texture map is depicted in Fig. 10.

Figure 10. Color texture map of the template and target model.

VI. EXPERIMENTAL RESULTS

The algorithm presented in this paper was evaluated in several sets of experiments with multi-view images of people of different heights and body types, in different poses such as the T-pose and A-pose. Figure 11 shows the results of our 3D model construction algorithm. Figure 11(a) shows the frontal image among the multi-view images acquired from several individuals, Fig. 11(b) shows the features in a segmented image, and Fig. 11(c) shows the reconstructed 3D volumetric model. In addition, Fig. 11(d) shows the joint extraction by the cut-planes on the landmarks of each body part, Fig. 11(e) shows the joint and sweep transfer, and Fig. 11(f) shows the final constructed 3D model of an individual. Figure 12 illustrates snapshots of the deformation and animation of the constructed 3D human model. Figure 13 shows the evaluation of the reconstructed mesh model: we projected the reconstructed mesh model onto the silhouette boundary images and measured the difference between the silhouette boundary (ground truth) and the projected boundary of the reconstructed mesh model. Our automatic construction method for a 3D individual human model runs within 40 s on a 3.5 GHz Intel Core Duo with 8 GB of RAM. The most time-consuming step is reconstructing the 3D volumetric model from the multi-view images.

VII. CONCLUSION

We have presented a fully-automated method for constructing an animatable 3D mesh model of individual people. The 3D human model is generated by transferring the joint-skeleton structure and appearance of a generic 3D model to an individual 3D volumetric model reconstructed from multi-view images. We automatically estimate the joint positions of an individual in multi-view images by computing a weight function that combines the kinematic joint-skeleton structure of the generic 3D human model with anthropometric information. To create a deformable and animatable 3D model of an individual, we present a transfer method for a sweep-based 3D human model. The sweep surfaces, generated by interpolating swept star-shaped cross-sectional closed curves that approximate the body's cross-sections, bind the vertices of the 3D generic model. By controlling the size and orientation of the one key ellipse assigned to each joint, we can deform the sweep surfaces, and the vertices of the human model bound to the sweep surfaces follow the change.
We can generate an animatable 3D human model by transferring the appearance and sweep surfaces of the sweep-based 3D human model to a 3D volumetric model reconstructed from 2D silhouettes. In contrast to other methods, such as user-defined 3D model generation, appearance-only deformation, or separately modeling an individual 3D mesh and then skinning it for animation, our technique performs all of these steps automatically. In future work, we will investigate in detail the optimal radius function of the cross-sectional curve between the template and target models for local fitting. In addition, we will use the customized model to track and capture the motion of individual subjects.

Figure 11. Experimental results of 3D modeling of a human subject.

Figure 12. Deformation and animation of a constructed 3D personalized model: (upper) its joints and appearance, and (lower) its mesh model.

Figure 13. Comparison of the silhouette and the projected boundary.

ACKNOWLEDGMENT

The research was supported by the strategic technology development program of MSIP/KEIT [10047093, 3D Content Creation and Editing Technology Based on Real Objects for 3D Printing].

REFERENCES

[1] J. Carranza, C. Theobalt, M. Magnor, et al., "Free-Viewpoint Video of Human Actors," ACM Trans. Graph., vol. 22, no. 3, 2003, pp. 569-577.
[2] Cyberware, http://www.cyberware.com
[3] A. Hilton, D. Beresford, T. Gentils, et al., "Whole-body Modeling of People from Multi-View Images to Populate Virtual Worlds," The Visual Computer, vol. 16, no. 7, 2000, pp. 411-436.
[4] S. Weik, "A Passive Full Body Scan using Shape from Silhouette," Proc. ICPR '00, 2000, pp. 99-105.
[5] C. Theobalt, E. Aguiar, M. Magnor, et al., "Marker-free Kinematic Skeleton Estimation from Sequence of Volume Data," Proc. ACM Virtual Reality Software and Technology '04, 2004, pp. 57-64.
[6] C. K. Quah, A. Gagalowicz, R. Roussel, et al., "3D Modeling of Humans with Skeleton from Uncalibrated Wide Baseline Views," Proc. CAIP '05, 2005, pp. 379-389.
[7] I. Baran and J. Popovic, "Automatic Rigging and Animation of 3D Characters," ACM Trans. on Graphics, vol. 26, no. 3, 2007, pp. -.
[8] S. I. Park and J. K. Hodgins, "Capturing and Animating Skin Deformation in Human Motion," ACM Trans. on Graphics, vol. 25, no. 3, 2006, pp. 881-889.
[9] D. E. Hyun, S. H. Yoon, M. S. Kim, et al., "Modeling and Deformation of Arms and Legs based on Ellipsoidal Sweeping," Proc. PG '03, 2003, pp. 204-212.
[10] D. E. Hyun, S. H. Yoon, J. W. Chang, et al., "Sweep-based Human Deformation," The Visual Computer, vol. 21, no. 8, 2005, pp. 542-550.
[11] S. J. Lim, H. B. Joo, H. W. Kim, et al., "Automatic Rigging of 3D Human Models," Proc. FCV '10, 2010, pp. 223-226.
[12] S. H. Yoon and M. S. Kim, "Sweep-based Freeform Deformation," Proc. EG '06, 2006, pp. -.
[13] H. W. Kim and I. S. Kweon, "Appearance-Cloning: Photo-Consistent Scene Recovery from Multi-View Images," IJCV, vol. 66, no. 2, 2006, pp. 163-192.
[14] Anthropometry, http://www.wikipeida.com