
COMPUTER ANIMATION AND VIRTUAL WORLDS
Comp. Anim. Virtual Worlds 2007; 18: 505-516. Published online 16 July 2007 in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/cav.193

Human hand adaptation using sweeps: generating animatable hand models

By Jieun Lee and Myung-Soo Kim*

*Correspondence to: Prof. M.-S. Kim, Seoul National University, Seoul 151-744, Korea. E-mail: mskim@cse.snu.ac.kr

We introduce a sweep-based shape adaptation algorithm that fits a generic sweep-based hand model to the shape of an individual's hand, presented as a single photograph. The sweep trajectory curves of the generic hand model are modified to interpolate a sequence of keyframes determined by target features, and details of the real hand are transferred to the model by adjusting its sweep displacement map. Palm lines are also acquired, from sketches drawn on the photograph. The bespoke model inherits the fully animatable structure of the generic model. We demonstrate the effectiveness of our sweep-based approach with several examples of reconstructing animatable bespoke hand models. Copyright 2007 John Wiley & Sons, Ltd.

Received: 15 May 2007; Accepted: 15 May 2007

KEY WORDS: hand modeling; shape fitting; shape adaptation; model reconstruction from 2D image

Introduction

Models of an individual human body, face, or hand have significant uses, especially for interaction in a virtual environment. There are many ways of creating 3D models of an individual's body, including photography, video, and range scanning. Hand models are so complicated to deform that it is best to generate animatable models of individual hands by shape fitting, as automatically as possible.

Albrecht et al. [1] created bespoke hand models from photographs using a set of feature points that relate an individual hand to a generic model. However, these positional features are insufficient for reconstructing detailed hand shapes. Taking a similar approach, Rhee et al. [2] automatically extracted joint locations and fitted a skin mesh using knowledge of the surface anatomy of the hand. The resulting models are accurate, but cannot support animation.

We use the sweep-based hand model of Lee et al. [3]. It can be animated, with realistic palm surface and palm line generation, collision detection, and elimination of self-intersections. Using sweep-based shape adaptation, we can therefore acquire a fully animatable, realistic bespoke model of an individual hand.

We start with a photograph of the palm of an actual hand, separated from its background. On this image, the user marks a total of 22 feature points and sketches the palm lines; these control the modification of the generic hand model to match the individual hand. The marking and sketching procedure usually takes less than two minutes, and the bespoke hand model is then generated almost instantaneously.

Our fitting method proceeds in three steps. First, the locations of the joints of the real hand are determined in barycentric coordinates relative to the user-supplied feature points. These joint locations are used as keyframes when interpolating the sweep trajectories; extra keyframes are inserted at the branching points of the fingers and thumb, marked in the input image. Using these keyframes, the sweep trajectories of the generic hand model are adjusted to match the real hand. Second, the displacement value of each mesh vertex from its sweep trajectory is adjusted to conform to the surface details of the real hand, largely by silhouette matching.
Finally, user-drawn palm lines are projected on to the modified generic model to reproduce the specific palm lines of the real hand. Palm lines crease when the fingers bend, and the resulting patterns are characteristic of an individual hand.

The main contributions of this paper can be summarized as follows:

- Sweep-based shape adaptation allows a bespoke hand model to be animated.

- An individual hand model is acquired using only one photograph.
- A bespoke hand model is generated almost instantaneously once the 22 feature points and palm lines have been marked on the input image; the marking procedure usually takes less than two minutes.

Previous Work

There has been a great deal of work on creating person-specific models of body parts from photographs, image sequences, or range-scanned data [1,2,4-7], but most of the models created by these processes cannot be animated. If animation is required, fitting a generic model that carries animation structures is the most common approach. Using range-scan data, Allen et al. [4] and Seo et al. [5] generated animatable whole-body models, and Kähler et al. [6] generated head models. Albrecht et al. [1] created individual hand models from photographs, using a set of feature points to establish correspondence between the photograph and a generic model, and then transforming the generic hand model with a radial basis function. All components of the generic model, such as the skeleton, muscles, and skin, are transferred, and the resulting hand model is instantly animatable. However, their technique does not capture the details of an individual hand. The sweep-based approach to human modeling [3,8] that we use in the current work makes it straightforward to represent surface details using displacement maps.

Rhee et al. [2] constructed bespoke hand models using features automatically recognized in a photograph of a palm. They extracted palm lines and finger creases from the image using tensor voting, and used these features to model the surface anatomy of the hand. Joint locations are determined from the surface anatomy, and skin vertices are created by relating the contours of the model to the silhouette of the image. The resulting hand models are accurate but cannot be animated. Our approach requires the marking of 22 feature points and the sketching of palm lines; although this procedure is not automatic, it takes less than two minutes.

Biologically meaningful landmarks assist in fitting a generic model to an individual anatomy [9]. For a face model, the positions and contours of the eyes, nose, lips, and ears are often used. In creating a hand model, the locations of the fingertips and major creases are important landmarks [1,2], which we use.

Sweep-Based Shape Adaptation

We now show how to fit a generic sweep-based model [3,8,10,11] to a simple target shape. We start with a generic model represented by sweeps; a target shape represented by photographs or range-scanned data; and user-specified features that establish the correspondence between the generic model and the target shape. The fitting process has two main steps: fitting the sweep trajectories and fitting the sweep displacements.

Figure 1 illustrates the fitting process in two dimensions. A stylized generic model, a target shape, and a small set of features are shown in Figure 1(a). Figure 1(b) shows how the sweep trajectory curves are interpolated. We project the feature points of the generic model on to the sweep trajectory and retrieve the sequence of their time parameters {t1, t2, t3, t4}. Then the sequences of key positions {P1, P2, P3, P4} and key orientations {Q1, Q2, Q3, Q4} are computed from the features of the target shape. We refer to <Pi, Qi, ti> as a feature-determined keyframe. The positional and orientational curves of the target sweep are generated by interpolating the sequence of feature-determined keyframes (see Figure 1(b)).
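To make this step concrete, here is a minimal sketch of trajectory fitting from feature-determined keyframes. It is our own illustration rather than code from the paper: piecewise-linear interpolation stands in for the paper's smooth positional curves, and all numeric values are invented.

```python
import numpy as np

def fit_trajectory(key_times, key_positions):
    """Return a positional curve P(t) that interpolates the
    feature-determined keyframes <P_i, t_i>.  Piecewise-linear
    interpolation stands in for the paper's smooth curves."""
    key_times = np.asarray(key_times, dtype=float)
    key_positions = np.asarray(key_positions, dtype=float)  # shape (n, 3)

    def curve(t):
        # Interpolate each coordinate independently at parameter t.
        return np.array([np.interp(t, key_times, key_positions[:, k])
                         for k in range(key_positions.shape[1])])

    return curve

# Time parameters t_i, recovered by projecting the generic-model
# features on to the generic sweep trajectory (invented values).
t_keys = [0.00, 0.35, 0.70, 1.00]
# Key positions P_i, computed from the target-shape features.
P_keys = [[0.0, 0.0, 0.0], [1.1, 0.2, 0.0],
          [2.0, 0.5, 0.1], [2.8, 0.6, 0.1]]

target_curve = fit_trajectory(t_keys, P_keys)
print(target_curve(0.5))  # a point on the fitted sweep trajectory
```

The key orientations Qi would be interpolated analogously, for instance by spherical linear interpolation between successive key orientations.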
Finally, we substitute the displacement values measured on the target shape for the sweep displacement parameters d_i of the generic model, using the same time parameters t_i (see Figure 1(c)).

Figure 1(d) shows what happens when the trajectory curves are instead interpolated with chord-length parametrization rather than the feature-determined parameters. The resulting model has the same shape as the target, but the features no longer correspond consistently, so the result does not represent the intended shape fitting. Moreover, when sweep-based models are animated, the time parameters of the sweeps are usually used to control the animation of the corresponding parts of the model, and the animation results would be quite different for the models of Figure 1(c) and (d): a set of animation controls designed for the generic model could not be applied to the fitted model without modification. The selection of suitable time parameters for the sweeps is therefore important.

Figure 1. Sweep-based shape adaptation: (a) a sweep-based generic model (left) and a target shape (right), showing a pair of corresponding features; (b) sweep trajectory fitting; (c) sweep displacement fitting; and (d) sweep trajectory fitting using chord-length parametrization.
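The failure mode of Figure 1(d) can be seen by computing chord-length parameters for the same key positions; they generally differ from the feature-determined parameters, so animation controls keyed to the generic model's time parameters would act on the wrong portions of the fitted sweep. Again, this is only an illustrative sketch with invented values.

```python
import numpy as np

# The same invented key positions P_i as in the previous sketch.
P = np.array([[0.0, 0.0, 0.0], [1.1, 0.2, 0.0],
              [2.0, 0.5, 0.1], [2.8, 0.6, 0.1]])

# Chord-length parametrization: parameters proportional to the
# cumulative distance between consecutive key positions.
seg_lengths = np.linalg.norm(np.diff(P, axis=0), axis=1)
t_chord = np.concatenate([[0.0], np.cumsum(seg_lengths)]) / seg_lengths.sum()

t_feature = np.array([0.00, 0.35, 0.70, 1.00])  # feature-determined
print(t_chord)    # generally differs from t_feature
print(t_feature)
```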

Generic Hand Model

We use the sweep-based hand model of Lee et al. [3] as our generic model. It forms the fundamental shape of a hand with five sweeps, which run along the skeleton from the wrist to the tips of the fingers and thumb, as shown in Figure 2(b). Palm deformation is controlled by a freeform surface which is updated after each sweep-based deformation (see Figure 2(c)), and the palm lines are represented by displacements from the palm surface (see Figure 2(d)).

A mesh vertex is bound to a sweep by a time parameter and a displacement vector. When the user changes the joint angles to generate a new pose, the sweeps reflect the changes, and all the vertices bound to each sweep are reconstructed relative to its trajectory. The reconstructed results of the individual sweeps are then blended across the palm and the back of the hand using the vertex-to-sweep weights. A generic model that is well constructed in terms of binding and blending can be animated realistically. Binding and blending the vertices is a non-trivial task for a model as complicated as a human hand, but our sweep-based approach makes it relatively easy compared with conventional techniques.
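The binding and reconstruction just described can be summarized in the following toy sketch. It is our own illustration, not the authors' implementation: the sweep frame function, the bindings, and the weights are all invented.

```python
import numpy as np

def reconstruct(bindings, sweeps):
    """Reconstruct mesh vertices after a pose change.

    bindings: per-vertex list of (sweep_id, t, local_disp, weight) tuples.
    sweeps:   maps sweep_id to frame(t), returning the (origin, 3x3
              rotation) of the deformed sweep at time parameter t.
    Vertices bound to several sweeps are blended with their
    vertex-to-sweep weights, as across the palm and back of the hand.
    """
    verts = []
    for binds in bindings:
        pos = np.zeros(3)
        for sweep_id, t, local_disp, weight in binds:
            origin, rot = sweeps[sweep_id](t)
            # The displacement vector is expressed in the sweep's local frame.
            pos += weight * (origin + rot @ np.asarray(local_disp, float))
        verts.append(pos)
    return np.array(verts)

def toy_sweep(t):
    """A toy deformed sweep: straight trajectory, twisting about z."""
    c, s = np.cos(0.5 * t), np.sin(0.5 * t)
    return np.array([t, 0.0, 0.0]), np.array([[c, -s, 0.0],
                                              [s,  c, 0.0],
                                              [0.0, 0.0, 1.0]])

bindings = [[(0, 0.3, (0.0, 0.2, 0.0), 1.0)],            # single binding
            [(0, 0.8, (0.0, 0.2, 0.0), 0.6),             # blended binding
             (0, 0.8, (0.0, -0.2, 0.0), 0.4)]]
print(reconstruct(bindings, {0: toy_sweep}))
```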

Figure 2. Generic sweep-based hand model and input hand image: (a)-(e) the sweep-based hand model: (a) hand skeleton; (b) control sweeps; (c) palm-control surface; (d) palm lines; and (e) a deformed hand. (f) and (g) the hand image for fitting: (f) input photograph of a male hand, with the 22 feature points (dots) and palm lines (black and white curves) marked by the user; and (g) the generic hand model with the 22 corresponding feature points (black dots).

Hand Features

We use a photograph of a hand with the fingers spread, such as that in Figure 2(f), as the input image. The user marks 22 feature points and draws the palm lines on this image, as shown in Figure 2(f). Two feature points lie at the most dented parts of the wrist silhouette. Five features are located at the tips of the fingers and thumb, and four in the valleys between the fingers. The creases of the fingers and thumb characterize the shape of a hand, and the medial position of each crease becomes a feature point. Finally, we include two extra points from the silhouette: one marks the protuberance near the thumb MCP joint (see Figure 2(f)), and the other is the trace of the palmar distal crease of the little finger on the silhouette. Table 1 lists all the feature points input by the user, with their locations. These features can easily be distinguished without anatomical knowledge.

Corresponding feature points, shown in Figure 2(g), are specified on the generic model, which is placed in the same stretched pose as the real hand. The 22 feature vertices on the generic model are selected in the same manner as the corresponding features are marked on the photograph.

Palm lines are significant markers of the shape of a hand and also discriminate between individual hands, so we use the specific palm lines of the subject to achieve a more realistic result. Palm-line vertices are computed from the palm lines drawn on the input photograph.

The hand region in the input photograph has to be separated from the background to obtain the silhouette of the target hand. We do this using the Magic Wand tool in Adobe Photoshop [12], which selects a region by color similarity, and we fill the background with a predefined color to distinguish the hand region.
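Once the background has been filled with a predefined color, testing whether a pixel belongs to the hand reduces to a color-distance check. A minimal sketch follows (our own illustration; the background color and tolerance are invented):

```python
import numpy as np

BG_COLOR = np.array([0, 255, 0], dtype=float)  # predefined background fill
TOLERANCE = 30.0                               # invented color tolerance

def hand_mask(image):
    """Boolean mask of hand pixels: True where a pixel's RGB color is
    far enough from the predefined background color."""
    dist = np.linalg.norm(image.astype(float) - BG_COLOR, axis=-1)
    return dist > TOLERANCE

demo = np.zeros((2, 2, 3), dtype=np.uint8)
demo[:] = (0, 255, 0)          # background
demo[0, 0] = (210, 170, 150)   # a skin-colored pixel
print(hand_mask(demo))         # [[ True False] [False False]]
```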

Skeleton Fitting

Arrangement of Features

Our skeleton-fitting method works in a two-dimensional projection space corresponding to the plane of the photograph. The first step in skeleton fitting is therefore to create an appropriate projection of the generic hand, as shown in Figure 3(a). A virtual image plane is positioned in front of the generic model, and the feature points and joints of the generic model are projected orthogonally on to it. At the same time, the perpendicular distance between each joint and the plane is recorded for later restoration. We then transform the feature points of the real hand so that the corresponding features at the tip of the middle finger and the medial position of the wrist (see A and A', and B and B', in Figure 3(a)) coincide. Figure 3(b) shows the features of the generic hand model and of a real hand superimposed in the same image plane.

Table 1. Feature points and their locations in a hand:

Feature point   Location
WRTH            Wrist point on the thumb side
WRFG            Wrist point on the little finger side
VL00            Protuberance of the thumb
VL01            Valley between thumb and index finger
VL12            Valley between index finger and middle finger
VL23            Valley between middle finger and ring finger
VL34            Valley between ring finger and little finger
VL44            Trace of the palmar distal crease of the little finger in the silhouette
C1TH            Mid-point of the thumb IP crease
C1F1            Mid-point of the index finger PIP crease
C1F2            Mid-point of the middle finger PIP crease
C1F3            Mid-point of the ring finger PIP crease
C1F4            Mid-point of the little finger PIP crease
C2F1            Mid-point of the index finger DIP crease
C2F2            Mid-point of the middle finger DIP crease
C2F3            Mid-point of the ring finger DIP crease
C2F4            Mid-point of the little finger DIP crease
TPTH            Tip of the thumb
TPF1            Tip of the index finger
TPF2            Tip of the middle finger
TPF3            Tip of the ring finger
TPF4            Tip of the little finger

Joint Locations

Using the generic model, we compute the barycentric coordinates of each projected joint with respect to the three closest features in the image plane. We then estimate the locations of the joints in the image of the real hand by placing them at the same barycentric coordinates with respect to the three corresponding features. For example, the wrist joint of the generic model has barycentric coordinates (a, b, c) relative to its three neighboring features WRTH_m, WRFG_m, and VL23_m:

Wrist_m = a · WRTH_m + b · WRFG_m + c · VL23_m

The position of the wrist joint of the real hand is then computed using the same barycentric coordinates (a, b, c) and the three corresponding features WRTH_i, WRFG_i, and VL23_i:

Wrist_i = a · WRTH_i + b · WRFG_i + c · VL23_i

All the joint positions of the real hand can be obtained in the same way; Table 2 lists the three features used to compute the position of each joint. To lift the joint positions of the real hand from the virtual image plane into three-dimensional space, we add back the perpendicular distances recorded in Section 'Arrangement of Features'. The orientation of each joint is determined by the direction to the next joint and is represented as a frame relative to the previous joint [3]. Finally, the kinematic structure of the skeleton of the individual hand model is created from the new joint positions and orientations.
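A minimal sketch of this barycentric transfer follows. It is our own example: the 2D coordinates are invented, and only the feature names are taken from Table 1.

```python
import numpy as np

def barycentric_2d(p, a, b, c):
    """Barycentric coordinates of 2D point p w.r.t. triangle (a, b, c)."""
    M = np.array([[a[0], b[0], c[0]],
                  [a[1], b[1], c[1]],
                  [1.0,  1.0,  1.0]])
    return np.linalg.solve(M, np.array([p[0], p[1], 1.0]))

def transfer_joint(joint_m, feats_m, feats_i):
    """Transfer a projected generic-model joint to the real-hand image
    by reusing its barycentric coordinates (a, b, c) with respect to
    the three corresponding real-hand features."""
    a, b, c = barycentric_2d(joint_m, *feats_m)
    return (a * np.asarray(feats_i[0], float)
            + b * np.asarray(feats_i[1], float)
            + c * np.asarray(feats_i[2], float))

# Wrist joint from WRTH, WRFG, and VL23 (invented coordinates).
feats_m = ([0.0, 0.0], [4.0, 0.0], [2.5, 6.0])   # generic model, projected
feats_i = ([0.2, 0.1], [4.5, -0.1], [2.8, 6.4])  # real-hand image
wrist_m = [2.0, 1.0]
print(transfer_joint(wrist_m, feats_m, feats_i))  # estimated Wrist_i
```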

Figure 3. Skeleton fitting: (a) and (b) arrangement of the model and the photograph for fitting: (a) projection of the generic-model features and the corresponding features from the photograph on to the virtual image plane; and (b) features from both sources on the virtual image plane. (c) and (d) finding the joint positions: (c) the features and joint positions of the generic model; and (d) the features and joint positions transferred to the image of the real hand.

Figure 3(c) and (d) show the features and joints on both models.

Skin Fitting

Sweep Trajectories

We now construct the five sweeps of the real-hand model using the result of skeleton fitting. The positional and orientational curves of each sweep trajectory are interpolated from the feature-determined keyframes discussed in Section 'Sweep-Based Shape Adaptation'. All the joints, the four valley features (VL01, VL12, VL23, and VL34), and the two extra features (VL00 and VL44) are used as keyframes.

We illustrate trajectory curve interpolation using the middle finger sweep as an example. The left curve in Figure 4(a) shows the original trajectory curve of the generic model. We first compute the time parameters t4 and t5 for VL23_m and VL12_m by projecting them on to the line joining F2MCP_m and F2PIP_m.

Then the interpolation parameters form the sequence {t1, t2, ..., t8}. The right curve in Figure 4(a) shows the joint positions and orientations of the real hand computed by the skeleton-fitting process. The key positions P4 and P5 are determined from the ratios of the projections of VL12_i and VL23_i on to the line joining F2MCP_i and F2PIP_i in the image plane, by finding the positions with the same ratios on the three-dimensional skeleton. The key orientations Q4 and Q5 follow the orientation of the joint F2MCP_i; the other key positions and orientations follow the joint positions and orientations. The sequence of key positions is {P1, P2, ..., P8} and the sequence of key orientations is {Q1, Q2, ..., Q8}.

VL00 and VL01 are incorporated in the sweep trajectory of the thumb in the same way, VL01 and VL12 in the index finger sweep, VL23 and VL34 in the ring finger sweep, and VL34 and VL44 in the little finger sweep. This produces accurate shapes at the roots of the fingers and thumb (compare Figure 4(b) with Figure 2(b)).

Table 2. The three features used to compute the new position of each joint:

Joint        Feature 0   Feature 1   Feature 2
Wrist        WRTH        WRFG        VL23
Thumb CMC    WRTH        WRFG        VL23
Thumb MCP    WRTH        VL00        VL01
Thumb IP     VL00        VL01        C1TH
Thumb Tip    VL00        VL01        TPTH
Index CMC    WRTH        WRFG        VL23
Index MCP    WRTH        WRFG        (VL01+VL12)/2
Index PIP    VL01        VL12        C1F1
Index DIP    VL01        VL12        C2F1
Index Tip    VL01        VL12        TPF1
Middle CMC   WRTH        WRFG        VL23
Middle MCP   WRTH        WRFG        (VL12+VL23)/2
Middle PIP   VL12        VL23        C1F2
Middle DIP   VL12        VL23        C2F2
Middle Tip   VL12        VL23        TPF2
Ring CMC     WRTH        WRFG        VL23
Ring MCP     WRTH        WRFG        (VL23+VL34)/2
Ring PIP     VL23        VL34        C1F3
Ring DIP     VL23        VL34        C2F3
Ring Tip     VL23        VL34        TPF3
Little CMC   WRTH        WRFG        VL23
Little MCP   WRTH        WRFG        (VL34+VL44)/2
Little PIP   VL34        VL44        C1F4
Little DIP   VL34        VL44        C2F4
Little Tip   VL34        VL44        TPF4
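Returning to the keyframe construction above, its two ingredients, recovering an extra keyframe's time parameter by projection on the generic model and placing its key position at the same ratio along the real hand's bone, can be sketched as follows. All coordinates and parameter values are invented for illustration.

```python
import numpy as np

def projection_ratio(p, seg_start, seg_end):
    """Ratio in [0, 1] at which p projects orthogonally on to the
    segment from seg_start to seg_end."""
    ab = np.asarray(seg_end, float) - np.asarray(seg_start, float)
    ap = np.asarray(p, float) - np.asarray(seg_start, float)
    return float(np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0))

# Generic model (2D projection): time parameter t4 for VL23_m, found by
# projecting it on to the segment from F2MCP_m to F2PIP_m.
t_mcp, t_pip = 0.40, 0.55            # invented sweep time parameters
r_m = projection_ratio([1.9, 2.1],   # VL23_m
                       [1.5, 2.0],   # F2MCP_m
                       [2.5, 2.4])   # F2PIP_m
t4 = t_mcp + r_m * (t_pip - t_mcp)   # extra keyframe's time parameter

# Real hand: key position P4 at the corresponding ratio along the
# three-dimensional bone from F2MCP_i to F2PIP_i.
mcp_i = np.array([1.6, 2.0, 0.30])
pip_i = np.array([2.6, 2.5, 0.35])
r_i = projection_ratio([2.0, 2.2], mcp_i[:2], pip_i[:2])  # VL23_i in image
P4 = mcp_i + r_i * (pip_i - mcp_i)
print(t4, P4)
```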

Figure 4. Skin fitting: (a) sweep trajectory curve interpolation for the middle finger sweep; (b) the result of sweep trajectory fitting, with extra keyframes incorporated using the features at the branching positions of the fingers and thumb; (c) the parameters involved in displacement fitting: the left radius L_m and the right radius R_m, measured on the generic model, and the left radius L_i and the right radius R_i, measured on the input photograph; and (d) the skinning result: the feature points, the silhouette, the bespoke skeleton, and the mesh vertices reconstructed by sweeps.

Sweep Displacements

We mimic the details of the real hand by adjusting the sweep displacement parameters, matching the boundary of the generic model's skin surface to the silhouette of the real hand in the image. The displacement of each vertex of the generic model is changed to conform to the corresponding displacement on the silhouette of the real hand. We are effectively guessing the thickness of the real hand, which is necessary because we have only a photograph of its palm.

Figure 4(c) shows the parameters involved in displacement fitting. Suppose a vertex V lies between two consecutive joints JointA_m and JointB_m in the generic model, and that JointA_i and JointB_i are the corresponding joints marked on the input image. The three-dimensional point V_m is the projection of the vertex V on to the sweep trajectory curve. V_i is a two-dimensional intermediate point between JointA_i and JointB_i, obtained from the ratio of the distances between JointA_m, V_m, and JointB_m. We then measure the left radius L_m and the right radius R_m at V_m in the generic model by line-face intersection tests. L_i and R_i are measured at V_i in the input image, in which the boundary of the hand is identified by the transition to the background color, as mentioned above. The new displacement dispV of V is then computed as

ratioV = (R_i/R_m − L_i/L_m) · (θ/π) + L_i/L_m
dispV = ‖V − V_m‖ · ratioV,  for 0 ≤ θ ≤ π,

where θ is the angle between the left direction and the direction of V − V_m. By using two radii rather than a single diameter, we can represent quite fine details.

The resulting mesh is a little bumpy because of the jagged boundary of the real-hand image. We can smooth the mesh using the displacement parameters of the sweeps: the displacement value of each vertex is regulated by the average displacement value of its neighboring vertices.
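A direct transcription of the displacement update and the subsequent smoothing might look as follows. This is a sketch under our reading of the formula above; the numeric values are invented, and the smoothing weight and iteration count are parameters the paper does not specify.

```python
import numpy as np

def fitted_displacement(V, V_m, L_m, R_m, L_i, R_i, theta):
    """New displacement of vertex V: blend the left-radius ratio
    (theta = 0) and the right-radius ratio (theta = pi) by angle."""
    ratio = (R_i / R_m - L_i / L_m) * (theta / np.pi) + L_i / L_m
    return np.linalg.norm(np.asarray(V, float) - np.asarray(V_m, float)) * ratio

def smooth_displacements(disp, neighbors, alpha=0.5, iters=3):
    """Regulate each vertex's displacement value toward the average of
    its neighbors' values, removing bumps caused by the jagged image
    boundary (alpha and iters are invented parameters)."""
    disp = np.asarray(disp, float).copy()
    for _ in range(iters):
        avg = np.array([disp[list(n)].mean() for n in neighbors])
        disp = (1.0 - alpha) * disp + alpha * avg
    return disp

print(fitted_displacement(V=[0.0, 1.2, 0.0], V_m=[0.0, 0.0, 0.0],
                          L_m=1.0, R_m=1.1, L_i=1.3, R_i=1.2, theta=0.4))
print(smooth_displacements([1.0, 1.6, 0.9, 1.1],
                           [[1], [0, 2], [1, 3], [2]]))
```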

The result of skinning after sweep trajectory fitting and displacement fitting is shown in Figure 4(d). It shows the feature points of the bespoke generic model, the feature points of the real hand, and the joint positions. The dark boundary in Figure 4(d) is the silhouette of the real hand from the input photograph. Our feature-driven keyframing method deals well with the disparities between the feature points on the bespoke model and those on the real hand, and between the bespoke hand silhouette and the real hand boundary, producing acceptably accurate results. Figure 4(d) also shows the bespoke skeleton and the mesh vertices reconstructed using sweeps. No inconsistency is apparent between the skeleton and the mesh vertices, and the animation keeps its accuracy when we change the joint angles to generate different poses.

Palm and Palm Line Fitting

Our hand-fitting algorithm consists of two major parts, skeleton fitting and skin fitting, as discussed in the previous sections. We now introduce a further fitting method for generating the palm surface and the palm lines visible on a particular hand. To control palm deformation, Lee et al. [3] generated a freeform surface by interpolating a set of palm vertices, which are updated after each sweep-based deformation. In this paper, the sweeps are automatically constructed from feature-determined keyframes, so we can produce the palm-control surface directly from the palm vertices. Figure 5(a) shows the palm vertices and the palm-control surface that interpolates them.

The palm lines are represented by displacing the palm-line vertices from the palm-control surface into the hand [3]. We project all the vertices on the palm on to the virtual image plane that contains the palm lines drawn by the user (see Figure 2(f)). Vertices close to the user-drawn palm lines are selected as new palm-line vertices; a sketch of this selection appears at the end of this section. But the resulting palm lines are jagged and unnatural because of the coarse mesh, as shown in Figure 5(b). We therefore provide a user interface for adjusting the palm lines, starting from the intermediate result of Figure 5(b). Figure 5(c) shows adjusted palm lines, and Figure 5(d) is a deformed palm showing the appropriate bulges and palm lines for the pose.

Figure 5. Fitting a palm-control surface and palm lines: (a) the palm-control surface of the bespoke hand; (b) palm lines constructed by automatic fitting; (c) user-adjusted palm lines based on (b); and (d) a deformed palm.
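The automatic selection of palm-line vertices can be sketched as follows: project the palm vertices into the image plane and keep those within a threshold distance of the sketched polyline. This is our own illustration; the threshold and all coordinates are invented.

```python
import numpy as np

def point_segment_dist(p, a, b):
    """Distance from 2D point p to the segment from a to b."""
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))

def select_palm_line_vertices(proj_verts, polyline, radius):
    """Indices of projected palm vertices lying within `radius` of the
    user-drawn palm line (a polyline in image coordinates)."""
    pts = [np.asarray(q, float) for q in polyline]
    picked = []
    for i, v in enumerate(np.asarray(proj_verts, float)):
        d = min(point_segment_dist(v, a, b) for a, b in zip(pts, pts[1:]))
        if d <= radius:
            picked.append(i)
    return picked

verts = [[0.1, 0.1], [0.5, 0.52], [0.9, 0.7], [0.4, 0.9]]  # projected
line = [[0.0, 0.0], [0.5, 0.5], [1.0, 0.8]]                # sketched line
print(select_palm_line_vertices(verts, line, radius=0.05))  # -> [0, 1, 2]
```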

Results

Figure 6 shows the construction of several bespoke hand models and their deformations. The rationale for our sweep-based shape adaptation technique is to generate fully animatable hand models; we therefore demonstrate their animations. Figure 6(a) and (b) show models of an adult male's hand and an adult female's hand, respectively; they deform realistically into various poses. Figure 6(c) and (d) show how our sweep-based shape adaptation method copes with the hands of a 4-year-old child and an adult female. Collision detection and the elimination of self-intersections are handled using geometric primitives which are automatically generated from the sweeps and the palm-control surface, in the manner described by Lee et al. [3].

Figure 6. Bespoke hand models and deformation results.

Conclusions

We have presented a sweep-based shape adaptation method that produces animatable bespoke hand models. We transfer the shape of a real hand to a generic model in a way that preserves the animation structure of the latter. The bespoke hand models that we have built can deform into a wide range of poses in real time. The input required to create such a model is no more than a single photograph with simple features marked by the user, so many individual hand models can be acquired easily and rapidly.

Because we use a single photograph, we need to guess the thickness of the hand. In future work, we may use additional photographs of each finger, viewed laterally, to improve this aspect of our technique. We also plan to extend our method of sweep-based shape adaptation to more general shape models and to investigate other methods of shape morphing.

ACKNOWLEDGMENTS

This work was supported by the Korean Ministry of Information and Communication (MIC) under the Program of the IT Research Center on CGVR. An anonymous reviewer gave invaluable comments which were very useful in improving the expository style of this paper.

References

1. Albrecht I, Haber J, Seidel H-P. Construction and animation of anatomically based human hand models. In Proceedings of the 2003 ACM Symposium on Computer Animation, 2003; pp. 98-109.
2. Rhee T, Neumann U, Lewis JP. Human hand modeling from surface anatomy. In Proceedings of the 2006 ACM Symposium on Interactive 3D Graphics and Games, 2006; pp. 27-34.
3. Lee J, Yoon S-H, Kim M-S. Realistic human hand deformation. Computer Animation and Virtual Worlds 2006; 17(3-4): 479-489.
4. Allen B, Curless B, Popović Z. The space of human body shapes: reconstruction and parameterization from range scans. ACM Transactions on Graphics 2003; 22(3): 587-594.
5. Seo H, Cordier F, Magnenat-Thalmann N. Synthesizing animatable body models with parameterized shape modifications. In Proceedings of the 2003 ACM Symposium on Computer Animation, 2003; pp. 120-125.
6. Kähler K, Haber J, Yamauchi H, Seidel H-P. Head shop: generating animated head models with anatomical structure. In Proceedings of the 2002 ACM Symposium on Computer Animation, 2002; pp. 55-63.
7. Lee Y, Terzopoulos D, Waters K. Realistic modeling for facial animation. In Proceedings of SIGGRAPH 95, 1995; pp. 55-62.
8. Hyun D-E, Yoon S-H, Chang J-W, Seong J-K, Kim M-S, Jüttler B. Sweep-based human deformation. The Visual Computer 2005; 21(8-10): 542-550.
9. Noh J-Y, Neumann U. A survey of facial modeling and animation techniques. Technical Report 99-705, Integrated Media Systems Center, University of Southern California, 1998.
10. Coquillart S. A control-point-based sweeping technique. IEEE Computer Graphics and Applications 1987; 7(11): 36-45.
11. Chang T-I, Lee J-H, Kim M-S, Hong S-J. Direct manipulation of generalized cylinders based on B-spline motion. The Visual Computer 1998; 14(5/6): 228-239.
12. Adobe Systems Incorporated. Photoshop. http://www.adobe.com, accessed 15 May 2007.

Authors' biographies

Jieun Lee is a postdoctoral researcher; she received her Ph.D. from Seoul National University in 2007. She received her B.S. degree in Computer Science and Engineering from Ewha Womans University in 1997 and her M.S. degree in Computer Science and Engineering from POSTECH in 1999. She worked at the LG Electronics Institute of Technology as a research engineer from 1999 to 2002. Her fields of specialization are geometric modeling, computer graphics, and multimedia information processing.
Myung-Soo Kim is a Professor in the School of Computer Science and Engineering, Seoul National University. His research interests are in computer graphics and geometric modeling. Prof. Kim received his B.S. and M.S. degrees from Seoul National University in 1980 and 1982, respectively. He continued his graduate study at Purdue University, where he received an M.S. degree in Applied Mathematics in 1985 and M.S. and Ph.D. degrees in Computer Science in 1987 and 1988, respectively. From then until 1998, he was with the Department of Computer Science, POSTECH, Korea.

Prof. Kim serves on the editorial boards of Computer-Aided Design, Computer Aided Geometric Design, Computer Graphics Forum, and the International Journal of Shape Modeling. He has also edited several special issues of journals such as Computer-Aided Design, Graphical Models, the Journal of Visualization and Computer Animation, The Visual Computer, and the International Journal of Shape Modeling. Together with Gerald Farin and Josef Hoschek, he edited the Handbook of Computer Aided Geometric Design (North-Holland, 2002).