Deliverable 5.2 Creation of Population


ABSTRACT

A population of virtual humans is an essential keystone in any performance that involves a number of virtual humans cohabiting within the same virtual environment and with the public. This deliverable describes a method for generating such populations. We first discuss cloning faces from two orthogonal pictures, and then generating populations from a small number of these clones. The problem of generating various body shapes is also discussed. An efficient method for reconstructing 3D heads suitable for animation from pictures comprises a shape modification part and a texture mapping part. The individualised heads serve either to statistically infer the parameters of the multivariate probability distribution characterising a hypothetical population of heads, or as input to a dynamic system for 3D morphing that combines 3D spatial interpolation with powerful 2D texture-image metamorphosis. To realise a full body shape, we construct a database of several body sizes for men and women. Depending on a given face, we select a suitable body, connect the individualised face to it, and animate the full face and body in a virtual world. Various examples of different bodies with different heads are provided.

Document ID: erena D5.2
Type: Deliverable report with files and video material
Status: Final
Version: 1.0
Date: Aug
Author(s): Pierre Beylot, WonSook Lee, Jean-Claude Moussaly, HyeWon Seo, Christian Zanardi, MIRALab, University of Geneva
Director of the research: Professor Nadia Magnenat-Thalmann, MIRALab, University of Geneva
Task: 5.2

Table of Contents

1. Introduction and problematic of representing real looking humans
1.1 Face reconstruction
1.2 Generation of face population
1.3 Body reconstruction
1.4 Outline
2. Capturing data out of photographs: head reconstruction using a generic model & feature points
2.1 Feature detection
2.2 Modification of a generic model
3. Automatic texture mapping
3.1 Texture generation
3.1.1 Image deformation
3.1.2 Multiresolution image mosaic
3.2 Texture fitting
3.3 Cloning Results
3.4 Real-time animation
4. Statistical methods to generate a large population
4.1 Constructing and sampling from a hypothetical population
4.1.1 Inference
4.1.2 Sampling
5. 3D morphing for creating various individuals
5.1 Texture morphing
5.2 Image metamorphosis based on triangulation
5.3 Results of large population
6. Generate various body shapes out of a body database
6.1 Creation of a body
6.2 How to simply create diverse shapes
6.3 Connecting the head to the body
6.4 Changing texture
6.5 Examples of complete virtual humans
7. Conclusion
8. Link with erena Project
9. References

ESPRIT Project 25379

1. Introduction and problematic of representing real looking humans

A population of virtual humans is an essential keystone in any performance that involves a number of virtual humans cohabiting within the same virtual environment and with the public. This deliverable aims at presenting the method we have developed within the erena project in order to provide huge crowds of realistic-looking virtual humans that may be used within performances. Before constructing virtual groups, crowds or populations, we must be able to synthesise realistic figures and move them about in a convincing and efficient way. Animators agree that the most difficult subjects to model and animate realistically are humans, and particularly human faces. The explanation resides in the universally shared (with some cultural differences) processes and criteria not only for recognising people in general, but also for identifying individuals, expressions of emotion and other facial communicative signals, based on the co-variation of a large number of partially correlated shape parameters within narrowly constrained ranges. Though some of this may be amusingly conveyed in a schematic manner by various 2D or 3D animation technologies, the results are easily identified as cartoons by even the most naive observer, while the complex details and subtle nuances of truly realistic representation remain a daunting challenge for the field. Once we have a way of modelling specific individuals, how can we make use of a limited number of reconstructed faces in the automatic generation of a large population of distinct individuals with the same general characteristics? Simple approaches, such as the random mix and match of features, do not take into account local and global structural correlation among facial sizes and structures, distances between features, their dimensions, shapes, and dispositions, and skin complexion and textures.
In this report, we describe an efficient and robust method for individualised face modelling, followed by techniques for generating animation-ready populations in a structurally principled way, based on a number of existing head models and a database of existing body models. We also introduce a rapid dynamic system, which enables 3D polymorphing with smooth texture variation under intuitive control. The problematic of creating realistic human bodies will also be described.

1.1 Face reconstruction

Approaches to the realistic reconstruction of individuals, some of them with a view to animation, include the use of a laser scanner [7], a stereoscopic camera [4], or an active light stripper [8]. There is increasing interest in utilising a video stream [13] to reconstruct heads and natural expressions. These methods, however, are not developed enough to be commercialised (such as in a camera-like device) in the near future, in terms of direct input of data for reconstruction and final animation as output. The animation results by Guenter et al. [2] are impressive, but the input and output processes are not practical for real-time animation. The term real-time animation refers here to the ability to have one or more virtual humans deformed according to facial or body parameters and displayed on the computer screen at more than 10 frames per second in their virtual environment. Real-time performance is important for animation, but it is not necessary that the face reconstruction is also real-time. This process can be done completely offline and then loaded later into the virtual environment. Pighin et al. [3] obtain naturalistic results for animation using several views as input. Their techniques have elements in common with our methods, but the aims are different. For example, we are primarily interested here in morphing between a number of different individuals, while their application is to interpolate between different stages of a single facial model.
Modelling has also been done from picture data [6][9], by detecting features, modifying a given generic model and then mapping texture onto it. Not all of these, however, combine sophisticated and reliable shape deformation methods with seamless, high-resolution texture generation and mapping.

1.2 Generation of face population

Techniques for generating virtual populations have been investigated by DeCarlo et al. [1]. They vary a geometric human face model using random distances between facial points sampled from distributions of anthropometric statistics, but they do not consider real-life face data as input or realistic texture variation. The term morph, short for metamorphosis, has been applied to various computer graphics methods for smoothly transforming geometric models or images. Morphing techniques can be classified into image-based methods and object-space methods. In most cases, the image-based methods are used for 2D morphing and the object-space methods for 3D morphing. Image morphing involves three considerations: feature specification, warp generation, and transition control [20]. These areas relate to the ease of use and quality of results. Feature specification is the most tedious aspect of morphing. Although the choice of allowable primitives may vary, all morphing approaches require careful attention to the precise placement of primitives. Given feature correspondence constraints between both images, a warp function over the whole image plane must be derived. This process, which we refer to as warp generation, is essentially an interpolation problem. Another important problem in image morphing is transition control. If transition rates are allowed to vary locally across in-between images, more interesting animations are possible. The main methods are based on mesh warping [21], field morphing [22], radial basis functions [23], thin plate splines [24], energy minimisation [25], and multilevel free-form deformations [12]. A tradeoff exists between the complexity of feature specification and warp generation. As feature specification becomes more convenient, warp generation becomes more formidable.
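The warp-generation step, interpolating a dense warp from sparse feature correspondences, can be illustrated with a deliberately minimal sketch that is none of the cited methods: a single affine map fitted to one triangle of corresponding feature points. A real morphing tool would fit one such map per triangle of a full triangulation and resample every pixel.

```python
import numpy as np

# One-triangle illustration of warp generation: fit the affine map that
# carries three source feature points onto three target feature points,
# then apply it to any point inside the triangle.

def affine_from_triangles(src, dst):
    """Return a warp p -> A p + b with warp(src_i) == dst_i for 3 point pairs."""
    M = np.array([[x, y, 1.0] for (x, y) in src])
    a = np.linalg.solve(M, np.array([x for (x, _) in dst], dtype=float))
    b = np.linalg.solve(M, np.array([y for (_, y) in dst], dtype=float))
    def warp(p):
        x, y = p
        return (a[0]*x + a[1]*y + a[2], b[0]*x + b[1]*y + b[2])
    return warp
```

Piecing together such maps over a shared triangulation is exactly why a one-to-one triangulation makes warp generation automatic, as exploited for texture morphing later in this report.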
Some methods do not guarantee the one-to-one property of the generated warp functions, while others derive one-to-one warp functions but are hampered by high computational cost. Most morphing methods and applications require manual user intervention to specify matches between given objects. In many cases, significant user involvement is required to achieve a pleasing visual result. It is desirable to eliminate, or at least to limit, this involvement in morphing design, and to have a more automatic process. A work-minimisation approach to image morphing by Gao and Sederberg [26] is capable of automatically creating good image morphs in many cases where the two images are sufficiently similar; good results can be obtained in less than 10 s for 256x256 images. The traditional formulation for image morphing considers only two input images at a time, i.e. the source and target images. Morphing among multiple images, referred to as polymorph, is understood to mean a seamless blend of several images at once. A metamorphosis, or 3D morphing, for example the topology-independent morphing in the short animation film Galaxy Sweetheart [28] in the late 1980s, is the process of continuously transforming one object into another [10][11]. Since there is no intrinsic solution to the morphing problem, user interaction can be a key component of morphing software if the two objects do not share the same structure. Most efforts have been dedicated to the correspondence problem, and we still lack intuitive solutions to control the shape interpolation. The idea of combining 2D-image morphing and 3D-shape morphing has been little exploited. Nevertheless, this seems a natural approach to better results with less effort. Furthermore, were this to be combined with face cloning methods, many user interactions could be avoided.

1.3 Body reconstruction

The modelling and deformation of human body shapes is an important but difficult problem.
The human form is very complex; it comes in a variety of sizes and has two main types (or genders); it includes a skeletal frame that supports muscle, fat and flesh, all enclosed by a skin that can slide, stretch, and fold over this volume. The torso, head, arms, and legs are relatively large, roughly symmetrical collections of articulated components which vary with age, gender and race. Attempting to model and animate such a structure is one of the most difficult and challenging problems in computer graphics. Moreover, since our eyes are especially sensitive to the human figure, computer-generated images must be extremely convincing to satisfy our demands for realism.

Several approaches [31][32][33] have been proposed, but they are not convenient for our problem of producing realistic multiple humans. We are currently using an approach based on implicit surfaces. An implicit surface is a surface consisting of those points p that satisfy an arbitrary implicit function f, f(p) = 0. Because implicitly defined surfaces possess some unique and useful attributes for modelling, such as blending and constraint properties, they have received increasing attention in the design of 3D objects. A particular subset of implicit surfaces, called soft objects (or metaballs, blobs), is becoming a hot topic in computer graphics. Because metaballs join smoothly and gradually, they give rise to realistic, organic-looking shapes, suitable for modelling human bodies, animals and other organic figures, which are very hard to model using traditional geometric methods.

1.4 Outline

In this report, we describe our methods to create individualised faces from just a few photos and to generate a population based on existing virtual humans. Then body modelling and how to build the body database are explained. Fig. 1 contains a flow chart for the generation of a photorealistic virtual population from photos and a database of bodies.

[Flow chart: orthogonal pictures of faces → feature points → texture generation and shape modification of a generic model; sets of feature points → principal component space → sampling; texture morphing; body construction and a database of various body shapes and textures; dynamic system for 3D morphing of faces → individualised virtual human.]

Fig. 1: Generation of populations from picture input: from the two orthogonal photos of an individual, feature points are semi-automatically extracted. The feature points are used both for the texturing process of the virtual face and for the geometrical deformation of the generic face model.
The 3D-morphing system is then used on the database of all available virtual faces to automatically create completely new faces by dynamic morphing. The body is created using a generic database of bodies and simple available transformations. Association of one head with one body creates the final virtual human.
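The implicit-surface formulation mentioned in Section 1.3 can be made concrete with a toy metaball field. The Gaussian falloff, the centres, radii and threshold below are illustrative assumptions for this sketch, not the primitives actually used in our body models.

```python
import math

# Toy metaball field: each ball contributes a smoothly decaying density, and
# the implicit surface is the level set where the summed field equals a
# threshold T, i.e. f(p) = sum_i g_i(p) - T = 0.

def field(p, balls):
    """Summed influence of all metaballs at point p = (x, y, z)."""
    total = 0.0
    for (cx, cy, cz, r) in balls:
        d2 = (p[0]-cx)**2 + (p[1]-cy)**2 + (p[2]-cz)**2
        total += math.exp(-d2 / (r * r))   # smooth radial falloff
    return total

def inside(p, balls, threshold=0.5):
    """True if p lies inside the implicit surface, i.e. field(p) >= threshold."""
    return field(p, balls) >= threshold

# Two overlapping balls blend smoothly: the midpoint between them can lie
# inside the combined surface even though it is outside each ball alone.
balls = [(0.0, 0.0, 0.0, 1.0), (2.2, 0.0, 0.0, 1.0)]
```

This blending behaviour is what makes soft objects attractive for limbs and torsos: adjacent primitives merge into one smooth organic surface without any explicit stitching.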

In Section 2 of this report, we describe a fast and robust method for head-shape modelling based on the extraction of feature points from two orthogonal views. Section 3 presents a fully automatic and robust texture mapping method for realistic-looking virtual humans. In Section 4, based on a representation in terms of a large number of vectors between feature points, principal component analysis is used to discover the statistical structure of an input sample of heads, namely a representation of the data in terms of a reduced number of significant (and independent) dimensions of variability. Section 5 is devoted to a dynamic system for generating heads, including shape and texture details, that interpolates several input heads; it enables the user to visually control mixing ratios with close to real-time calculation, and some experimental results are presented there. In the last section of this report we present different examples of virtual humans created from cloned heads and from a database of virtual human bodies.

2. Capturing data out of photographs: head reconstruction using a generic model & feature points

2.1 Feature detection

2D photos offer cues to the 3D shape of an object. It is not feasible, however, to get 3D co-ordinates for points densely distributed on the head. In most cases, we know the location of only a few visible features such as eyes, lips and silhouettes, key characteristics for recognising people. We call these feature points; their location can be detected either automatically or at least interactively. Our process is semi-automatic: most of the tedious processing is done automatically, and only minor manual adjustments are necessary to complete the models. To reconstruct a photographically realistic head, ready for animation, we detect corresponding feature points on two orthogonal images, front and side, and deduce their 3D positions.
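The combination of the two views into 3D positions can be sketched as follows. This is a minimal illustration: the per-feature lists and the use of None to mark a feature invisible in the front image are assumptions of this sketch, not the report's actual data format.

```python
# Merge per-feature 2D detections into 3D points: the front view supplies
# (x, y), the side view supplies (z, y). The front y is preferred; the side
# y is used only when the feature is missing in the front view.

def combine_views(front_xy, side_zy):
    """front_xy: list of (x, y) or None; side_zy: list of (z, y)."""
    points3d = []
    for f, s in zip(front_xy, side_zy):
        z, y_side = s
        if f is None:              # feature not visible in the front view
            x, y = 0.0, y_side     # fall back to the side y (x unknown -> 0)
        else:
            x, y = f               # keep the front x and, crucially, front y
        points3d.append((x, y, z))
    return points3d
```

Preferring the front y over an average of the two views is what preserves facial asymmetry even though a single side image serves for both sides of the head.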
This information is used to modify a generic model through a geometrical deformation. Feature detection is processed in a semi-automatic way using the structured snake method [9] with some anchor functionality. Fig. 2 depicts an orthogonal pair of normalised images, showing the features detected. Here normalisation signifies locating images in the feature-point space, so that the front and side views of a head have the same height. The feature points are overlaid on the images even though they are located in spaces with different origins and scales from the images. The two 2D sets of position co-ordinates, from front and side views, i.e. the (x, y) and the (z, y) planes, are combined to give a single set of 3D points. Obtaining perfectly aligned and orthogonal views is almost impossible, and this leads to difficulties in determining the (x, y, z) co-ordinates of a point from the (x, y) on the front image and the (y, z) on the side image. Taking the average of the two points often results in an unnatural face shape. Thus we rely mainly on the front y co-ordinate, using the side y only when we do not have the front one. This convention is very effective when applied to almost orthogonal pairs of images. In addition, for asymmetrical faces, this convention allows for retention of the asymmetry with regard to the most salient features, even though a single side image is used in reconstructing both the right and left aspects of the face. A global transformation moves the 3D feature points to the space containing the generic head, which normalises every head.

2.2 Modification of a generic model

We now have a set of about 160 3D feature points. The problem is how to deform a generic model, which has more than a thousand points, to make an individualised smooth surface. One solution is to use the 3D feature points as a set of control points for a deformation.
Then the deformation of a surface can be seen as an interpolation of the displacements of the control points. Free-Form Deformations (FFDs) [14] belong to a wider class of geometric deformation tools. However, FFD has a serious constraint in that the control-point boxes have to be rectangular, which limits how any point of the surface to deform can be expressed relative to the control-point box. Farin [17] extends the natural-neighbour interpolant based on the natural-neighbour co-ordinates, the Sibson co-ordinate system [15] based on Voronoi/Dirichlet and Delaunay diagrams [16] for a scattered-data interpolant, using the support for a multivariate Bezier simplex [18]. He defines a new type of surface with this extended interpolant, called Dirichlet surfaces. Combining FFDs and Dirichlet surfaces leads to a generalised model of FFDs: Dirichlet FFDs, or DFFDs. In the Dirichlet-based FFD approach, any point of the surface to deform that is located in the convex hull of a set of control points in general position is expressed relative to a subset of the control-point set with the Sibson co-ordinate system. One major advantage of this technique is that it removes any constraint on the position and topology of control points. It also removes the need to specify the control-lattice topology [19]. One control point is defined at the position of each surface point, so that any displacement applied to the control point will also be applied to the surface point. Therefore DFFD is used to obtain new geometrical co-ordinates for a modification of the generic head on which the newly detected feature points are situated. All points on this head are located using the feature points as constraint control points for DFFD. Since our feature points include the front and side silhouettes, the convex hull of control points contains most points on a 3D head, but some points can lie outside the convex hull, so we add 27 extra points (shown white in Fig. 2) surrounding the 3D head. The correct shapes of the eyes and teeth are assured through translation and scaling appropriate to the new head. As shown in Fig. 2, this is a rough matching method; it does not attempt to locate all points on the head exactly, in contrast to range data from laser scanners or stereoscopic cameras, which create an enormous number of points. Considering the input data (pictures from only two views), the result is quite respectable.
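DFFD proper relies on Sibson natural-neighbour co-ordinates, which are beyond a short listing. As a much simpler stand-in, the sketch below propagates control-point displacements to mesh vertices with inverse-distance weighting; it only illustrates the idea of sparse feature points driving a dense surface and is not the Dirichlet interpolant described above.

```python
import math

# Crude stand-in for feature-driven deformation: each vertex moves by an
# inverse-distance-weighted blend of the control-point displacements.
# A vertex that coincides with a control point gets its displacement exactly,
# mirroring the DFFD property that control points carry their surface points.

def deform(vertices, controls_old, controls_new, power=2.0, eps=1e-9):
    disps = [(n[0]-o[0], n[1]-o[1], n[2]-o[2])
             for o, n in zip(controls_old, controls_new)]
    out = []
    for v in vertices:
        wsum, dx, dy, dz = 0.0, 0.0, 0.0, 0.0
        for c, d in zip(controls_old, disps):
            dist = math.dist(v, c)
            if dist < eps:             # vertex sits on a control point:
                dx, dy, dz = d         # apply its displacement exactly
                wsum = 1.0
                break
            w = 1.0 / dist ** power
            wsum += w
            dx += w*d[0]; dy += w*d[1]; dz += w*d[2]
        out.append((v[0]+dx/wsum, v[1]+dy/wsum, v[2]+dz/wsum))
    return out
```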
Most important, it greatly limits the size of the data set associated with an individual head, as is necessary to accelerate animation. The problem of how to reduce the size of data from range-data equipment for animation purposes has not been satisfactorily solved. To improve realism, we make use of automatic texture mapping, as described in the next section.

[Figure panels: feature points in 2D; features in 3D; a generic model; an individualised head after DFFD.]

Fig. 2: Modification of a generic head according to feature points detected on pictures. Points on a 3D head are control points for DFFD.

3. Automatic texture mapping

Texture mapping serves not only to disguise the roughness of a shape matching determined by feature-point matching alone, but also to imbue the face with a more realistic complexion and tint. We use information from the set of detected feature points to generate texture automatically, based on the two views. We then obtain appropriate texture co-ordinates for every point on the head using the same image transformation.

3.1 Texture generation

The main criterion is to obtain the highest resolution possible for the most detailed portions. We first connect the two pictures along predefined feature lines using geometrical deformations and a multiresolution technique to avoid visible boundary effects.

3.1.1 Image deformation

We use the front view preponderantly, since it provides the highest resolution for the features. The side view is deformed to be connected to the front view along certain defined feature-point lines on the left-hand side and, flipped over, on the right-hand side. The feature lines are drawn on the front image, as shown in Fig. 3 by the red lines. A corresponding feature line is defined for the side image. We deform the side image so that its feature line coincides with the one on the front view. Image pixels on the right side of the feature line are transformed by the same transform as the line. To get the right part of the image, we use the side image as it is and deform it according to the right-hand red feature line on the front image. For the left part, we flip the side image and deform it according to the left-hand red feature line on the front image. The resulting three images are illustrated in Fig. 4 (a). Here we use a piecewise linear transformation along piecewise feature lines, but higher-degree feature curves could produce a smoother deformation of the side images.

3.1.2 Multiresolution image mosaic

The three images resulting from the deformation are merged using a pyramid decomposition [5] based on the Gaussian operator. We use the common REDUCE (also called shrinking) and EXPAND operators to obtain Gk (the Gaussian image) and Lk (the Laplacian image) and connect the three Lk images at each level along any given curve; here these are the feature lines described above. The connected image Pk is then augmented to obtain Sk, which is the combination of Pk and Sk+1. The final image is S0. This multiresolution technique is effective in removing the boundaries between the three images. No matter how carefully the picture-taking environment is controlled, in practice boundaries are always visible. As in Fig. 4 (a), skin colours are not continuous when the multiresolution technique is not applied.
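A one-dimensional toy version of this REDUCE/EXPAND blend may make the band-by-band combination concrete. The 2-tap operators below are a deliberate simplification of the Gaussian kernels of [5]; the real pipeline works on 2D images with a mask following the feature lines.

```python
import numpy as np

def reduce_(s):
    """REDUCE: 2-tap blur and downsample by two."""
    return 0.5 * (s[0::2] + s[1::2])

def expand(s, n):
    """EXPAND: upsample back to length n by sample repetition."""
    return np.repeat(s, 2)[:n]

def blend(a, b, mask, levels=3):
    """Blend signals a and b (length a power of two) across the seam given by
    mask (1.0 where a should win), combining Laplacian bands level by level."""
    ga, gb, gm = a.astype(float), b.astype(float), mask.astype(float)
    bands = []
    for _ in range(levels):
        ra, rb, rm = reduce_(ga), reduce_(gb), reduce_(gm)
        la = ga - expand(ra, len(ga))        # Laplacian band of a
        lb = gb - expand(rb, len(gb))        # Laplacian band of b
        bands.append(gm * la + (1.0 - gm) * lb)
        ga, gb, gm = ra, rb, rm
    result = gm * ga + (1.0 - gm) * gb       # blended coarsest residual
    for band in reversed(bands):
        result = expand(result, len(band)) + band
    return result
```

Because fine detail is combined with a sharp mask and coarse colour with an increasingly blurred one, skin-tone discontinuities across the seam are spread over a wide region while edges stay crisp.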
The image in (b) shows the results with the technique, which has smoothed the connection between the images without visible boundaries.

[Figure panels: front; side (right, left); deformed side (right, left).]

Fig. 3: (a) Red lines are feature lines. (b) Feature lines on the three images. (c) Feature lines on a 3D head.

Fig. 4: Combining the texture images generated from the three (front, right and left) images without multiresolution techniques in (a) and with the technique in (b). Eye and teeth images are superimposed automatically on the image. These are important for animation of the eye and mouth regions.

3.2 Texture fitting

To find suitable co-ordinates on the combined image for every point on a head, we first project an individualised 3D head onto three planes, as shown in Fig. 5 (a). Guided by the feature lines used for image merging in the section above, we decide to which plane a point on a 3D head is to be projected. The points projected onto one of the three planes are then transferred to either the front or the side 2D feature-point space. Finally, one more transform into the image space is applied to obtain the texture co-ordinates, and the final mapping of points onto a texture image is generated. The origins of each space are also shown in Fig. 5 (a). The 3D head space is the space for the 3D head model, the 2D feature-point space is the one for the feature points used for feature detection, the 2D image space is the one for the orthogonal images used as input, and the 2D texture-image space is for the generated image. The 2D feature-point space is different from the 2D image space even though they are displayed together in Fig. 5 (a). The final texture fitting on a texture image is shown in Fig. 5 (b). The final texture mapping results in smoothly connected images inside triangles of accurately positioned texture co-ordinate points. The eye and teeth fitting processes are done with predefined co-ordinates and transformations related to the texture image size. After doing this once for a generic model, it is fully automated. The brighter points in Fig. 5 (b) correspond to feature points, and the triangles are a projection of triangular faces on a 3D head. Since our generic model is endowed with a triangular mesh, the texture mapping benefits from an efficient triangulation of the texture image, with finer triangles over the highly curved and/or highly articulated regions of the face and larger triangles elsewhere, as in the generic model. This triangulation is also used for the 3D-image morphing in Section 5.

[Figure panels: top view of a 3D head; texture image (front, right, left); 3D head space; 2D feature-point space; 2D image space (input pictures); 2D texture-image space (generated image); feature lines.]

Fig. 5: (a) Texture fitting giving a texture co-ordinate on an image for each point on a head.
(b) Texture co-ordinates overlaid on a texture image.

3.3 Cloning Results

Fig. 6 shows several views of the head reconstructed from the two pictures in Fig. 2.

Fig. 6: Snapshots of a reconstructed head from several views.

Other examples, covering a wide range of ages and many ethnic groups, are shown in Fig. 7. Every face in this report is modified from the SAME generic model shown in Fig. 5. How individualised the representations are depends on how many feature points are identified on the input pictures. Our generic model currently does not have many points on the hairdo, so it is not easy to vary hair length, for example.

Fig. 7: Examples of reconstructed heads from pictures. Two views of each virtual human are shown next to the two photos of the corresponding real person. These are ready for immediate animation in a virtual world.

As an example of the cloning capabilities, we asked the members of the erena consortium to send us photos in order to clone them and use their faces in the future for the creation of the population. Results from the cloning are presented in Fig. 8.

Fig. 8: Examples of cloned heads of erena participants.

3.4 Real-time animation

Real-time animation means that the virtual human can be animated according to facial or body parameters and displayed on the computer screen at more than 10 frames per second in its virtual environment. Real-time performance is important for animation, but the cloning and population creation can be done completely offline and loaded later into the virtual environment. The predefined regions [27] of the generic model are associated with animation information, which can be directly transported to the heads constructed from it by the geometrical modification of Section 2. Fig. 9 shows several expressions on a head reconstructed from three pictures.

Fig. 9: Examples of a head reconstructed from three photos, and several expressions.

4. Statistical methods to generate a large population

Statistical methods provide efficient tools for automatically generating a large population. The ideas and objectives of this method are similar to those of the method presented in the next chapter, but they rely heavily on specific biostatistical notions. These notions relate specifically to face features and, once extracted, provide an efficient way of representing a type of person. The dynamic morphing system does not consider those aspects, or more precisely is not based upon those basic principles, but may yield similar results, as will be explained in the next chapter. The two methods are therefore complementary rather than opposed.

4.1 Constructing and sampling from a hypothetical population

Our approach to generating populations is based on biostatistical notions of morphological variation within a community. The underlying hypothesis is that if we determine a large number of facial measurements, these will be approximately distributed in the population according to a multivariate normal distribution, where most of the variation can be located in a few orthogonal dimensions. These dimensions can be inferred by principal component analysis [37] applied to measurements of relatively few input heads. A random sample from the reduced distribution over the space spanned by the principal components yields the facial measurements of a new head, typical of the population. An input head is described by its set of feature points, obtained interactively from photographs as explained in Section 2. The principal component analysis generates new sets of feature points with novel positions, thereby describing the geometry of new heads. These new samples are used to modify the shape of a generic model in the same way as in Section 2. Fig. 1 shows how principal component analysis is used inside the population generation process. For realistic morphological variation, it is better to use a large number of heads. We currently use 30 heads representative of an average population (15 males / 15 females, 15 Asian / 15 Caucasian), but we are continuously increasing this number using the faces of persons cloned while visiting our lab or during events such as the CEBIT fair in Hanover (1999).

4.1.1 Inference

The feature points are divided into subsets according to the head region where they are defined (mouth, eyes, nose, cheeks, etc.).
Two sets of pre-specified 3D vectors representing distances between feature points are calculated for each head. The first set reflects the overall shape parameters and consists of distances between a central point situated on the nose and local points, each belonging to a different region. The second set represents local relationships and corresponds to distances defined between a local point and the points in the same region, as shown in Fig. 10. Denote by n the total number of measurements, represented by the central-point co-ordinates and the distance-vector co-ordinates, and denote by H the number of input heads. The measurements for the H heads are each standardised to N[0,1] and entered into an H x n matrix M. The principal components of variation are found using standard procedures, involving the decomposition XLX^t = MM^t, where X is orthonormal and L contains the eigenvalues in decreasing order. Only those dimensions of X with non-negligible eigenvalues, i.e. the principal components, are retained.

4.1.2 Sampling

For each head in the population being constructed, independent samples are drawn using a one-dimensional sampling distribution (N[0,1]) on each principal component axis. The i-th component is multiplied by sqrt(L_i), where L_i is the eigenvalue of the i-th axis; the feature-point distance vectors are then constructed by inverting the transformation to X from the original measurement co-ordinates in n-dimensional space, and then inverting the standardisation process for each measurement. It is then straightforward to recover all the new feature points from the n measurements, starting with the central point on the nose. Sampling from the principal component space is a rapid method for generating any number of feature-point sets. We currently use 160 feature points. The matrix MM^t has a dimension of 160 x 3. It is decomposed in less than one minute on a computer of today's performance.
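The inference and sampling steps above can be sketched as follows. This is a hedged illustration using a generic eigendecomposition of the standardised measurements; the matrix shapes, the retained-eigenvalue tolerance and the random seed are assumptions of the sketch, not values from the report.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_population(M):
    """M: H x n matrix of raw measurements for H heads.
    Returns the pieces needed for sampling new heads."""
    mean = M.mean(axis=0)
    std = M.std(axis=0) + 1e-12              # guard against zero variance
    Z = (M - mean) / std                     # standardise each measurement
    cov = Z.T @ Z / max(len(M) - 1, 1)       # covariance of standardised data
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]        # decreasing eigenvalues
    keep = eigvals[order] > 1e-8             # retain non-negligible components
    return mean, std, eigvecs[:, order][:, keep], eigvals[order][keep]

def sample_head(mean, std, axes, eigvals):
    """Draw one new measurement vector typical of the population: N(0,1)
    samples per axis, scaled by sqrt(eigenvalue), mapped back to raw units."""
    coeffs = rng.standard_normal(len(eigvals)) * np.sqrt(eigvals)
    return mean + std * (axes @ coeffs)
```

Each call to sample_head yields the n measurements of one hypothetical head; the feature points are then reconstructed from these distances starting at the nose point, as described above.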

Fig. 10: 3D vectors representing distances between feature points. Arrows from the nose tip to a facial region centre (forehead, cheek, and mouth in the figure) represent global distances. Arrows from a region centre to local feature points represent local distances.

5. 3D morphing for creating various individuals

Morphing technology has seen a great deal of development, both in 2D and 3D. Very high quality morphing between people or animals is available in 2D, while in 3D it has been applied mainly to pairs of objects with differing topology. In this report, we vary aspects of both the 2D and 3D representations in creating new virtual faces. When we morph one person to another in 3D, we must deal with alteration in both shape and texture. Since every face created through the methods described here shares the same topology, it is relatively easy to vary head shapes in 3D using a simple linear interpolation of 3D co-ordinates. On the other hand, it is less straightforward to carry out 2D morphing of textures, since this requires feature information specifying correspondences between specific points in the two images. Here we use a surprisingly simple method, drawing on 3D information to facilitate 2D morphing among texture images.

5.1 Texture morphing

For the heads newly created through sampling in the principal component space, we do not have source images from which to generate texture mapping. In this case we apply a texture polymorph to a given set of cloned heads with textures. Image morphing normally involves feature specification, warp generation, and transition control. When we use texture images together with the 3D head information, however, feature specification and warp generation are given automatically by the 3D head topology, and only a simple calculation based on triangulation with given ratios among the input texture images is performed. Two steps are necessary to obtain texture morphing: first the texture co-ordinates are interpolated, and then image morphing is carried out.
We use simple linear interpolation among several texture co-ordinate sets.
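Since all heads share the generic topology, interpolating shapes and texture co-ordinates reduces to a convex combination of per-head arrays. A minimal sketch, assuming NumPy arrays of identical shape; the function name is illustrative:

```python
import numpy as np

def blend(arrays, ratios):
    """Convex combination of per-head arrays (3D vertex positions or
    2D texture co-ordinates) that all share the same generic topology."""
    ratios = np.asarray(ratios, dtype=float)
    assert np.isclose(ratios.sum(), 1.0), "morphing ratios must sum to 1"
    stacked = np.stack(arrays)               # (n_heads, n_points, dim)
    return np.tensordot(ratios, stacked, axes=1)
```

For example, `blend([verts_a, verts_b], [0.3, 0.7])` gives an intermediate head shape weighted towards the second person.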

5.1.1 Image metamorphosis based on triangulation

We morph several images with given ratios using the texture co-ordinates and the triangulation of the texture mapping. We first interpolate every 3D vertex of the several heads. Then, to generate a new intermediate texture image, we morph the entire 3D head triangle by triangle. The parts of the image used for texture mapping are triangulated by projection of the triangular faces of the 3D heads, since the generic head is a triangular mesh as seen in Fig. 5 (b). With this triangle information, barycentric co-ordinate interpolation can be employed for image morphing. First, the three vertices of each triangle are interpolated. Then the pixel values inside a triangle are obtained by interpolation between the corresponding pixels, with the same barycentric co-ordinates, in the several source triangles. Smoothing of the image pixels is achieved through bilinear interpolation among the four neighbouring pixels. Given two texture images, for instance, we find the barycentric co-ordinates (u, v, w) for each pixel (x, y) of an intermediate image inside a triangle P1P2P3. The corresponding triangles in the two input images are P1L P2L P3L and P1R P2R P3R, respectively. Then uP1L + vP2L + wP3L is the corresponding pixel in the first image and uP1R + vP2R + wP3R in the second. The colour value M(x, y) of a given pixel (x, y) is found by linear interpolation with given ratios r_i, where sum_{i=1}^{n} r_i = 1 and n = 2, between the colour value M_L(x, y) of the first image and M_R(x, y) of the second. Fig. 11 illustrates intermediate texture images between two persons. Note that pixels in the eyes, mouth and part of the hair region, which are not covered by triangles, are not treated in the same way to get intermediate values; indeed, they are not used for texture mapping at all. If the morphed heads have different eye colours, our procedure also interpolates these.
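The per-triangle metamorphosis can be sketched as follows; nearest-pixel sampling replaces the report's bilinear smoothing for brevity, bounds clamping is omitted, and all names are illustrative:

```python
import numpy as np

def barycentric(p, tri):
    """Barycentric co-ordinates (u, v, w) of 2D point p in triangle tri."""
    (x1, y1), (x2, y2), (x3, y3) = tri
    d = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    u = ((y2 - y3) * (p[0] - x3) + (x3 - x2) * (p[1] - y3)) / d
    v = ((y3 - y1) * (p[0] - x3) + (x1 - x3) * (p[1] - y3)) / d
    return u, v, 1.0 - u - v

def morph_triangle(images, tris, tri_mid, ratios, out):
    """Fill the pixels of tri_mid in `out` by blending, with the given
    ratios, the pixels at the same barycentric position in each
    corresponding source triangle (nearest-pixel sampling)."""
    xs = [int(x) for x, _ in tri_mid]
    ys = [int(y) for _, y in tri_mid]
    for y in range(min(ys), max(ys) + 1):
        for x in range(min(xs), max(xs) + 1):
            u, v, w = barycentric((x + 0.5, y + 0.5), tri_mid)
            if min(u, v, w) < 0:          # pixel centre outside the triangle
                continue
            colour = np.zeros(out.shape[-1])
            for img, tri, r in zip(images, tris, ratios):
                # Corresponding pixel u*P1 + v*P2 + w*P3 in this source image.
                px = (u * np.asarray(tri[0]) + v * np.asarray(tri[1])
                      + w * np.asarray(tri[2]))
                colour += r * img[int(px[1]), int(px[0])]
            out[y, x] = colour
```

Running this over every projected mesh triangle, with the same ratios used for the 3D shape interpolation, would produce a complete intermediate texture image.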
The resulting head exhibits very smooth images without any gaps in the textured parts. Fig. 12 shows the final result of the 3D morphing. It depicts the results of interpolating the shapes, the skin and the hair colours between an older Caucasian male and a younger Asian female, whose input pictures are shown in Fig. 7.

Fig. 11: Intermediate images mixed for (Caucasian male, Asian female)

Fig. 12: Morphing steps for (Caucasian male, Asian female).

This approach to 3D morphing has the advantage of extending to several people in a natural way. Fig. 13 illustrates a dynamic system for 3D morphing among a given number of people, here three females (Caucasian, Asian and Indian, respectively) and an African male with a non-neutral expression on his face. The interface has two windows, the left one for controlling the input ratio and the upper right one containing the result. Two options are provided: one-time morphing and continuous morphing. One-time morphing provides a convenient way to select an input ratio in an n-polygon in the left window for n given virtual persons; the result appears immediately. For continuous morphing, the result varies according to the ratio points, which are chosen along a line or curve in the polygon. Since only the barycentric calculation for

texture image pixels is required, the calculation is very fast. There is also an option to turn the shape and image variation rates on or off separately.

Fig. 13: A dynamic system for 3D morphing. Here four persons are used for continuous morphing following a user-specified curve.

A key to the efficiency of this system is the restriction that all the heads share the same structure. A benefit is that the user can experiment with the ratios for mixing two or more heads in real time.

6. Results of large population

We report here on experiments with cloning and the creation of new heads out of cloned ones. We used seven orthogonal photo sets as inputs. Fig. 7 shows the input photos and the output reconstructed virtual faces. These include four Caucasians of various ages, an Indian, an Asian, and an African; there are three adult females, three adult males and a child. The creation of a population is illustrated in Fig. 14. The 42 faces, some of which have non-neutral facial expressions, are generated from the H = 7 sets of orthogonal photos in Fig. 7 according to the steps schematised in Fig. 1. All faces are ready to be animated. The size of the texture image for each person is 256x256, which is less than 10 KB. The total amount of data for the heads in OpenInventor format is rather small considering their realistic appearance. The size of the Inventor format (corresponding to the VRML format) is about 200 KB. The texture image is stored in JPEG format and is about 5~50 KB depending on the quality of the pictures; all examples shown in this report are less than 45 KB. The number of points on a head is 1257, of which 192 are for the teeth and 282 for the two eyes. This leaves only 783 points for the individualisation of the face surface aside from the eyes and teeth. The small size of the data used for the face geometry and texture makes it possible to animate several heads together in real time. Fig. 15 shows a snapshot of an OpenGL-based interface allowing the animation of a few heads. Every head has its own location and script file, which defines several continuous expressions (such as eye, eyebrow, or mouth movement, or head rotation) over a given range of time. The expressions repeat with a different frequency for each head. In this interface, up to 5 heads can be animated in real time. With smaller texture images and fewer points on the heads, we could certainly increase the number of heads for real-time animation.

Fig. 14: Forty-two virtual faces created from seven faces reconstructed from images.

Fig. 15: Animation of several people together. The snapshot with 7 heads shows cloned persons, while the one with 9 heads shows created persons.

7. Generate various body shapes out of a body database

This section describes the methods used to create diverse body shapes. The first subsection explains the creation of generic body shapes. The second deals with the deformations that can be applied to the generic bodies to create a multitude of different bodies. Once the body has been created, the head may be attached to obtain the complete shape of the virtual human, as explained in Section 0. Finally, with the flexibility offered by the texturing method of our virtual humans, the realism of the population can be further improved by appropriately changing the texture applied to the virtual human bodies.

7.1 Creation of a body

Our goal with the population is to provide realistic and efficient human modelling and deformation capabilities for many different bodies without the need for physical prototypes or scanning devices. These capabilities make possible the automatic creation and animation of a rich variety of human shapes. Fig. 16 describes the four steps necessary to obtain a complete realistic human body: 1) the skeleton, 2) the deformation model, 3) the skin model, and 4) the realistic texturing model. Fig. 17 presents the interface of the BodyBuilder software used for the creation of bodies.

Fig. 16 The 4 steps towards realistic human bodies

BodyBuilder is a software package developed for the creation of human body envelopes. It uses a multi-layered approach for the design of human bodies [28][29]. The first layer is an underlying articulated skeleton hierarchy. This skeleton is close to a schematic real human skeleton, and all human postures can be defined with it. The proportions of the virtual human are designed at this stage. The second layer is composed of grouped volume primitives. The volume primitives are metaballs, which fall into two categories: blendable and unblendable. Because metaballs join smoothly and gradually, they give shape to realistic, organic-looking creations, suitable for modelling human bodies. They are attached to the proximal joint of the skeleton and can be transformed and deformed interactively. In this way, a designer can define the 3D shape of a virtual human. Designers can start from a rough shape, made with just a few metaballs, and then add details by increasing the number of metaballs, while using editing tools (add, delete, transform, adjust the parameters). The human form is very complex, and modelling is a tedious task since the human eye is very sensitive to the human figure. It is even more challenging because the metaballs simulate the shape and behaviour of muscles, so designing requires good anatomical knowledge. The third layer is the equivalent of the human skin. We define the envelope of the body with spline surfaces using a ray-casting method; in this way, the metaballs have observable effects on the surface shape. We use this approach because human limbs exhibit a cylindrical topology and the underlying skeleton provides a natural central axis upon which a number of cross-sections can be defined. The metaball technique is inherently suited to interactive design: one can start with a rough shape consisting of just a few metaballs and then add details by simply editing the metaballs.
However, the metaball technique suffers from two serious drawbacks. First, it requires considerable skill to construct complex objects by properly generating numerous overlapping functions. Second, interactive shape design demands quick feedback while the designer is editing blobs; unfortunately, polygonalisation or ray tracing is usually required to visualise a soft object, which is very expensive.

Fig. 17 Interface for the creation of bodies

In order to enhance the modelling capability of the metaball technique, we devised ways to reduce the number of metaballs needed for a desired detailed shape and to allow the designer to manipulate the body surface at interactive speed. A cross-section-based iso-surface sampling method, combined with B-spline blending, enables us to achieve those two goals. From a practical point of view, an interactive metaball editor for shape designers has been written. We start the shape design by first creating a skeleton for the organic body to be modelled. Metaballs are used to approximate the shape of internal structures, which have observable effects on the surface shape. Each metaball is attached to its proximal joint and is defined in that joint's local co-ordinate system of the underlying skeleton, which offers a convenient reference frame for positioning and editing metaball primitives because the relative proportion, orientation, and size of the different body parts are already well-defined. In our editor, the workstation can display shaded or wireframe colour metaballs and a high-resolution body envelope in near real time. Metaballs are displayed as ellipsoids with either their effective radius or their threshold radius: the threshold mode shows the visible size of a metaball, while the effective mode shows how far the metaball's influence extends. Widget panels are used to interactively adjust the size, weight, position, and orientation of the metaballs. A SpaceBall or trackball enables the user to rotate models in space for different views. By turning on and off the various display entities of the different layers, the designer can selectively check the skeleton, metaballs, cross-section contours, and skin envelope simultaneously. The designer can interactively create, delete, pick, and attach/detach metaballs to/from joints. A file format has been designed which can store both the joint hierarchy and the metaball information; models can be saved to a file and loaded later for successive sculpting.
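The cross-section-based iso-surface sampling can be illustrated by the sketch below, which casts rays radially from a point on the skeleton axis and records where a Gaussian metaball field drops below a threshold; the field model, function names, and parameters are assumptions for illustration, not the BodyBuilder implementation.

```python
import math

def field(p, metaballs):
    """Implicit field: sum of Gaussian metaball contributions.
    metaballs = [((cx, cy, cz), radius, weight), ...] (illustrative)."""
    f = 0.0
    for (cx, cy, cz), r, w in metaballs:
        d2 = (p[0] - cx) ** 2 + (p[1] - cy) ** 2 + (p[2] - cz) ** 2
        f += w * math.exp(-d2 / (r * r))
    return f

def cross_section(metaballs, axis_point, threshold=0.5,
                  samples=16, r_max=5.0, step=0.01):
    """Sample one skin cross-section: march outward along radial rays
    (in the local x-y plane) from a point on the skeleton axis until
    the field falls below the threshold. The resulting contour points
    would then be blended with B-splines, as in the report."""
    ax, ay, az = axis_point
    contour = []
    for k in range(samples):
        angle = 2 * math.pi * k / samples
        dx, dy = math.cos(angle), math.sin(angle)
        r = 0.0
        while r < r_max and field((ax + r * dx, ay + r * dy, az),
                                  metaballs) > threshold:
            r += step
        contour.append((ax + r * dx, ay + r * dy, az))
    return contour
```

Stacking such contours along the limb axis gives the cylindrical grid of points through which the B-spline skin surface is fitted.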
The designer gets quick feedback on the resulting skin form. Fig. 18 presents two examples of created man and woman bodies. In Fig. 19, examples of small and large virtual humans are presented.

Fig. 18 Examples of man and woman bodies

Fig. 19 Various shapes of body: small/large

A further enhancement of the body shape and its realism is possible by appropriately using the metaball techniques to simulate the clothes of a person. This is done mostly by applying typical scaling techniques to specific metaballs. This technique can be used to create a jacket or pants, but it is not suitable for some clothes, such as skirts. Its main advantage is that the body thus created has much more realism due to the added clothing, and it can still be used for real-time animation. Fig. 20 presents an example of pants and a costume created out of metaballs.

Fig. 20 Metaball-based virtual clothes

7.2 How to simply create diverse shapes

The body creation interface also includes various transformations that can easily be applied to an already constructed body. Fig. 21 presents the most typical transformations. The first and most important is global scaling, which allows the creation of bodies ranging from child height to normal human height. Using frontal and lateral scaling, we can easily create virtual humans ranging from normal builds to much stronger builds. Finally, the origin of the spine may be changed to create different ratios between the torso and the legs. Extreme examples are displayed in Fig. 21.

Fig. 21 Available transformations to change the body shape

7.3 Connecting the head to the body

In the previous sections, we have explained how to obtain various heads and body shapes. In this section we explain how to position those heads and attach them to the bodies. Because of the disparity between bodies and heads, we do not include the neck with either of them; the neck must therefore be created automatically according to the relative position of the head and body. The first step is to position the head relative to the body using a suitable transformation matrix. This matrix is usually available when the body is created, but it may be changed if necessary. The automatic program for the creation of the neck then proceeds by first identifying the vertices on the head where the neck will be attached. Because the number of vertices on the body and on the head may vary from one instance to another, a neighbouring technique is used to find the ordered n-to-m relationship between the neck vertices on the body and on the head. This relationship is finally used to generate the neck mesh that links the head to the body. The mesh thus created has a generic procedure for its deformation according to the head rotation. Fig. 22 displays an example of a created neck connecting the body to the head.
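The n-to-m boundary matching could be sketched as follows, assuming both neck rings are given as 3D vertex lists around a roughly vertical axis; the angular nearest-neighbour rule is a simplified stand-in for the report's neighbouring technique, and all names are illustrative.

```python
import math

def ring_correspondence(body_ring, head_ring):
    """Ordered n-to-m correspondence between the neck boundary vertices
    of the body and of the head: order both rings by angle around the
    vertical axis, then match each body vertex to the angularly nearest
    head vertex."""
    def angle(v):                # angle in the horizontal (x-z) plane
        return math.atan2(v[2], v[0])

    body = sorted(range(len(body_ring)), key=lambda i: angle(body_ring[i]))
    pairs = []
    for i in body:
        a = angle(body_ring[i])
        # Wrapped angular distance to each head-ring vertex.
        j = min(range(len(head_ring)),
                key=lambda j: abs(math.atan2(
                    math.sin(a - angle(head_ring[j])),
                    math.cos(a - angle(head_ring[j])))))
        pairs.append((i, j))
    return pairs
```

The matched pairs, taken in angular order, define the quad or triangle strip that forms the neck mesh between the two rings.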
Fig. 22 Neck connection between the body and the head

7.4 Changing texture

In the previous sections, we have shown how multiple bodies and faces can be created and how they can be assembled to create a population of virtual humans. In this section, we explain the texturing method used for our virtual humans and show that this final feature can further improve the variety of virtual humans within the population. Fig. 23 presents the different textures applied to the body parts of a virtual human. The head texturing technique was fully described previously. A different texture can be applied to the left and right upper and lower legs, the left and right arms and forearms, the hip, the neck, and the front and back torso: in total, 12 textures can be applied to the body of a virtual human. This leads to a simple way to create multiple populations of virtual humans by simply changing the appropriate textures. The right column in Fig. 23 presents two simple examples of two different sets of textures applied to the same body.

Fig. 23 Textures for Virtual Humans and a body with two different textures
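A minimal sketch of such a 12-part texture set, assuming one image file per body part; the part names and path scheme are illustrative only, not the report's file layout.

```python
# The 12 texturable body parts described in the text (illustrative names).
BODY_PARTS = [
    "hip", "neck", "torso_front", "torso_back",
    "l_upper_leg", "r_upper_leg", "l_lower_leg", "r_lower_leg",
    "l_arm", "r_arm", "l_forearm", "r_forearm",
]

def texture_set(base_dir, variant):
    """Map each body part to a texture file for one clothing variant;
    swapping the variant name re-dresses the same body."""
    return {part: f"{base_dir}/{variant}/{part}.jpg" for part in BODY_PARTS}
```

Generating one such mapping per clothing variant is all that is needed to produce visually distinct populations from a single body geometry.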

7.5 Examples of complete virtual humans

Fig. 24 Example of bodies with heads

In Fig. 24, we present several examples of various bodies (male and female) with several heads placed on top of them. These complete models are ready to be animated because they are based upon generic models and the animation parameters are embedded within the definition of the body and head files. Depending upon the kind of population that we would


More information

The Traditional Graphics Pipeline

The Traditional Graphics Pipeline Last Time? The Traditional Graphics Pipeline Reading for Today A Practical Model for Subsurface Light Transport, Jensen, Marschner, Levoy, & Hanrahan, SIGGRAPH 2001 Participating Media Measuring BRDFs

More information

SEOUL NATIONAL UNIVERSITY

SEOUL NATIONAL UNIVERSITY Fashion Technology 5. 3D Garment CAD-1 Sungmin Kim SEOUL NATIONAL UNIVERSITY Overview Design Process Concept Design Scalable vector graphics Feature-based design Pattern Design 2D Parametric design 3D

More information

Lecture 7: Image Morphing. Idea #2: Align, then cross-disolve. Dog Averaging. Averaging vectors. Idea #1: Cross-Dissolving / Cross-fading

Lecture 7: Image Morphing. Idea #2: Align, then cross-disolve. Dog Averaging. Averaging vectors. Idea #1: Cross-Dissolving / Cross-fading Lecture 7: Image Morphing Averaging vectors v = p + α (q p) = (1 - α) p + α q where α = q - v p α v (1-α) q p and q can be anything: points on a plane (2D) or in space (3D) Colors in RGB or HSV (3D) Whole

More information

Fast Facial Motion Cloning in MPEG-4

Fast Facial Motion Cloning in MPEG-4 Fast Facial Motion Cloning in MPEG-4 Marco Fratarcangeli and Marco Schaerf Department of Computer and Systems Science University of Rome La Sapienza frat,schaerf@dis.uniroma1.it Abstract Facial Motion

More information

Character Modeling IAT 343 Lab 6. Lanz Singbeil

Character Modeling IAT 343 Lab 6. Lanz Singbeil Character Modeling IAT 343 Lab 6 Modeling Using Reference Sketches Start by creating a character sketch in a T-Pose (arms outstretched) Separate the sketch into 2 images with the same pixel height. Make

More information

EPFL. When an animator species an animation sequence, it privileges when animating the Virtual. body, the privileged information to be manipulated

EPFL. When an animator species an animation sequence, it privileges when animating the Virtual. body, the privileged information to be manipulated Realistic Human Body Modeling P. Fua, R. Plankers, and D. Thalmann Computer Graphics Lab (LIG) EPFL CH-1015 Lausanne, Switzerland 1 Introduction Synthetic modeling of human bodies and the simulation of

More information

Lecture 17: Solid Modeling.... a cubit on the one side, and a cubit on the other side Exodus 26:13

Lecture 17: Solid Modeling.... a cubit on the one side, and a cubit on the other side Exodus 26:13 Lecture 17: Solid Modeling... a cubit on the one side, and a cubit on the other side Exodus 26:13 Who is on the LORD's side? Exodus 32:26 1. Solid Representations A solid is a 3-dimensional shape with

More information

Modeling People Wearing Body Armor and Protective Equipment: Applications to Vehicle Design

Modeling People Wearing Body Armor and Protective Equipment: Applications to Vehicle Design Modeling People Wearing Body Armor and Protective Equipment: Applications to Vehicle Design Matthew P. Reed 1, Monica L.H. Jones 1 and Byoung-keon Daniel Park 1 1 University of Michigan, Ann Arbor, MI

More information

A Data-driven Approach to Human-body Cloning Using a Segmented Body Database

A Data-driven Approach to Human-body Cloning Using a Segmented Body Database 15th Pacific Conference on Computer Graphics and Applications A Data-driven Approach to Human-body Cloning Using a Segmented Body Database Pengcheng Xi National Research Council of Canada pengcheng.xi@nrc-cnrc.gc.ca

More information

Animated Modifiers (Morphing Teapot) Richard J Lapidus

Animated Modifiers (Morphing Teapot) Richard J Lapidus Animated Modifiers (Morphing Teapot) Richard J Lapidus Learning Objectives After completing this chapter, you will be able to: Add and adjust a wide range of modifiers. Work in both object and world space

More information

Chapter 9 Object Tracking an Overview

Chapter 9 Object Tracking an Overview Chapter 9 Object Tracking an Overview The output of the background subtraction algorithm, described in the previous chapter, is a classification (segmentation) of pixels into foreground pixels (those belonging

More information

There we are; that's got the 3D screen and mouse sorted out.

There we are; that's got the 3D screen and mouse sorted out. Introduction to 3D To all intents and purposes, the world we live in is three dimensional. Therefore, if we want to construct a realistic computer model of it, the model should be three dimensional as

More information

Stereo pairs from linear morphing

Stereo pairs from linear morphing Proc. of SPIE Vol. 3295, Stereoscopic Displays and Virtual Reality Systems V, ed. M T Bolas, S S Fisher, J O Merritt (Apr 1998) Copyright SPIE Stereo pairs from linear morphing David F. McAllister Multimedia

More information

MODELING AND HIERARCHY

MODELING AND HIERARCHY MODELING AND HIERARCHY Introduction Models are abstractions of the world both of the real world in which we live and of virtual worlds that we create with computers. We are all familiar with mathematical

More information

The Traditional Graphics Pipeline

The Traditional Graphics Pipeline Last Time? The Traditional Graphics Pipeline Participating Media Measuring BRDFs 3D Digitizing & Scattering BSSRDFs Monte Carlo Simulation Dipole Approximation Today Ray Casting / Tracing Advantages? Ray

More information

Interactive Computer Graphics

Interactive Computer Graphics Interactive Computer Graphics Lecture 18 Kinematics and Animation Interactive Graphics Lecture 18: Slide 1 Animation of 3D models In the early days physical models were altered frame by frame to create

More information

Image-Based Deformation of Objects in Real Scenes

Image-Based Deformation of Objects in Real Scenes Image-Based Deformation of Objects in Real Scenes Han-Vit Chung and In-Kwon Lee Dept. of Computer Science, Yonsei University sharpguy@cs.yonsei.ac.kr, iklee@yonsei.ac.kr Abstract. We present a new method

More information

Facial Image Synthesis 1 Barry-John Theobald and Jeffrey F. Cohn

Facial Image Synthesis 1 Barry-John Theobald and Jeffrey F. Cohn Facial Image Synthesis Page 1 of 5 Facial Image Synthesis 1 Barry-John Theobald and Jeffrey F. Cohn 1 Introduction Facial expression has been central to the

More information

Digital Makeup Face Generation

Digital Makeup Face Generation Digital Makeup Face Generation Wut Yee Oo Mechanical Engineering Stanford University wutyee@stanford.edu Abstract Make up applications offer photoshop tools to get users inputs in generating a make up

More information

Animation. CS 465 Lecture 22

Animation. CS 465 Lecture 22 Animation CS 465 Lecture 22 Animation Industry production process leading up to animation What animation is How animation works (very generally) Artistic process of animation Further topics in how it works

More information

Texture Mapping using Surface Flattening via Multi-Dimensional Scaling

Texture Mapping using Surface Flattening via Multi-Dimensional Scaling Texture Mapping using Surface Flattening via Multi-Dimensional Scaling Gil Zigelman Ron Kimmel Department of Computer Science, Technion, Haifa 32000, Israel and Nahum Kiryati Department of Electrical Engineering

More information

Previously... contour or image rendering in 2D

Previously... contour or image rendering in 2D Volume Rendering Visualisation Lecture 10 Taku Komura Institute for Perception, Action & Behaviour School of Informatics Volume Rendering 1 Previously... contour or image rendering in 2D 2D Contour line

More information

User Guide. v1.0. A-Lab Software

User Guide. v1.0. A-Lab Software User Guide v1.0 A-Lab Software Getting Started with Morph 3D Studio 1. After you import the Unity Package from the Asset Store, you will see a folder named A-Lab Software within the Project view. 2. If

More information

2D to pseudo-3d conversion of "head and shoulder" images using feature based parametric disparity maps

2D to pseudo-3d conversion of head and shoulder images using feature based parametric disparity maps University of Wollongong Research Online Faculty of Informatics - Papers (Archive) Faculty of Engineering and Information Sciences 2001 2D to pseudo-3d conversion of "head and shoulder" images using feature

More information

ANIMATING ANIMAL MOTION

ANIMATING ANIMAL MOTION ANIMATING ANIMAL MOTION FROM STILL Xuemiao Xu, Liang Wan, Xiaopei Liu, Tien-Tsin Wong, Liansheng Wnag, Chi-Sing Leung (presented by Kam, Hyeong Ryeol) CONTENTS 0 Abstract 1 Introduction 2 Related Work

More information

Chapter 9 Animation System

Chapter 9 Animation System Chapter 9 Animation System 9.1 Types of Character Animation Cel Animation Cel animation is a specific type of traditional animation. A cel is a transparent sheet of plastic on which images can be painted

More information

Surfaces, meshes, and topology

Surfaces, meshes, and topology Surfaces from Point Samples Surfaces, meshes, and topology A surface is a 2-manifold embedded in 3- dimensional Euclidean space Such surfaces are often approximated by triangle meshes 2 1 Triangle mesh

More information

Face Synthesis in the VIDAS project

Face Synthesis in the VIDAS project Face Synthesis in the VIDAS project Marc Escher 1, Igor Pandzic 1, Nadia Magnenat Thalmann 1, Daniel Thalmann 2, Frank Bossen 3 Abstract 1 MIRALab - CUI University of Geneva 24 rue du Général-Dufour CH1211

More information

3D Modeling Course Outline

3D Modeling Course Outline 3D Modeling Course Outline Points Possible Course Hours Course Overview 4 Lab 1: Start the Course Identify computer requirements. Learn how to move through the course. Switch between windows. Lab 2: Set

More information

Modelling and Animating Hand Wrinkles

Modelling and Animating Hand Wrinkles Modelling and Animating Hand Wrinkles X. S. Yang and Jian J. Zhang National Centre for Computer Animation Bournemouth University, United Kingdom {xyang, jzhang}@bournemouth.ac.uk Abstract. Wrinkles are

More information

Single-view 3D Reconstruction

Single-view 3D Reconstruction Single-view 3D Reconstruction 10/12/17 Computational Photography Derek Hoiem, University of Illinois Some slides from Alyosha Efros, Steve Seitz Notes about Project 4 (Image-based Lighting) You can work

More information

Real-Time Universal Capture Facial Animation with GPU Skin Rendering

Real-Time Universal Capture Facial Animation with GPU Skin Rendering Real-Time Universal Capture Facial Animation with GPU Skin Rendering Meng Yang mengyang@seas.upenn.edu PROJECT ABSTRACT The project implements the real-time skin rendering algorithm presented in [1], and

More information

Shape Blending Using the Star-Skeleton Representation

Shape Blending Using the Star-Skeleton Representation Shape Blending Using the Star-Skeleton Representation Michal Shapira Ari Rappoport Institute of Computer Science, The Hebrew University of Jerusalem Jerusalem 91904, Israel. arir@cs.huji.ac.il Abstract:

More information

Image warping/morphing

Image warping/morphing Image warping/morphing Digital Visual Effects Yung-Yu Chuang with slides by Richard Szeliski, Steve Seitz, Tom Funkhouser and Alexei Efros Image warping Image formation B A Sampling and quantization What

More information

CGS 3220 Lecture 13 Polygonal Character Modeling

CGS 3220 Lecture 13 Polygonal Character Modeling CGS 3220 Lecture 13 Polygonal Character Modeling Introduction to Computer Aided Modeling Instructor: Brent Rossen Overview Box modeling Polygon proxy Mirroring Polygonal components Topology editing Procedural

More information

Physically-Based Modeling and Animation. University of Missouri at Columbia

Physically-Based Modeling and Animation. University of Missouri at Columbia Overview of Geometric Modeling Overview 3D Shape Primitives: Points Vertices. Curves Lines, polylines, curves. Surfaces Triangle meshes, splines, subdivision surfaces, implicit surfaces, particles. Solids

More information

3D Programming. 3D Programming Concepts. Outline. 3D Concepts. 3D Concepts -- Coordinate Systems. 3D Concepts Displaying 3D Models

3D Programming. 3D Programming Concepts. Outline. 3D Concepts. 3D Concepts -- Coordinate Systems. 3D Concepts Displaying 3D Models 3D Programming Concepts Outline 3D Concepts Displaying 3D Models 3D Programming CS 4390 3D Computer 1 2 3D Concepts 3D Model is a 3D simulation of an object. Coordinate Systems 3D Models 3D Shapes 3D Concepts

More information

MPEG-4 Compatible Faces from Orthogonal Photos

MPEG-4 Compatible Faces from Orthogonal Photos MPEG-4 Compatible Faces from Orthogonal Photos Won-Sook Lee, Marc Escher, Gael Sannier, Nadia Magnenat-Thalmann MIRALab, CUI, University of Geneva http://miralabwww.unige.ch E-mail: {wslee, escher, sannier,

More information

Announcements: Quiz. Animation, Motion Capture, & Inverse Kinematics. Last Time? Today: How do we Animate? Keyframing. Procedural Animation

Announcements: Quiz. Animation, Motion Capture, & Inverse Kinematics. Last Time? Today: How do we Animate? Keyframing. Procedural Animation Announcements: Quiz Animation, Motion Capture, & Inverse Kinematics On Friday (3/1), in class One 8.5x11 sheet of notes allowed Sample quiz (from a previous year) on website Focus on reading comprehension

More information

Digitization of 3D Objects for Virtual Museum

Digitization of 3D Objects for Virtual Museum Digitization of 3D Objects for Virtual Museum Yi-Ping Hung 1, 2 and Chu-Song Chen 2 1 Department of Computer Science and Information Engineering National Taiwan University, Taipei, Taiwan 2 Institute of

More information

An Adaptive Eigenshape Model

An Adaptive Eigenshape Model An Adaptive Eigenshape Model Adam Baumberg and David Hogg School of Computer Studies University of Leeds, Leeds LS2 9JT, U.K. amb@scs.leeds.ac.uk Abstract There has been a great deal of recent interest

More information

Automatic Generation of Animatable 3D Personalized Model Based on Multi-view Images

Automatic Generation of Animatable 3D Personalized Model Based on Multi-view Images Automatic Generation of Animatable 3D Personalized Model Based on Multi-view Images Seong-Jae Lim, Ho-Won Kim, Jin Sung Choi CG Team, Contents Division ETRI Daejeon, South Korea sjlim@etri.re.kr Bon-Ki

More information

Beginners Guide Maya. To be used next to Learning Maya 5 Foundation. 15 juni 2005 Clara Coepijn Raoul Franker

Beginners Guide Maya. To be used next to Learning Maya 5 Foundation. 15 juni 2005 Clara Coepijn Raoul Franker Beginners Guide Maya To be used next to Learning Maya 5 Foundation 15 juni 2005 Clara Coepijn 0928283 Raoul Franker 1202596 Index Index 1 Introduction 2 The Interface 3 Main Shortcuts 4 Building a Character

More information

Facial Expression Morphing and Animation with Local Warping Methods

Facial Expression Morphing and Animation with Local Warping Methods Facial Expression Morphing and Animation with Local Warping Methods Daw-Tung Lin and Han Huang Department of Computer Science and Information Engineering Chung Hua University 30 Tung-shiang, Hsin-chu,

More information

Case Study: The Pixar Story. By Connor Molde Comptuer Games & Interactive Media Year 1

Case Study: The Pixar Story. By Connor Molde Comptuer Games & Interactive Media Year 1 Case Study: The Pixar Story By Connor Molde Comptuer Games & Interactive Media Year 1 Contents Section One: Introduction Page 1 Section Two: About Pixar Page 2 Section Three: Drawing Page 3 Section Four:

More information

View-Dependent Texture Mapping

View-Dependent Texture Mapping 81 Chapter 6 View-Dependent Texture Mapping 6.1 Motivation Once a model of an architectural scene is recovered, the goal is to produce photorealistic renderings. A traditional approach that is consistent

More information

3D Modeling of Objects Using Laser Scanning

3D Modeling of Objects Using Laser Scanning 1 3D Modeling of Objects Using Laser Scanning D. Jaya Deepu, LPU University, Punjab, India Email: Jaideepudadi@gmail.com Abstract: In the last few decades, constructing accurate three-dimensional models

More information

Sculpting 3D Models. Glossary

Sculpting 3D Models. Glossary A Array An array clones copies of an object in a pattern, such as in rows and columns, or in a circle. Each object in an array can be transformed individually. Array Flyout Array flyout is available in

More information

The Traditional Graphics Pipeline

The Traditional Graphics Pipeline Final Projects Proposals due Thursday 4/8 Proposed project summary At least 3 related papers (read & summarized) Description of series of test cases Timeline & initial task assignment The Traditional Graphics

More information

COMP 102: Computers and Computing

COMP 102: Computers and Computing COMP 102: Computers and Computing Lecture 23: Computer Vision Instructor: Kaleem Siddiqi (siddiqi@cim.mcgill.ca) Class web page: www.cim.mcgill.ca/~siddiqi/102.html What is computer vision? Broadly speaking,

More information

Topics for thesis. Automatic Speech-based Emotion Recognition

Topics for thesis. Automatic Speech-based Emotion Recognition Topics for thesis Bachelor: Automatic Speech-based Emotion Recognition Emotion recognition is an important part of Human-Computer Interaction (HCI). It has various applications in industrial and commercial

More information

Chapter 5. Projections and Rendering

Chapter 5. Projections and Rendering Chapter 5 Projections and Rendering Topics: Perspective Projections The rendering pipeline In order to view manipulate and view a graphics object we must find ways of storing it a computer-compatible way.

More information

Lesson 1 Parametric Modeling Fundamentals

Lesson 1 Parametric Modeling Fundamentals 1-1 Lesson 1 Parametric Modeling Fundamentals Create Simple Parametric Models. Understand the Basic Parametric Modeling Process. Create and Profile Rough Sketches. Understand the "Shape before size" approach.

More information

CHAPTER 1 Graphics Systems and Models 3

CHAPTER 1 Graphics Systems and Models 3 ?????? 1 CHAPTER 1 Graphics Systems and Models 3 1.1 Applications of Computer Graphics 4 1.1.1 Display of Information............. 4 1.1.2 Design.................... 5 1.1.3 Simulation and Animation...........

More information

A Review of Image- based Rendering Techniques Nisha 1, Vijaya Goel 2 1 Department of computer science, University of Delhi, Delhi, India

A Review of Image- based Rendering Techniques Nisha 1, Vijaya Goel 2 1 Department of computer science, University of Delhi, Delhi, India A Review of Image- based Rendering Techniques Nisha 1, Vijaya Goel 2 1 Department of computer science, University of Delhi, Delhi, India Keshav Mahavidyalaya, University of Delhi, Delhi, India Abstract

More information