POLYMORPH: AN ALGORITHM FOR MORPHING AMONG MULTIPLE IMAGES


Seungyong Lee
Department of Computer Science and Engineering, Pohang University of Science and Technology, Pohang, S. Korea

George Wolberg
Department of Computer Science, City College of New York, New York, NY

Sung Yong Shin
Department of Computer Science, Korea Advanced Institute of Science and Technology, Taejon, S. Korea

ABSTRACT

This paper presents polymorph, a novel algorithm for morphing among multiple images. Traditional image morphing generates a sequence of images depicting an evolution from one image into another. We extend this approach to permit morphed images to be derived from more than two images at once. We formulate each input image to be a vertex of a simplex. An inbetween, or morphed, image is considered to be a point in the simplex. It is generated by computing a linear combination of the input images, with the weights derived from the barycentric coordinates of the point. To reduce run-time computation and memory overhead, we define a central image and use it as an intermediate node between the input images and the inbetween image. Preprocessing is introduced to resolve conflicting positions of selected features in input images when they are blended to generate a nonuniform inbetween image. We present warp propagation to efficiently derive warp functions among input images. Blending functions are effectively obtained by constructing surfaces that interpolate user-specified blending rates. The polymorph algorithm furnishes a powerful tool for image composition that effectively integrates geometric manipulation and color blending. The algorithm is demonstrated with examples that seamlessly blend and manipulate facial features derived from various input images.

Keywords: Image metamorphosis, morphing, polymorph, image composition.

1 INTRODUCTION

Image metamorphosis has proven to be a powerful visual effects tool. There are now many breathtaking examples in film and television depicting the fluid transformation of one digital image into another. This process, commonly known as morphing, is realized by coupling image warping with color interpolation. Image warping applies 2D geometric transformations on the images to obtain geometric alignment between their features, while color interpolation blends their colors. Details of several image morphing techniques can be found in [1, 2, 3, 4].

The traditional formulation for image morphing considers only two input images at a time, i.e., the source and target images. In that case, morphing among multiple images is understood to mean a series of transformations from one image to another. This limits any morphed image to take on the features and colors blended from just two input images. Given the success of morphing using this paradigm, it is reasonable to consider the benefits possible from a blend of more than two images at a time. For instance, consider the generation of a facial image that is to have blended characteristics of eyes, nose, and mouth derived from several input faces. In this case, morphing among multiple images is understood to mean a blend of several images at once. We will refer to this process as polymorph.

Rowland and Perrett considered a special case of polymorph to obtain a prototype face, in terms of gender and age, from several tens of sample faces [5]. They superimpose feature points on input images to specify the different positions of features in sample faces. The shape of a prototype face is determined by averaging the specified feature positions. A prototype face is generated by applying image warping to each input image and then performing a cross-dissolve operation among the warped images.
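The averaging step just described is simple to sketch in NumPy. This is an illustrative sketch, not Rowland and Perrett's implementation: the names `prototype_shape` and `cross_dissolve` are invented here, landmarks are assumed given as (k, 2) arrays of corresponding positions, and the warp that aligns each sample image to the averaged shape is omitted.

```python
import numpy as np

def prototype_shape(feature_sets):
    """Average corresponding feature positions across sample faces.

    feature_sets: list of (k, 2) arrays; row j of every array marks the
    same landmark (eye corner, nose tip, ...) in one sample image."""
    return np.mean(np.stack(feature_sets), axis=0)

def cross_dissolve(images, weights=None):
    """Per-pixel weighted blend of images already warped into alignment."""
    stack = np.stack([np.asarray(im, dtype=float) for im in images])
    n = len(stack)
    w = np.full(n, 1.0 / n) if weights is None else np.asarray(weights, float)
    return np.tensordot(w, stack, axes=1)

# Two toy "faces": two landmarks each, and flat gray images.
f0 = np.array([[10.0, 20.0], [30.0, 40.0]])
f1 = np.array([[14.0, 24.0], [34.0, 44.0]])
proto = prototype_shape([f0, f1])   # midpoint of each landmark pair
blend = cross_dissolve([np.zeros((4, 4)), np.full((4, 4), 100.0)])
```

In the full procedure each sample image is first warped so that its landmarks move to the averaged positions, and the cross-dissolve is then applied to those warped images.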
The shape and color differences between prototypes of different genders and ages are used to manipulate a facial image to perform predictive gender and age transformations.

In this paper, we present a general framework for polymorph by extending the traditional image morphing paradigm that applies to two images. We formulate each input image to be a vertex of an (n−1)-dimensional simplex, where n is the number of input images. Note that an (n−1)-dimensional simplex is a convex polyhedron having n vertices in (n−1)-dimensional space, e.g., a triangle in 2D, a tetrahedron in 3D, etc. An arbitrary inbetween (morphed) image can be specified by a point in the simplex. The barycentric coordinates of that point determine the weights used to blend the input images into the inbetween image. When only two images are considered, the simplex degenerates into a line. Points along the line correspond to inbetween images in a morph sequence. This case is identical to conventional image morphing. When more than two images are considered, a path lying anywhere in the simplex constitutes the inbetween images in a morph sequence.

In morphing between two images, nonuniform blending was introduced to derive an inbetween image in which blending rates differ across the image [3, 4]. This allows us to generate more interesting animations, such as a transformation of the source image into the target from top to bottom. Nonuniform blending was also considered in volume metamorphosis to control blending schedules [6, 7]. In this paper, the framework for polymorph includes nonuniform blending of features in several input images. For instance, a facial image can be generated to have its eyes, nose, mouth, and ears derived from four different input faces.

Polymorph is ideally suited for image composition applications. A composite image is treated as a metamorphosis of selected regions in several input images. The regions seamlessly blend together with respect to geometry and color. The technique produces high-quality composites with considerably less effort than conventional image composition techniques. In this regard, polymorph brings to image composition what image warping has brought to cross-dissolve in deriving morphing: a richer and more sophisticated class of visual effects, achieved with intuitive and minimal user interaction.

This paper is organized as follows. Section 2 presents the mathematical framework for polymorph. Warp function generation and propagation are presented in Section 3. Blending function generation is described in Section 4. Section 5 describes the implemented polymorph system and its performance. Section 6 presents metamorphosis examples, which demonstrate the use of polymorph for image composition. Section 7 discusses applications and extensions of polymorph. Conclusions are given in Section 8.

2 MATHEMATICAL FRAMEWORK

In this section, we present the mathematical framework for polymorph. We extend the metamorphosis framework for two images [3, 4] to generate an inbetween image from several images. The framework is further optimized by introducing the notion of a central image. Finally, we introduce preprocessing and postprocessing steps to the framework to enhance the usefulness of the polymorph technique.

2.1 Image representation

Consider n input images I_1, I_2, ..., I_n. We formulate each input image to be a vertex of an (n−1)-dimensional simplex. An inbetween image is considered to be a point in the simplex. All points are given in barycentric coordinates in R^{n−1} by b = (b_1, b_2, ..., b_n), subject to the constraints b_i ≥ 0 and Σ_{i=1}^{n} b_i = 1. Each input image I_i corresponds to the i-th vertex of the simplex, where only the i-th barycentric coordinate is 1 and all the others are 0. An inbetween image I is specified by a point b, where each coordinate b_i determines the relative influence of input image I_i on I.
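The constraints on b are easy to check mechanically. The sketch below (illustrative names; NumPy assumed) tests whether a vector is a valid point of the simplex and builds the vertex vectors that stand for the input images themselves.

```python
import numpy as np

def is_blending_vector(b, tol=1e-9):
    """A valid barycentric point of the simplex: b_i >= 0 and sum(b) == 1."""
    b = np.asarray(b, dtype=float)
    return bool(np.all(b >= -tol) and abs(b.sum() - 1.0) <= tol)

def vertex(i, n):
    """Blending vector of the i-th simplex vertex, i.e. input image I_i."""
    b = np.zeros(n)
    b[i] = 1.0
    return b

assert is_blending_vector(vertex(0, 3))          # an input image itself
assert is_blending_vector([1/3, 1/3, 1/3])       # the uniform blend
assert not is_blending_vector([0.7, 0.6, -0.3])  # outside the simplex
```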
In conventional morphing between two images, transition rates 0 and 1 denote the source and target images, respectively [3, 4]. An inbetween image is then represented by a real number between 0 and 1, which determines a point in 2D barycentric coordinates. The image representation in polymorph can thus be considered a generalization of that used for morphing between two images.

In the conventional approach, morphing among n input images implies a sequence of animations between two images, for example, I_1 → I_2 → ⋯ → I_n. The animation sequence corresponds to a path visiting all vertices along the edges of the simplex. In contrast, polymorph can generate an animation that corresponds to an arbitrary path inside the simplex. The animation contains a sequence of inbetween images, each of which blends all n input images at once. In the following, we consider the procedure for generating the inbetween image associated with a point along the path. The procedure can be readily applied to all other points along the path to generate an animation.

2.2 Basic metamorphosis framework

Suppose that we are to generate an inbetween image I at a point b = (b_1, b_2, ..., b_n) from input images I_1, I_2, ..., I_n. Let W_ij be the warp function from image I_i to image I_j. W_ij specifies the corresponding point in I_j for each point in I_i. When applied to I_i, W_ij generates a warped image whereby the features in I_i coincide with their corresponding features in I_j. Note that W_ii is the identity warp function, and W_ji is the inverse function of W_ij.

To generate an inbetween image I, we first derive a warp function W_i by linearly interpolating the W_ij for each i. Each image I_i is then distorted by W_i to generate an intermediate image Ī_i. The intermediate images Ī_i have the same inbetween positions and shapes of corresponding features for all i. Inbetween image I is finally obtained by linearly interpolating the pixel colors among the Ī_i. Each coordinate b_i is used as the relative weight for I_i in the linear interpolation of warps and colors. We call b a blending vector. It determines the blending of geometry and color among the input images to generate an inbetween image. For simplicity, we treat the blending vectors for geometry and color as identical, although they may differ in practice.

Fig. 1 shows the warp functions used to generate an inbetween image from three input images. Each warp function in the figure distorts one image towards another so that the corresponding features coincide in their shapes and positions. Note that warp function W_ij is independent of the specified blending vector b, while W_i is determined by b and the W_ij. Since there are no geometric distortions between intermediate image Ī_i and the final inbetween image I, it is sufficient for Fig. 1 to depict warps directly from I_i to I, omitting any reference to Ī_i. In this manner, the figure considers only the warp functions and neglects color blending.

Figure 1: Warp functions for three input images

Given images I_i and warp functions W_ij, the following equations summarize the steps for generating an inbetween image I from a blending vector b. W_i ⊗ I_i denotes the application of warp function W_i to image I_i.
Points p and r represent points in I_i and I, respectively. They are related by r = W_i(p). Color interpolation is achieved by attenuating the pixel colors of the input images and adding the warped images:

    W_i(p) = Σ_{j=1}^{n} b_j W_ij(p) ,
    Ī_i = W_i ⊗ (b_i I_i) ,
    I(r) = Σ_{i=1}^{n} Ī_i(r) .

The image on the left in Fig. 2 was produced by an ordinary cross-dissolve of the input images I_i. Notice that the image appears triple-exposed due to the blending of misaligned features. The images on the right of the I_i illustrate the process of generating an inbetween image I using the proposed framework. Although the intensities of intermediate images Ī_i should appear attenuated, the images are shown at full intensity to clearly demonstrate the distortions. The blending vector used for the figure is b = (1/3, 1/3, 1/3). The resulting image I equally blends the shapes, positions, and colors of the eyes, nose, and mouth of the input faces.

Figure 2: A cross-dissolve and uniform inbetween image from three input images

2.3 General metamorphosis framework

The framework presented in Section 2.2 generates a uniform inbetween image, in which the same blending vector is used across the image. A visually compelling inbetween image may be obtained by applying different blending vectors to its various parts. For example, consider a facial image that has its eyes, ears, nose, and mouth derived from four different input images. We introduce a blending function to facilitate a nonuniform inbetween image that has different blending vectors over its points.

A blending function specifies a blending vector for each point in an image. Let B_i be a blending function defined on image I_i. For each point p in I_i, B_i(p) is a blending vector that determines the weights used for linearly interpolating the W_ij(p) to derive W_i(p). Also, the i-th coordinate of B_i(p) determines the color contribution of point p to the corresponding point in inbetween image I.

The metamorphosis characteristics of a nonuniform inbetween image are fully specified by one blending function B_i defined on any input image I_i. This is analogous to using one blending vector to specify a uniform inbetween image. From the correspondence between points in the input images, the blending information specified by B_i can be shared among all input images. The blending functions B_j for the other images I_j can be derived by composing B_i and the warp functions W_ji; that is, B_j = B_i ∘ W_ji, or equivalently, B_j(p) = B_i(W_ji(p)). For all corresponding points in the input images, the resulting blending functions specify the same blending vector.

Given images I_i, warp functions W_ij, and blending functions B_i, the following equations summarize the steps for generating a nonuniform inbetween image I. b_i^j(p) denotes the j-th coordinate of blending vector B_i(p):

    W_i(p) = Σ_{j=1}^{n} b_i^j(p) W_ij(p) ,
    Ī_i = W_i ⊗ (b_i^i I_i) ,
    I(r) = Σ_{i=1}^{n} Ī_i(r) .

Fig. 3 shows an illustration of the above framework. Blending functions B_i were designed to make inbetween image I retain the hair, the eyes and nose, and the mouth and chin from input images I_0, I_1, and I_2, respectively. Warp functions W_i are determined by the B_i and generate distortions of the I_i whereby the parts of interest remain intact. Here again the intermediate images Ī_i are shown at full intensity for clarity. In practice, the I_i are nonuniformly attenuated by applying B_i before they are distorted. The attenuation maintains those parts to be retained in I at their full intensity.
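The three equations above can be prototyped directly on sampled warps. In the toy NumPy sketch below, a warp is an (H, W, 2) array of target (y, x) coordinates, forward mapping is approximated by rounding each target to the nearest pixel, and all names are illustrative rather than taken from the paper's implementation.

```python
import numpy as np

def inbetween(images, warps, B):
    """Nonuniform inbetween image I.

    images: list of n (H, W) grayscale arrays I_i.
    warps:  warps[i][j] is an (H, W, 2) array of target (y, x) coordinates,
            i.e. a sampled warp function W_ij.
    B:      B[i] is an (H, W, n) array of per-pixel blending vectors B_i."""
    n = len(images)
    H, W = images[0].shape
    out = np.zeros((H, W))
    for i in range(n):
        # W_i(p) = sum_j b_i^j(p) * W_ij(p)
        Wi = sum(B[i][..., j:j + 1] * warps[i][j] for j in range(n))
        # Ibar_i = W_i applied to the attenuated image b_i^i * I_i,
        # forward-mapped by splatting onto the nearest target pixel.
        src = B[i][..., i] * images[i]
        ys = np.clip(np.rint(Wi[..., 0]).astype(int), 0, H - 1)
        xs = np.clip(np.rint(Wi[..., 1]).astype(int), 0, W - 1)
        np.add.at(out, (ys, xs), src)   # I(r) = sum_i Ibar_i(r)
    return out

# Sanity check: identity warps and a constant blending function
# reduce the framework to a uniform cross-dissolve.
H, W, n = 2, 2, 2
grid = np.dstack(np.meshgrid(np.arange(H), np.arange(W), indexing="ij")).astype(float)
warps = [[grid, grid], [grid, grid]]
B = [np.full((H, W, n), 0.5) for _ in range(n)]
out = inbetween([np.zeros((H, W)), np.full((H, W), 10.0)], warps, B)
```

A real implementation would use a proper forward-mapping (splatting or mesh-based) renderer instead of nearest-pixel rounding, which can leave holes and overlaps.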
The strict requirement to retain specified features of the input images has produced an unnatural result around the mouth and chin in Fig. 3. Additional processing, described in Section 2.5, may be necessary to address the artifacts inherent in the current process. First, we consider how to reduce the run-time computation and memory overhead of generating an inbetween image.

2.4 Optimization with a central image

The evaluation of warp functions W_ij is generally an expensive operation. To reduce run-time computation, we compute each W_ij only once, when feature correspondences are established among the input images I_i. We sample each W_ij over all source pixels and store the resulting target coordinates in an array. The arrays for all W_ij are then used to compute the intermediate warp functions W_i, which depend on the blending functions B_i. For large n, these n² warp functions require significant memory overhead, especially when the input images are large. To avoid this memory overhead, we define a central image and use it to reduce the number of stored warp functions.

A central image I_C is a uniform inbetween image corresponding to the centroid of the simplex whose vertices are the input images. The blending vector (1/n, 1/n, ..., 1/n) is associated with this image.

Figure 3: A nonuniform inbetween image from three input images

Instead of keeping n² warp functions W_ij, we maintain 2n warp functions, W_iC and W_Ci, and compute W_i from a blending function specified on I_C. W_iC is the warp function from input image I_i to central image I_C. Conversely, W_Ci is the warp function from I_C to I_i; W_Ci is the inverse function of W_iC.

Let a blending function B_C be defined on central image I_C. B_C determines the metamorphosis characteristics of an inbetween image I. B_C has the same role as B_i except that it operates on I_C instead of I_i. Therefore, for each point q in I_C, B_C(q) gives a blending vector that specifies the relative influences of the corresponding points in the I_i on I. The equivalent blending functions B_i for the input images I_i can be derived by the function compositions B_C ∘ W_iC.

To obtain the warp functions W_i from I_i to I, we first generate a warp function W_C from I_C to I. For each point in I_C, the warp functions W_Ci give a set of corresponding points in the input images I_i. Hence, W_C can be derived by linearly interpolating the W_Ci with the weights given by B_C. The W_i are then obtained by the function compositions W_C ∘ W_iC. Fig. 4 shows the relationship of warp functions W_iC, W_Ci, and W_C with images I_i, I_C, and I. Note that W_iC and W_Ci are independent of B_C, whereas W_C is determined by B_C from the W_Ci.

Figure 4: Warp functions with a central image

Given images I_i and warp functions W_iC and W_Ci, the following equations summarize the steps for generating an inbetween image I from a blending function B_C defined on central image I_C. Let p, q, and r represent points in I_i, I_C, and I, respectively. They are related by q = W_iC(p) and r = W_C(q). b_i^j(p) and b_C^j(q) denote the j-th coordinates of blending vectors B_i(p) and B_C(q), respectively:

    W_C(q) = Σ_{i=1}^{n} b_C^i(q) W_Ci(q) ,
    W_i(p) = (W_C ∘ W_iC)(p) = W_C(W_iC(p)) ,
    B_i(p) = (B_C ∘ W_iC)(p) = B_C(W_iC(p)) ,
    Ī_i = W_i ⊗ (b_i^i I_i) ,

    I(r) = Σ_{i=1}^{n} Ī_i(r) .

Note that the ⊗ and ⊘ operators denote forward and inverse mapping, respectively. For an image I and warp function W, W ⊗ I maps all pixels in I onto the distorted image I′, while I ⊘ W⁻¹ maps all pixels in I′ onto I, assuming that the inverse function W⁻¹ exists. Although the ⊘ operator could have been applied to compute Ī_i above, we use the ⊗ operator because W_i⁻¹ is not readily available.

In Fig. 4, central image I_C is drawn with dashed borders because it is not actually constructed in the process of generating an inbetween image. It is introduced to provide a conceptual intermediate step for deriving the necessary warp and blending functions. Any image, including an input image, can be made to play the role of the central image. However, we have defined the central image to lie at the centroid of the simplex to establish symmetry in the metamorphosis framework. In most cases, the central image relates to a natural inbetween image among the input images: it equally blends the features of the input images, such as a prototype face among input faces.

2.5 Preprocessing and postprocessing

A useful application of polymorph is feature-based image composition, where selected features in input images seamlessly blend in an inbetween image. In that case, if the shapes and positions of the selected features do not match among the input images, the inbetween image may not have its features in appropriate shapes and positions. For example, the inbetween face in Fig. 3 retains the hair, the eyes and nose, and the mouth and chin features from the input faces, yet it appears unnatural. This is due to the rigid placement of selected input regions into a patchwork of inappropriately scaled and positioned elements. This is akin to a cut-and-paste operation where smooth blending has been limited to areas between these regions. This problem can be overcome by adding a preprocessing step to the metamorphosis framework presented in Section 2.4.
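Function compositions such as W_C ∘ W_iC can be carried out directly on the sampled arrays: the outer warp is resampled at the (generally non-integer) positions produced by the inner warp. Below is a minimal sketch using bilinear resampling; the names are illustrative, and the paper does not prescribe this particular resampling scheme.

```python
import numpy as np

def compose_warps(outer, inner):
    """Sampled composition (outer o inner)(p) = outer(inner(p)).

    Both warps are (H, W, 2) arrays of target (y, x) coordinates; the
    outer warp is evaluated at the points produced by the inner warp,
    using bilinear interpolation between its four nearest samples."""
    H, W, _ = outer.shape
    y = np.clip(inner[..., 0], 0, H - 1)
    x = np.clip(inner[..., 1], 0, W - 1)
    y0 = np.clip(np.floor(y).astype(int), 0, H - 2)
    x0 = np.clip(np.floor(x).astype(int), 0, W - 2)
    fy = (y - y0)[..., None]
    fx = (x - x0)[..., None]
    top = outer[y0, x0] * (1 - fx) + outer[y0, x0 + 1] * fx
    bot = outer[y0 + 1, x0] * (1 - fx) + outer[y0 + 1, x0 + 1] * fx
    return top * (1 - fy) + bot * fy

# Composing any warp with the identity grid returns the warp itself.
H = W = 3
grid = np.dstack(np.meshgrid(np.arange(H), np.arange(W), indexing="ij")).astype(float)
shift = grid.copy()
shift[..., 1] += 1.0                     # move every target one pixel right
composed = compose_warps(shift, grid)    # (shift o identity) == shift
```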
Before operating the framework on input images I_i, we can apply warp functions P_i to the I_i to generate distorted input images I′_i. The distortions change the shapes and positions of the selected features in I_i so that an inbetween image derived from the I′_i has appropriate feature shapes and positions. In that case, the framework is applied to the I′_i instead of the I_i, treating the I′_i as the input images.

After an inbetween image has been derived through the framework, we sometimes need to postprocess the image to enhance the result, even though the preprocessing step has already been applied. For example, we may want to reduce the size of the inbetween face in Fig. 3, which cannot readily be done by preprocessing the input images. To postprocess an inbetween image I′, we apply a warp function Q to I′ and generate the final image I. The postprocessing step is useful for local refinement and global manipulation of an inbetween image.

Fig. 5 shows the warp functions used in the metamorphosis framework including the preprocessing and postprocessing steps. Warp functions P_i distort input images I_i towards I′_i, from which an inbetween image I′ is generated. The final image I is derived by applying warp function Q to I′. Fig. 6 illustrates the process with intermediate images. In the preprocessing, the hair of input image I_0 and the mouth and chin of I_2 are moved upwards and to the lower right, respectively. The nose in I_1 is made slightly narrower. The face in inbetween image I′ now appears more natural than that in Fig. 3. In the postprocessing, the face in I′ is scaled down horizontally to generate the final image I.

Figure 5: Warp functions with preprocessing and postprocessing

When the preprocessing step is added to the metamorphosis framework, the distorted input images I′_i are used to determine the central image I′_C and warp functions W′_iC and W′_Ci. However, in that case, whenever we apply different warp functions P_i to the input images I_i, we must recompute W′_iC and W′_Ci to apply the framework to the I′_i. This can be very cumbersome, especially since several preprocessing steps may be necessary to derive a satisfactory inbetween image. To overcome this drawback, we reconfigure Fig. 5 into Fig. 7 so that I_C, W_iC, and W_Ci depend only on the I_i, regardless of the P_i.

In the new configuration, we can derive the corresponding points in the I′_i for each point in I_C by the function compositions P_i ∘ W_Ci. Given a blending function B_C defined on I_C, warp function W_C is computed by linearly interpolating the P_i ∘ W_Ci with the weights given by B_C. The resulting W_C is equivalent to W′_C used in Fig. 5. The warp functions from I_i to I′ can then be obtained by W_C ∘ W_iC, properly reflecting the effects of preprocessing. Warp functions W_i from I_i to the final inbetween image I, including the postprocessing step, can be derived by Q ∘ W_C ∘ W_iC.

Consider input images I_i, warp functions W_iC and W_Ci, and a blending function B_C defined on central image I_C. The following equations summarize the process of generating an inbetween image I from B_C and the warp functions P_i and Q, which specify the preprocessing and postprocessing steps. Let p, q, and r represent points in I_i, I_C, and I, respectively. They are related by q = W_iC(p) and r = (Q ∘ W_C)(q):

    W_C(q) = Σ_{i=1}^{n} b_C^i(q) (P_i ∘ W_Ci)(q) = Σ_{i=1}^{n} b_C^i(q) P_i(W_Ci(q)) ,

Figure 6: Inbetween image generation with preprocessing and postprocessing

Figure 7: Warp functions with optimal preprocessing and postprocessing

    W_i(p) = (Q ∘ W_C ∘ W_iC)(p) = Q(W_C(W_iC(p))) ,
    B_i(p) = (B_C ∘ W_iC)(p) = B_C(W_iC(p)) ,
    Ī_i = W_i ⊗ (b_i^i I_i) ,
    I(r) = Σ_{i=1}^{n} Ī_i(r) .

The differences between the framework given above and that in Section 2.4 lie only in the computation of warp functions W_C and W_i. Additional function compositions are included above to incorporate the preprocessing and postprocessing effects. In Fig. 7, images I′_i and I′ are drawn with dashed borders because they are not actually constructed in generating I. As in the case of central image I_C, images I′_i and I′ provide conceptual intermediate steps for deriving W_C and W_i.

Fig. 8 shows an example of inbetween image generation with this framework. The intermediate images Ī_i are the same as the distorted images that would be generated by applying warp function Q to the images I′_i in Fig. 6. In other words, Ī_i reflects the preprocessing and postprocessing effects in addition to the distortions defined by blending function B_C. Hence, only three intermediate images are necessary to obtain the final inbetween image I, as in Fig. 3, rather than the seven in Fig. 6. Notice that I is the same as that in Fig. 6.

Figure 8: Inbetween image generation with optimal preprocessing and postprocessing

To generate an inbetween image from n input images through the framework, a solution is needed to each of the following three problems:

- how to find the 2n warp functions W_iC and W_Ci,
- how to specify a blending function B_C on I_C, and
- how to specify the warp functions P_i and Q for preprocessing and postprocessing.

3 WARP FUNCTION GENERATION

This section addresses the problem of deriving the warp functions W_iC, W_Ci, P_i, and Q. A conventional image morphing technique is used to compute warp functions for (n−1) pairs of input images. We propagate these warp functions to obtain W_ij for all pairs of input images. W_iC is then derived by averaging the W_ij for each i. We compute W_Ci as the inverse function of W_iC by using a warp generation algorithm. P_i and Q are derived from user input specified by primitives such as line segments overlaid onto images.

3.1 Warp functions between two images

The warp functions between two input images can be derived by a conventional image morphing technique. Traditionally, image morphing between two images begins with an animator establishing their correspondence with pairs of feature primitives, e.g., mesh nodes, line segments, curves, or points. Each primitive specifies an image feature, or landmark. An algorithm is then used to compute a warp function that interpolates the feature correspondence across the input images.

There are several image morphing algorithms in common use. They differ in the manner in which features are specified and warp functions are generated. In mesh warping [1], bicubic spline interpolation is used to compute a warp function from the correspondence of mesh points. In field morphing [2], pairs of line segments are used for feature specification, and weighted averaging is used to determine a warp function. More recently, thin-plate splines [8, 4] and multilevel free-form deformations [3] have been used to compute warp functions from selected point-to-point correspondences. In this paper, we use the multilevel free-form deformation algorithm presented in [3] to generate warp functions between two input images.
Although any existing warp generation method could be chosen, we have selected this algorithm because it efficiently generates C²-continuous and one-to-one warp functions. The one-to-one property guarantees that the distorted image does not fold back upon itself.

A warp function represents a mapping between all points in two images. In this work, we save the warp functions in binary files to reduce run-time computation. A warp function for an M × N image is stored as an M × N array of target x- and y-coordinates for all source pixels. Once feature correspondence between two images I_i and I_j is established, we can derive the warp functions in both directions, i.e., W_ij and W_ji. Therefore, two M × N arrays of target coordinates are stored for each pair of input images for which feature correspondence is established. Fig. 9 depicts the warp generation process. In Figs. 9(a) and (b), the specified features are overlaid on input images I_0 and I_1. Figs. 9(c) and (d) illustrate the warp functions W_01 and W_10 generated from these features by the multilevel free-form deformation, respectively.
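The M × N array representation maps naturally onto a dense float array with two coordinate channels. A sketch of this storage scheme follows; NumPy's binary .npy format stands in for whatever binary layout the authors actually used, and the names are illustrative.

```python
import numpy as np
import os
import tempfile

def identity_warp(h, w):
    """Sampled identity warp: every source pixel maps to itself."""
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    return np.dstack([ys, xs]).astype(np.float32)

def save_warp(path, warp):
    """Store target (y, x) coordinates for all h x w source pixels."""
    np.save(path, warp.astype(np.float32))

def load_warp(path):
    return np.load(path)

# A stand-in for some W_01: shift every target half a pixel to the right.
w01 = identity_warp(4, 6)
w01[..., 1] += 0.5
path = os.path.join(tempfile.mkdtemp(), "w01.npy")
save_warp(path, w01)
restored = load_warp(path)
```

Storing coordinates as 32-bit floats keeps sub-pixel accuracy while halving the footprint of a double-precision array, which matters once 2n such arrays are kept resident.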

Figure 9: Specified features and the resulting warp functions. (a) I_0 with features; (b) I_1 with features; (c) warp W_01; (d) warp W_10

3.2 Warp function propagation

The warp functions between two images can be derived by specifying their feature correspondence. Multiple images, however, require warp functions to be determined from each image to every other image. This exacerbates the already tedious and cumbersome operation of specifying feature correspondence. For instance, n input images require feature correspondence to be established among n(n−1)/2 pairs of images. We now address the problem of minimizing this feature specification effort.

Consider a directed graph G with n vertices. Each vertex v_i corresponds to input image I_i, and there is an edge e_ij from v_i to v_j if warp function W_ij from I_i to I_j has been derived. To minimize the feature specification effort, we first select (n−1) pairs of images such that the associated edges constitute a connected graph G. We specify the feature correspondence between the selected image pairs to obtain the warp functions between them. These warp functions can then be propagated to derive the remaining warp functions for all other image pairs. The propagation is done in the same manner as computing the transitive closure of graph G. The connectivity of G guarantees that warp functions are determined for all pairs of images after propagation.

To determine an unknown warp function W_ij, we traverse G to find any vertex v_k that is shared by existing edges e_ik and e_kj. If such a vertex v_k can be found, we update G to include edge e_ij and define W_ij as the composite function W_kj ∘ W_ik. When there are several such v_k, the warp functions composed through each v_k are computed and averaged together. If there is no v_k connecting v_i and v_j, W_ij remains unknown and e_ij is not added to G. This procedure is iterated for all unknown warp functions, and the iteration is repeated until all warp functions are determined. If W_ij remains unknown in one iteration, it will be determined in a later iteration as G gets updated. From the connectivity of G, at most (n−2) iterations are required to resolve every unknown warp function. With the warp propagation approach, the user is required to specify feature correspondences for only (n−1) pairs of images. This is far less effort than considering all n(n−1)/2 pairs.

Fig. 10 shows an example of warp propagation. Fig. 10(a) illustrates warp function W_02, which has been derived by specifying feature correspondence between input images I_0 and I_2. To determine warp function W_12 from I_1 to I_2, we compose W_10 and W_02, which are shown in Fig. 9(d) and Fig. 10(a), respectively. Fig. 10(b) illustrates the resulting W_12 = W_02 ∘ W_10. Notice that the left and right sides of the hair are widened in W_12, while they are made narrower in W_02.

Figure 10: Warp propagation example. (a) warp W_02; (b) warp W_12 = W_02 ∘ W_10

3.3 Warp functions for the central image

We now consider how to derive the warp functions between the central image I_C and the input images I_i. I_C is the uniform inbetween image corresponding to the blending vector (1/n, 1/n, ..., 1/n). Hence, from the basic metamorphosis framework in Section 2.2, the warp functions W_iC from I_i to I_C are straightforward to compute: W_iC(p) = Σ_{j=1}^{n} W_ij(p) / n for each point p in I_i.

Computing warp function W_Ci from I_C back to I_i, however, is not straightforward. We determine W_Ci as the inverse function of W_iC. Each point p in I_i is mapped to a point q in I_C by W_iC. The inverse of W_iC should map each q back to p. Hence, the corresponding points q for the pixels p in I_i provide W_Ci with scattered positional constraints, W_Ci(q) = p.
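Both steps are mechanical once the W_ij are stored as arrays: averaging gives W_iC, and pairing each target q with its source p yields the scattered constraints for the inverse. A small sketch follows (illustrative names; the scattered-data solver itself, multilevel free-form deformation in the paper, is not shown).

```python
import numpy as np

def warp_to_central(warps_from_i):
    """W_iC(p) = (1/n) * sum_j W_ij(p), with the identity W_ii included."""
    return np.mean(np.stack(warps_from_i), axis=0)

def inverse_constraints(w_ic):
    """Scattered constraints W_Ci(q) = p, ready for a scattered-data
    warp solver. Returns (q, p) as two (H*W, 2) coordinate arrays."""
    H, W, _ = w_ic.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    p = np.stack([ys, xs], axis=-1).reshape(-1, 2).astype(float)
    q = w_ic.reshape(-1, 2)
    return q, p

# The identity and a uniform +2 shift in x average to a +1 shift.
H, W = 2, 3
ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
ident = np.stack([ys, xs], axis=-1).astype(float)
shift2 = ident.copy()
shift2[..., 1] += 2.0
w_ic = warp_to_central([ident, shift2])
q, p = inverse_constraints(w_ic)
```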
The multilevel free-form deformation technique, which has been used to derive warp functions, can also be applied to these constraints to compute W_Ci. When the warp functions W_ij are one-to-one, it is not mathematically clear that their average W_iC is also one-to-one. Conceptually, though, we can expect W_iC to be one-to-one because it is the warp function that would be generated by moving the features in I_i to their averaged positions. In the worst case, when W_iC is not one-to-one, the positional constraints W_Ci(q) = p may supply two different positions for a point q. In that case, we ignore one of the two positions and apply the multilevel free-form deformation to the remaining constraints.

Fig. 11 shows examples of warp functions between input images and the central image. Fig.
11(a) illustrates warp function W_0C from I_0 to I_C, which is the average of the identity function, W_01 in Fig. 9(c), and W_02 in Fig. 10(a). In Fig. 11(b), W_C0 has been derived as the inverse function of W_0C by the multilevel free-form deformation technique.

(a) warp W_0C (b) warp W_C0

Figure 11: Warp functions between I_0 and I_C

3.4 Warp functions for preprocessing and postprocessing

Warp functions P_i for preprocessing are derived in a manner similar to those between two images, which was presented in Section 3.1. We overlay primitives such as line segments and curves onto image I_i as in Section 3.1. In this case, however, the primitives are used to select the parts of I_i to be distorted, instead of specifying feature correspondence with another image. P_i is defined by moving the primitives to the desired distorted positions of the selected parts, and it is computed by applying the multilevel free-form deformation technique to the displacements. Fig. 12 shows the primitive and its movement specified on input image I_2 to define warp function P_2 in Fig. 6. The primitive has been moved to the lower right to change the positions of the mouth and chin.

(a) original positions (b) new positions

Figure 12: Specified primitives for preprocessing

Warp function Q for postprocessing can be derived in the same manner as P_i. In this case, primitives are overlaid onto inbetween image I′ in Fig. 7 to select parts to be distorted towards
the final inbetween image I. The overlaid primitives are moved to define Q, and multilevel free-form deformation is used to compute Q from the movements. I′ is constructed only to allow the specification of features and their movements; it is not used in the process of inbetween image generation. The postprocessing operation is illustrated in Fig. 13, whereby the primitive specified on I′ is scaled down horizontally to make the face narrower.

(a) original positions (b) new positions

Figure 13: Specified primitives for postprocessing

4 BLENDING FUNCTION GENERATION

This section addresses the problem of generating a blending function B_C defined on the central image I_C. A blending vector is sufficient to determine a B_C that generates a uniform inbetween image. A nonuniform inbetween image, however, can be specified by assigning different blending rates to selected parts of various input images. The corresponding B_C is derived by gathering all the blending rates onto I_C and applying scattered data interpolation to them.

4.1 Uniform inbetween image

To generate a uniform inbetween image from n input images, a user is required to specify a blending vector b = (b_1, b_2, ..., b_n), subject to the constraints b_i ≥ 0 and Σ_{i=1}^{n} b_i = 1. If these constraints are violated, we enforce them by clipping negative values to zero and dividing each b_i by Σ_{j=1}^{n} b_j. The blending function B_C is then a constant function whose value at every point in I_C is the resulting blending vector.

4.2 Nonuniform inbetween image

To generate a nonuniform inbetween image I, a user assigns a real value b_i ∈ [0, 1] to a selected region R_i of input image I_i, for one or more of the input images. The value b_i assigned to R_i determines the influence of I_i on the corresponding part R of inbetween image I. When b_i approaches 1, the colors and shapes of the features in R_i dominate those in R. Conversely, when b_i approaches 0, the influence of R_i on R diminishes.

Figs. 14(a), (b), and (c) show the polygons used to select regions in I_0, I_1, and I_2 for generating I in Fig. 3. All points inside the polygons in Figs. 14(a), (b), and (c) have been assigned the value 1.0. A blending function B_C is generated by first projecting the values b_i onto I_C. This can be done by mapping points in R_i onto I_C using warp functions W_iC. Fig. 14(d) shows the projection of the
selected parts in Figs. 14(a), (b), and (c) onto I_C.

(a) (b) (c) (d)

Figure 14: Selected parts on input images and their projections onto the central image

Let (b_1, b_2, ..., b_n) be an n-tuple representing the projected values of b_i at a point in I_C. The i-th element of this n-tuple is defined only in the projection of R_i on I_C. Since the user is not required to specify b_i for all I_i, some of the b_i may be undefined in the n-tuple. Let D and U denote the sets of defined and undefined elements b_i in the n-tuple, respectively, and let s be the sum of the defined values in D. There are three cases to consider: s > 1; s ≤ 1 and U empty; and s ≤ 1 and U nonempty. If s > 1, we assign zero to the undefined values and scale down the elements in D so that s = 1. If s ≤ 1 and U is empty, we scale up the elements in D so that s = 1. Otherwise, we assign (1 - s)/k to each element in U, where k is the number of elements in U.

Normalizing the n-tuples of the projected values yields blending vectors for the points in I_C that correspond to the selected parts R_i in I_i. These blending vectors can then be propagated to all points in I_C by scattered data interpolation. An interpolating surface through the i-th coordinates of these vectors is constructed to determine the element b_i of a blending vector b at every point in I_C. Although any scattered data interpolation method may be chosen, we use multilevel B-spline interpolation, a fast technique for generating a C²-continuous surface. It was first introduced by the authors to derive transition functions in [3]. After constructing n surfaces, we have an n-tuple at each point in I_C. Since each surface is generated independently, the sum of the coordinates of the n-tuple is not necessarily one. In that case, we scale them to force their sum to one. The resulting n-tuples at all points in I_C define a
blending function B_C that satisfies the user-specified blending constraints.

Fig. 15 illustrates the surfaces constructed to determine the B_C used in Fig. 3. The heights of the surfaces in Figs. 15(a), (b), and (c) represent b_0, b_1, and b_2 of blending vector b at the points in I_C, respectively. In Fig. 15(b), b_1 is 1.0 at the points corresponding to the eyes and nose in I_C, satisfying the requirement specified in Fig. 14(b). It is 0.0 around the hair and around the mouth and chin, due to the value 1.0 assigned to those parts in Figs. 14(a) and (c). Figs. 15(a) and (c) likewise reflect the requirements for b_0 and b_2 specified in Fig. 14.

(a) (b) (c)

Figure 15: Constructed surfaces for a blending function

5 IMPLEMENTATION

This section describes the implementation of the polymorph framework. We also present the performance of the implemented system and several metamorphosis examples.

5.1 Polymorph system

The polymorph system consists of three modules. The first module is an image morphing system that considers two input images at a time. It requires the user to establish feature correspondence between two input images, and it generates warp functions between them. Although any morphing system may be selected, we adopt the one presented in [3] because it facilitates flexible point-to-point correspondences and produces one-to-one warp functions that avoid undesirable foldovers. The first module is applied to the (n - 1) pairs of input images that correspond to the edges selected to constitute a connected graph G. The generated warp functions are sampled at each pixel and stored in binary files.

The second module is the warp propagation system. It first reads the binary files of the warp functions associated with the selected edges in graph G. Those warp functions are then propagated to derive the n(n - 1) warp functions W_ij among all n input images I_i. Finally, the second module computes the 2n warp functions in both directions between the central image I_C and all I_i.
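The blending-vector rules of Sections 4.1 and 4.2 above can be sketched as follows. This is an illustrative sketch with our own function names; None marks an undefined element of a projected n-tuple, and the degenerate case in which every defined value is zero is not handled.

```python
def normalize_uniform(b):
    """Section 4.1: clip negatives to zero, then rescale so sum(b) = 1."""
    b = [max(x, 0.0) for x in b]
    s = sum(b)
    return [x / s for x in b]

def resolve_projected(b):
    """Section 4.2: turn a projected n-tuple (None = undefined) into a
    blending vector, following the three cases on s, the sum of the
    defined values, and U, the set of undefined entries."""
    s = sum(x for x in b if x is not None)
    k = sum(1 for x in b if x is None)
    if s > 1:                          # zero the undefined, scale D down
        return [0.0 if x is None else x / s for x in b]
    if k == 0:                         # U empty: scale D up to sum 1
        return [x / s for x in b]
    return [(1 - s) / k if x is None else x for x in b]  # spread 1-s over U

print(normalize_uniform([0.5, -0.25, 0.5]))   # [0.5, 0.0, 0.5]
print(resolve_projected([1.0, None, 1.0]))    # [0.5, 0.0, 0.5]
print(resolve_projected([0.25, None, 0.25]))  # [0.25, 0.5, 0.25]
```

The per-point rescaling applied after the n interpolating surfaces are built is the k == 0 branch of resolve_projected, since at that stage every element is defined.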
The resulting warp functions, W_iC and W_Ci, are stored in binary files and used as the input of the third module, together with all I_i.

The third module is the inbetween image generation system. It allows the user to control the blending characteristics and the preprocessing and postprocessing effects in an inbetween image. To determine the blending characteristics of a uniform inbetween image, the user is required to provide a blending vector. For a nonuniform inbetween image, the user selects regions in the I_i and assigns them blending values. If preprocessing and postprocessing are desired, they are achieved by specifying primitives on images I_i and I′, respectively, and moving them to new positions. Once
the user input is given, the third module first computes blending function B_C and warp functions P_i and Q. An inbetween image is then generated by applying the metamorphosis framework presented in Section 2.5 to I_i, W_iC, and W_Ci.

The modules in the polymorph system are independent of each other and communicate by way of binary files storing warp functions. Any image morphing system can be used as the first module if it can save a derived warp function to a binary file. The second module does not need the input images and manipulates only the binary files passed from the first module. The purpose of the first and second modules together is to compute warp functions W_iC and W_Ci. Given the input images, these modules are run only once, and the derived W_iC and W_Ci are passed to the third module. The user then runs the third module repeatedly to generate several inbetween images by specifying different B_C, P_i, and Q.

5.2 Performance

We now discuss the performance of the polymorph system in terms of the examples shown in Section 2. The resolution of the input images is , and the run time was measured on a SUN SPARC10 workstation. The first module was applied to input image pairs (I_0, I_1) and (I_0, I_2) to derive warp functions W_01, W_10, W_02, and W_20. The second module ran without user interaction and computed W_iC and W_Ci in 59 seconds. In the third module, it took 7 seconds to obtain the uniform inbetween image in Fig. 2 from the blending vector b = (1/3, 1/3, 1/3). A nonuniform inbetween image requires more computation than a uniform one because three surfaces must be constructed to determine blending function B_C. It took 38 seconds to derive the inbetween image in Fig. 3 from the user input shown in Fig. 14. When preprocessing and postprocessing are used, warp functions P_i and Q must also be computed. It took 43 seconds to derive the inbetween image in Fig. 8, which includes the preprocessing and postprocessing effects defined by Fig.
12 and Fig. 13.

6 METAMORPHOSIS EXAMPLES

The top row of Fig. 16 shows the input images, I_0, I_1, and I_2, used in Section 2. We selected three groups of features in these images and assigned them blending value b_i = 1 to generate inbetween images. The feature groups consisted of the hair, the eyes and nose, and the mouth and chin. Each feature group was selected in a different input image. For instance, Fig. 14 shows the feature groups selected to generate the leftmost inbetween image in the middle row of Fig. 16. Notice that this inbetween image is the same as I′ in Fig. 6. The middle and bottom rows of Fig. 16 show the inbetween images resulting from all possible combinations of selecting those feature groups.

Fig. 17 shows the changes in inbetween images when we vary the blending values assigned to selected feature regions in the input images. For example, consider the middle image in the bottom row of Fig. 16. That image was derived by selecting the mouth and chin, hair, and eyes and nose from input images I_0, I_1, and I_2, respectively. Blending value b_i = 1 was assigned to each selected feature group. In Fig. 17, inbetween images were generated by changing b_i to 0.75, 0.5, 0.25, and 0.0, from left to right. Note that decreasing an assigned blending value diminishes the influence of the selected feature on the inbetween image. For instance, with b_i = 0, the selected features in all the input images vanish in the rightmost inbetween image in Fig. 17.
Figure 16: Input images (top row) and combinations of their hair, eyes and nose, and mouth and chin (middle and bottom rows)

Figure 17: Effects of varying blending values with the same selected regions
In polymorph, an inbetween image is represented by a point in the simplex whose vertices correspond to the input images. We can generate an animation among the input images by deriving inbetween images at points that constitute a path in the simplex. Fig. 18 shows a metamorphosis sequence among the input images in Fig. 16. The inbetween images were obtained at the sample points on the circle inside the triangle shown in Fig. 19. The top-left image in Fig. 18 corresponds to point p_0, and the other images, read from left to right and top to bottom, correspond to the points on the circle in counterclockwise order. Every inbetween image blends the characteristics of all the input images at once. The blending is determined by the position of the corresponding point relative to the vertices of the triangle. For example, input image I_0 dominates the features in the top-left inbetween image in Fig. 18 because point p_0 is close to the vertex of the triangle corresponding to I_0.

Figure 18: A metamorphosis sequence among three images

Fig. 20 shows metamorphosis examples from four input images. The input images are given in the top row. Fig. 20(e) is the central image, which is a uniform inbetween image generated by
the blending vector (1/4, 1/4, 1/4, 1/4).

Figure 19: The path and sample points for the metamorphosis sequence

In Fig. 20(f), the hair, eyes and nose, mouth and chin, and clothing were derived from Figs. 20(d), (a), (b), and (c), respectively. We rotated and enlarged the eyes and nose in Fig. 20(a) in the preprocessing step to match them with the rest of the face in Fig. 20(f). The eyes and nose in Fig. 20(g) were generated by selecting those in Figs. 20(b) and (d) and assigning them blending value b_i = 0.5. The hair and clothing in Fig. 20(g) were derived from Figs. 20(c) and (d), respectively. In Fig. 20(h), the hair and clothing were retained from Fig. 20(a). The eyes, nose, and mouth were blended from Figs. 20(b), (c), and (d) with b_i = 1/3. The resulting image resembles a man with a woman's hair style and clothing.

(a) (b) (c) (d) (e) (f) (g) (h)

Figure 20: Input images (top row) and inbetween images from them (bottom row)
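The circle of sample points in Fig. 19 can be reproduced by sampling a circle about the centroid of a triangle and converting each sample to barycentric coordinates, which then serve directly as blending vectors. The sketch below is only illustrative; the vertex layout, radius, and function name are our own choices.

```python
import numpy as np

def circle_blend_path(num_samples, radius=0.25):
    """Blending vectors sampled along a circle inside the I_0-I_1-I_2 triangle."""
    # Equilateral triangle with one vertex per input image.
    V = np.array([[0.0, 0.0],                  # I_0
                  [1.0, 0.0],                  # I_1
                  [0.5, np.sqrt(3.0) / 2.0]])  # I_2
    centroid = V.mean(axis=0)                  # centroid = uniform blend
    A = np.vstack([V.T, np.ones(3)])           # rows: x, y, 1 (affine system)
    vectors = []
    for t in np.linspace(0.0, 2.0 * np.pi, num_samples, endpoint=False):
        p = centroid + radius * np.array([np.cos(t), np.sin(t)])
        # Solve p = b0*V0 + b1*V1 + b2*V2 with b0 + b1 + b2 = 1.
        b = np.linalg.solve(A, np.append(p, 1.0))
        vectors.append(b)
    return vectors

path = circle_blend_path(16)
# Each b is a valid blending vector: nonnegative and summing to one.
ok = all(abs(b.sum() - 1.0) < 1e-9 and (b > -1e-9).all() for b in path)
```

With radius 0, every sample collapses to the uniform vector (1/3, 1/3, 1/3); the chosen radius 0.25 keeps the circle inside the triangle (whose inradius is about 0.289), so all blending vectors stay nonnegative.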
7 DISCUSSION

In this section, we discuss the application of polymorph to feature-based image composition and extensions of the implemented system.

7.1 Application

Polymorph is ideally suited for image composition applications in which elements are seamlessly blended from two or more images. The traditional view of image composition is essentially one of generalized cut-and-paste: we cut out a region of interest in a foreground image and identify where it should be pasted into the background image. Composition theory permits several variations in blending, particularly for removing hard edges. Although image composition is widely used to embellish images, current methods are limited in several respects. First, composition is generally a binary operation restricted to only two images at a time, i.e., the foreground and background elements. In addition, geometric manipulation of the elements is not effectively integrated; it is generally handled independently of the blending operations.

The examples above have demonstrated the use of polymorph for image composition. We have extended the traditional cut-and-paste approach to effectively integrate geometric manipulation and blending. A composite image is treated as a metamorphosis between selected regions of several input images. For example, consider the regions selected in Fig. 14. Those regions are seamlessly blended together with respect to geometry and color to produce the inbetween image in Fig. 6. That image would otherwise be considerably more cumbersome to produce using conventional image composition techniques.

7.2 Extensions

In the polymorph system, we use warp propagation to obtain warp functions W_ij for all pairs of input images. To enable the propagation, we need to derive the warp functions associated with the edges selected to make graph G connected. The warp functions associated with the selected edges in G can each be computed independently, by different means.
Different feature sets can be specified for different pairs of input images, in order to apply different warp generation algorithms. This allows the reuse of feature correspondences previously established for a different application, such as morphing between two images. Also, simple transformations such as an affine mapping may be used to represent the warp functions between an image pair when appropriate.

Warp functions W_ij between input images are derived for the purpose of computing warp functions W_iC and W_Ci. Suppose that the same set of features has been specified for all input images. For example, given face images, we can specify the eyes, nose, mouth, ears, and profile of each input face I_i as its feature set F_i. In this case, W_iC and W_Ci can be computed directly, without deriving W_ij. That is, we compute the central feature set F_C by averaging the positions of the feature primitives in the F_i. We can then derive W_iC and W_Ci by applying a warp generation algorithm to the correspondence between F_i and F_C in both directions.

With the same set of features on the input images, the warp functions W_i for a uniform inbetween image can be derived in the same manner as W_iC. Given a blending vector, an inbetween feature set F is derived by weighted averaging of the feature sets F_i of the input images. W_i can then be computed by a warp generation algorithm applied to F_i and F. With this approach, we do not need to maintain

More information

L1 - Introduction. Contents. Introduction of CAD/CAM system Components of CAD/CAM systems Basic concepts of graphics programming

L1 - Introduction. Contents. Introduction of CAD/CAM system Components of CAD/CAM systems Basic concepts of graphics programming L1 - Introduction Contents Introduction of CAD/CAM system Components of CAD/CAM systems Basic concepts of graphics programming 1 Definitions Computer-Aided Design (CAD) The technology concerned with the

More information

4.4 3D Shape Interpolation - changing one 3D object into another

4.4 3D Shape Interpolation - changing one 3D object into another 4.4 3D Shape Interpolation - changing one 3D object into another 1 Two categories: surface based approach (2 nd case) volume based approach (1 st case) Turk/O Brien 2 Surface based approach (2nd case)

More information

Rasterization. COMP 575/770 Spring 2013

Rasterization. COMP 575/770 Spring 2013 Rasterization COMP 575/770 Spring 2013 The Rasterization Pipeline you are here APPLICATION COMMAND STREAM 3D transformations; shading VERTEX PROCESSING TRANSFORMED GEOMETRY conversion of primitives to

More information

(Refer Slide Time: 00:04:20)

(Refer Slide Time: 00:04:20) Computer Graphics Prof. Sukhendu Das Dept. of Computer Science and Engineering Indian Institute of Technology, Madras Lecture 8 Three Dimensional Graphics Welcome back all of you to the lectures in Computer

More information

This work is about a new method for generating diffusion curve style images. Although this topic is dealing with non-photorealistic rendering, as you

This work is about a new method for generating diffusion curve style images. Although this topic is dealing with non-photorealistic rendering, as you This work is about a new method for generating diffusion curve style images. Although this topic is dealing with non-photorealistic rendering, as you will see our underlying solution is based on two-dimensional

More information

08 - Designing Approximating Curves

08 - Designing Approximating Curves 08 - Designing Approximating Curves Acknowledgement: Olga Sorkine-Hornung, Alexander Sorkine-Hornung, Ilya Baran Last time Interpolating curves Monomials Lagrange Hermite Different control types Polynomials

More information

Image Morphing. The user is responsible for defining correspondences between features Very popular technique. since Michael Jackson s clips

Image Morphing. The user is responsible for defining correspondences between features Very popular technique. since Michael Jackson s clips Image Morphing Image Morphing Image Morphing Image Morphing The user is responsible for defining correspondences between features Very popular technique since Michael Jackson s clips Morphing Coordinate

More information

Intrinsic Morphing of Compatible Triangulations. VITALY SURAZHSKY CRAIG GOTSMAN

Intrinsic Morphing of Compatible Triangulations. VITALY SURAZHSKY CRAIG GOTSMAN International Journal of Shape Modeling Vol. 9, No. 2 (2003) 191 201 c World Scientific Publishing Company Intrinsic Morphing of Compatible Triangulations VITALY SURAZHSKY vitus@cs.technion.ac.il CRAIG

More information

Computer Animation Visualization. Lecture 5. Facial animation

Computer Animation Visualization. Lecture 5. Facial animation Computer Animation Visualization Lecture 5 Facial animation Taku Komura Facial Animation The face is deformable Need to decide how all the vertices on the surface shall move Manually create them Muscle-based

More information

An Introduction to B-Spline Curves

An Introduction to B-Spline Curves An Introduction to B-Spline Curves Thomas W. Sederberg March 14, 2005 1 B-Spline Curves Most shapes are simply too complicated to define using a single Bézier curve. A spline curve is a sequence of curve

More information

(Refer Slide Time: 00:02:24 min)

(Refer Slide Time: 00:02:24 min) CAD / CAM Prof. Dr. P. V. Madhusudhan Rao Department of Mechanical Engineering Indian Institute of Technology, Delhi Lecture No. # 9 Parametric Surfaces II So these days, we are discussing the subject

More information

The aim is to find an average between two objects Not an average of two images of objects but an image of the average object!

The aim is to find an average between two objects Not an average of two images of objects but an image of the average object! The aim is to find an average between two objects Not an average of two images of objects but an image of the average object! How can we make a smooth transition in time? Do a weighted average over time

More information

Computer Graphics Prof. Sukhendu Das Dept. of Computer Science and Engineering Indian Institute of Technology, Madras Lecture - 24 Solid Modelling

Computer Graphics Prof. Sukhendu Das Dept. of Computer Science and Engineering Indian Institute of Technology, Madras Lecture - 24 Solid Modelling Computer Graphics Prof. Sukhendu Das Dept. of Computer Science and Engineering Indian Institute of Technology, Madras Lecture - 24 Solid Modelling Welcome to the lectures on computer graphics. We have

More information

DATA MODELS IN GIS. Prachi Misra Sahoo I.A.S.R.I., New Delhi

DATA MODELS IN GIS. Prachi Misra Sahoo I.A.S.R.I., New Delhi DATA MODELS IN GIS Prachi Misra Sahoo I.A.S.R.I., New Delhi -110012 1. Introduction GIS depicts the real world through models involving geometry, attributes, relations, and data quality. Here the realization

More information

Image Warping. Srikumar Ramalingam School of Computing University of Utah. [Slides borrowed from Ross Whitaker] 1

Image Warping. Srikumar Ramalingam School of Computing University of Utah. [Slides borrowed from Ross Whitaker] 1 Image Warping Srikumar Ramalingam School of Computing University of Utah [Slides borrowed from Ross Whitaker] 1 Geom Trans: Distortion From Optics Barrel Distortion Pincushion Distortion Straight lines

More information

Free-Form Shape Optimization using CAD Models

Free-Form Shape Optimization using CAD Models Free-Form Shape Optimization using CAD Models D. Baumgärtner 1, M. Breitenberger 1, K.-U. Bletzinger 1 1 Lehrstuhl für Statik, Technische Universität München (TUM), Arcisstraße 21, D-80333 München 1 Motivation

More information

Chapter 3. Sukhwinder Singh

Chapter 3. Sukhwinder Singh Chapter 3 Sukhwinder Singh PIXEL ADDRESSING AND OBJECT GEOMETRY Object descriptions are given in a world reference frame, chosen to suit a particular application, and input world coordinates are ultimately

More information

Geometry. Prof. George Wolberg Dept. of Computer Science City College of New York

Geometry. Prof. George Wolberg Dept. of Computer Science City College of New York Geometry Prof. George Wolberg Dept. of Computer Science City College of New York Objectives Introduce the elements of geometry -Scalars - Vectors - Points Develop mathematical operations among them in

More information

FACES OF CONVEX SETS

FACES OF CONVEX SETS FACES OF CONVEX SETS VERA ROSHCHINA Abstract. We remind the basic definitions of faces of convex sets and their basic properties. For more details see the classic references [1, 2] and [4] for polytopes.

More information

DISCRETE DOMAIN REPRESENTATION FOR SHAPE CONCEPTUALIZATION

DISCRETE DOMAIN REPRESENTATION FOR SHAPE CONCEPTUALIZATION DISCRETE DOMAIN REPRESENTATION FOR SHAPE CONCEPTUALIZATION Zoltán Rusák, Imre Horváth, György Kuczogi, Joris S.M. Vergeest, Johan Jansson Department of Design Engineering Delft University of Technology

More information

Use of Shape Deformation to Seamlessly Stitch Historical Document Images

Use of Shape Deformation to Seamlessly Stitch Historical Document Images Use of Shape Deformation to Seamlessly Stitch Historical Document Images Wei Liu Wei Fan Li Chen Jun Sun Satoshi Naoi In China, efforts are being made to preserve historical documents in the form of digital

More information

Chapter 5. Projections and Rendering

Chapter 5. Projections and Rendering Chapter 5 Projections and Rendering Topics: Perspective Projections The rendering pipeline In order to view manipulate and view a graphics object we must find ways of storing it a computer-compatible way.

More information

This blog addresses the question: how do we determine the intersection of two circles in the Cartesian plane?

This blog addresses the question: how do we determine the intersection of two circles in the Cartesian plane? Intersecting Circles This blog addresses the question: how do we determine the intersection of two circles in the Cartesian plane? This is a problem that a programmer might have to solve, for example,

More information

Character Recognition

Character Recognition Character Recognition 5.1 INTRODUCTION Recognition is one of the important steps in image processing. There are different methods such as Histogram method, Hough transformation, Neural computing approaches

More information

1.2 Numerical Solutions of Flow Problems

1.2 Numerical Solutions of Flow Problems 1.2 Numerical Solutions of Flow Problems DIFFERENTIAL EQUATIONS OF MOTION FOR A SIMPLIFIED FLOW PROBLEM Continuity equation for incompressible flow: 0 Momentum (Navier-Stokes) equations for a Newtonian

More information

G 2 Interpolation for Polar Surfaces

G 2 Interpolation for Polar Surfaces 1 G 2 Interpolation for Polar Surfaces Jianzhong Wang 1, Fuhua Cheng 2,3 1 University of Kentucky, jwangf@uky.edu 2 University of Kentucky, cheng@cs.uky.edu 3 National Tsinhua University ABSTRACT In this

More information

animation computer graphics animation 2009 fabio pellacini 1 animation shape specification as a function of time

animation computer graphics animation 2009 fabio pellacini 1 animation shape specification as a function of time animation computer graphics animation 2009 fabio pellacini 1 animation shape specification as a function of time computer graphics animation 2009 fabio pellacini 2 animation representation many ways to

More information

animation computer graphics animation 2009 fabio pellacini 1

animation computer graphics animation 2009 fabio pellacini 1 animation computer graphics animation 2009 fabio pellacini 1 animation shape specification as a function of time computer graphics animation 2009 fabio pellacini 2 animation representation many ways to

More information

Using Semi-Regular 4 8 Meshes for Subdivision Surfaces

Using Semi-Regular 4 8 Meshes for Subdivision Surfaces Using Semi-Regular 8 Meshes for Subdivision Surfaces Luiz Velho IMPA Instituto de Matemática Pura e Aplicada Abstract. Semi-regular 8 meshes are refinable triangulated quadrangulations. They provide a

More information

Computer Graphics. The Two-Dimensional Viewing. Somsak Walairacht, Computer Engineering, KMITL

Computer Graphics. The Two-Dimensional Viewing. Somsak Walairacht, Computer Engineering, KMITL Computer Graphics Chapter 6 The Two-Dimensional Viewing Somsak Walairacht, Computer Engineering, KMITL Outline The Two-Dimensional Viewing Pipeline The Clipping Window Normalization and Viewport Transformations

More information

Graphics (Output) Primitives. Chapters 3 & 4

Graphics (Output) Primitives. Chapters 3 & 4 Graphics (Output) Primitives Chapters 3 & 4 Graphic Output and Input Pipeline Scan conversion converts primitives such as lines, circles, etc. into pixel values geometric description a finite scene area

More information

TEXTURE OVERLAY ONTO NON-RIGID SURFACE USING COMMODITY DEPTH CAMERA

TEXTURE OVERLAY ONTO NON-RIGID SURFACE USING COMMODITY DEPTH CAMERA TEXTURE OVERLAY ONTO NON-RIGID SURFACE USING COMMODITY DEPTH CAMERA Tomoki Hayashi 1, Francois de Sorbier 1 and Hideo Saito 1 1 Graduate School of Science and Technology, Keio University, 3-14-1 Hiyoshi,

More information

Computergrafik. Matthias Zwicker Universität Bern Herbst 2016

Computergrafik. Matthias Zwicker Universität Bern Herbst 2016 Computergrafik Matthias Zwicker Universität Bern Herbst 2016 Today Curves NURBS Surfaces Parametric surfaces Bilinear patch Bicubic Bézier patch Advanced surface modeling 2 Piecewise Bézier curves Each

More information

CSE528 Computer Graphics: Theory, Algorithms, and Applications

CSE528 Computer Graphics: Theory, Algorithms, and Applications CSE528 Computer Graphics: Theory, Algorithms, and Applications Hong Qin Stony Brook University (SUNY at Stony Brook) Stony Brook, New York 11794-2424 Tel: (631)632-845; Fax: (631)632-8334 qin@cs.stonybrook.edu

More information

Parallel Computation of Spherical Parameterizations for Mesh Analysis. Th. Athanasiadis and I. Fudos University of Ioannina, Greece

Parallel Computation of Spherical Parameterizations for Mesh Analysis. Th. Athanasiadis and I. Fudos University of Ioannina, Greece Parallel Computation of Spherical Parameterizations for Mesh Analysis Th. Athanasiadis and I. Fudos, Greece Introduction Mesh parameterization is a powerful geometry processing tool Applications Remeshing

More information

This group is dedicated to Modeler tools for Layout s FiberFX hair and fur system. For the Layout interface and controls see FiberFX

This group is dedicated to Modeler tools for Layout s FiberFX hair and fur system. For the Layout interface and controls see FiberFX Fiber FX Click here to expand Table of Contents... FiberFX Strand Modeler Global Controls Fiber Tab Guides Tab Random Tab Gravity Tab Tools1 Tab Tools2 Tab Options Tab Strand Tool Strand Maker This group

More information

Research Article Polygon Morphing and Its Application in Orebody Modeling

Research Article Polygon Morphing and Its Application in Orebody Modeling Mathematical Problems in Engineering Volume 212, Article ID 732365, 9 pages doi:1.1155/212/732365 Research Article Polygon Morphing and Its Application in Orebody Modeling Hacer İlhan and Haşmet Gürçay

More information

S U N G - E U I YO O N, K A I S T R E N D E R I N G F R E E LY A VA I L A B L E O N T H E I N T E R N E T

S U N G - E U I YO O N, K A I S T R E N D E R I N G F R E E LY A VA I L A B L E O N T H E I N T E R N E T S U N G - E U I YO O N, K A I S T R E N D E R I N G F R E E LY A VA I L A B L E O N T H E I N T E R N E T Copyright 2018 Sung-eui Yoon, KAIST freely available on the internet http://sglab.kaist.ac.kr/~sungeui/render

More information

Linear Algebra Part I - Linear Spaces

Linear Algebra Part I - Linear Spaces Linear Algebra Part I - Linear Spaces Simon Julier Department of Computer Science, UCL S.Julier@cs.ucl.ac.uk http://moodle.ucl.ac.uk/course/view.php?id=11547 GV01 - Mathematical Methods, Algorithms and

More information

Example Lecture 12: The Stiffness Method Prismatic Beams. Consider again the two span beam previously discussed and determine

Example Lecture 12: The Stiffness Method Prismatic Beams. Consider again the two span beam previously discussed and determine Example 1.1 Consider again the two span beam previously discussed and determine The shearing force M1 at end B of member B. The bending moment M at end B of member B. The shearing force M3 at end B of

More information

Uniform edge-c-colorings of the Archimedean Tilings

Uniform edge-c-colorings of the Archimedean Tilings Discrete & Computational Geometry manuscript No. (will be inserted by the editor) Uniform edge-c-colorings of the Archimedean Tilings Laura Asaro John Hyde Melanie Jensen Casey Mann Tyler Schroeder Received:

More information

Meshless Modeling, Animating, and Simulating Point-Based Geometry

Meshless Modeling, Animating, and Simulating Point-Based Geometry Meshless Modeling, Animating, and Simulating Point-Based Geometry Xiaohu Guo SUNY @ Stony Brook Email: xguo@cs.sunysb.edu http://www.cs.sunysb.edu/~xguo Graphics Primitives - Points The emergence of points

More information

MET71 COMPUTER AIDED DESIGN

MET71 COMPUTER AIDED DESIGN UNIT - II BRESENHAM S ALGORITHM BRESENHAM S LINE ALGORITHM Bresenham s algorithm enables the selection of optimum raster locations to represent a straight line. In this algorithm either pixels along X

More information

Geometric Transformations and Image Warping

Geometric Transformations and Image Warping Geometric Transformations and Image Warping Ross Whitaker SCI Institute, School of Computing University of Utah Univ of Utah, CS6640 2009 1 Geometric Transformations Greyscale transformations -> operate

More information

Warps, Filters, and Morph Interpolation

Warps, Filters, and Morph Interpolation Warps, Filters, and Morph Interpolation Material in this presentation is largely based on/derived from slides originally by Szeliski, Seitz and Efros Brent M. Dingle, Ph.D. 2015 Game Design and Development

More information

Preliminaries: Size Measures and Shape Coordinates

Preliminaries: Size Measures and Shape Coordinates 2 Preliminaries: Size Measures and Shape Coordinates 2.1 Configuration Space Definition 2.1 The configuration is the set of landmarks on a particular object. The configuration matrix X is the k m matrix

More information

Linear Programming and its Applications

Linear Programming and its Applications Linear Programming and its Applications Outline for Today What is linear programming (LP)? Examples Formal definition Geometric intuition Why is LP useful? A first look at LP algorithms Duality Linear

More information

Figure 1: A positive crossing

Figure 1: A positive crossing Notes on Link Universality. Rich Schwartz: In his 1991 paper, Ramsey Theorems for Knots, Links, and Spatial Graphs, Seiya Negami proved a beautiful theorem about linearly embedded complete graphs. These

More information

Parameterization with Manifolds

Parameterization with Manifolds Parameterization with Manifolds Manifold What they are Why they re difficult to use When a mesh isn t good enough Problem areas besides surface models A simple manifold Sphere, torus, plane, etc. Using

More information

Connected Components of Underlying Graphs of Halving Lines

Connected Components of Underlying Graphs of Halving Lines arxiv:1304.5658v1 [math.co] 20 Apr 2013 Connected Components of Underlying Graphs of Halving Lines Tanya Khovanova MIT November 5, 2018 Abstract Dai Yang MIT In this paper we discuss the connected components

More information

Digital Image Processing. Prof. P. K. Biswas. Department of Electronic & Electrical Communication Engineering

Digital Image Processing. Prof. P. K. Biswas. Department of Electronic & Electrical Communication Engineering Digital Image Processing Prof. P. K. Biswas Department of Electronic & Electrical Communication Engineering Indian Institute of Technology, Kharagpur Lecture - 21 Image Enhancement Frequency Domain Processing

More information

Image Warping and Morphing. Alexey Tikhonov : Computational Photography Alexei Efros, CMU, Fall 2007

Image Warping and Morphing. Alexey Tikhonov : Computational Photography Alexei Efros, CMU, Fall 2007 Image Warping and Morphing Alexey Tikhonov 15-463: Computational Photography Alexei Efros, CMU, Fall 2007 Image Warping in Biology D'Arcy Thompson http://www-groups.dcs.st-and.ac.uk/~history/miscellaneous/darcy.html

More information

Design by Subdivision

Design by Subdivision Bridges 2010: Mathematics, Music, Art, Architecture, Culture Design by Subdivision Michael Hansmeyer Department for CAAD - Institute for Technology in Architecture Swiss Federal Institute of Technology

More information

Interactive Shape Metamorphosis

Interactive Shape Metamorphosis Interactive Shape Metamorphosis David T. Chen Andrei State Department of Computer Science University of North Carolina Chapel Hill, NC 27599 David Banks Institute for Computer Applications in Science and

More information

1 The range query problem

1 The range query problem CS268: Geometric Algorithms Handout #12 Design and Analysis Original Handout #12 Stanford University Thursday, 19 May 1994 Original Lecture #12: Thursday, May 19, 1994 Topics: Range Searching with Partition

More information