Mapping Algorithms for Real-Time Control of an Avatar using Eight Sensors


Sudhanshu K. Semwal (*), Ron Hightower, Sharon Stansfield
Sandia National Laboratories, Albuquerque, NM, USA
semwal@redcloud.uccs.edu | rrhight@isrc.sandia.gov | sastans@sandia.gov
(*) On sabbatical from the University of Colorado, Colorado Springs

Abstract

In a virtual environment for small groups of interacting participants, it is important that the physical motion of each participant be replicated by synthetic human forms in real-time. Sensors on a user's body are used to drive an inverse kinematics algorithm. Such iterative algorithms for solving the general inverse kinematics problem are too slow for a real-time interactive environment. In this paper we present analytic, constant-time methods to solve the inverse kinematics problem and drive an avatar figure. Our sensor configuration has only eight sensors per participant, so the sensor data is augmented with information about natural body postures. The algorithm is fast and the resulting avatar motion approximates the actions of the participant quite well. This new analytic solution resolves a problem with an earlier iterative algorithm, which had a tendency to position the knees and elbows of the avatar in awkward and unnatural positions.

1 Introduction

In distributed virtual reality applications with multiple participants, the participants are embodied in the virtual world (Macedonia, Zyda, Pratt, and Brutzman, 1995; Stansfield, Miner, Shawver, and Rogers, 1995; Stansfield, Shawver, Rogers, and Hightower, 1995; Stansfield, 1994) as complete human forms. When these human forms mimic the actions of the participant, and become the embodiment of the participant's actions, they are called the avatar of the participant. An avatar is enslaved to the participant, as it replicates every move of the participant (Stansfield, 1994). Our virtual environments are populated by both avatars, which replicate the movement of participants, and virtual actors, whose autonomous movement is directed by computers.

The human figure of our avatar is derived from a figure developed at the Center for Human Modeling and Simulation in Pennsylvania (Badler, Phillips, and Webber, 1993). The figure is an articulated hierarchy of 69 rigid body components, whose posture is specified using a set of joint transforms. Figure 1 shows most of the joints in the figure, their names and their indices. Motion algorithms specify the position and orientation of the joints for the synthetic human figure (Badler et al., 1993; Chadwick, 1989; Thalmann and Thalmann, 1991). There are several motion techniques available to determine the position of joints (Armstrong et al., 1985; Badler et al., 1987; Bruderlin and Williams, 1995; Calvert, 1991; Granieri et al., 1995; Hodgins et al., 1995; Isaacs and Cohen, 1988; Lee and Kunii, 1989; Pueyo, 1988; Wilhelms, 1991; Semwal, 1993; Unuma et al., 1995; Witkin and Popovic, 1995).

[Figure 1: Placement of eight sensors on the skeleton, showing the joints, their names and indices. Sensor positions A-H: one on the head, one at the pelvis, one on each palm, one on each ankle, and one on the middle of each upper arm.]

Skin and clothes have been represented by both polygons and parametric surfaces (Carignan et al., 1992; Chen and Zeltzer, 1992; Graves, 1993; Komatsu, 1988; Muraki, 1991; Parke, 1982; Semwal et al., 1994; Semwal and Hallauer, 1994; Thalmann and Thalmann, 1991; Zeltzer, 1982). Recently, sensor-driven human forms (avatars) (Meyer et al., 1992) have received much attention (Stansfield, 1994; Shawver and Stansfield, 1995). For modeling an avatar in an interactive virtual environment, constant tracking and fast mapping of the participant's postures to a human form are necessary for real-time interaction (Badler and Hollick et al., 1993; Semwal et al., 1995; Tolani, 1995; Waldrop, 1995). The actions of the participant are to be tracked and mapped to the synthetic human form. Since there is a certain amount of variation in the structure of the human body from person to person, the avatar issue, that of enslaving a synthetic human form to the participant's actions, is an enormous computational and technical challenge. The purpose of this paper is to discuss a fast solution in which a small number (eight) of sensors controls a complex synthetic human form, so that the graphical human form mimics the actions of the participant wearing the sensors.

2 Avatar Formulations

There are several distinct avatar formulations for virtual environments. We discuss the following three:

a) At each timestep, recreate the pose vector (set of joint transforms) of the participant, given a number of sensor measurements for the participant.

b) At each timestep, place the end effectors of the simulated human at the position and orientation measured by sensors mounted on the participant.

c) Continuously move the human form representing an avatar so that its movement appears natural, and the movement conveys what the participant would want to appear to be doing (however one determines that).

Approaches a) and b) are especially suitable for embodied users, where the avatar simply follows the user's actions. Approach c) is currently favored for non-embodied users (who nevertheless may need an avatar representation, although the avatar may not follow the user's actions). Approach b) favors the user being able to manipulate the simulated world, while approach a) favors the user appearing to a third party to be in the same posture (e.g. it would be appropriate for a dance performance in virtual space). Combinations of these approaches may also be applicable to a virtual environment.

In addition, the first two give rise to a concern with accuracy, centered around the following question: does the human form, representing the avatar of the participant, reproduce the correct dimensions of the participant's body? If so, then a mapping of one to the other that preserves both the pose vector and the end-effector positions is possible; if not, then in general there is no such mapping. This implies either that one must develop ways of measuring the user's dimensions, or that one should develop some principles on how to redefine the exact mapping in an acceptable way.

One way of redefining that mapping is to allow approach b) above to succeed within the intersection of the reach of the user and the avatar, and at the same time to keep the joints of the avatar appearing intact. This is Penn's approach (Badler and Phillips et al., 1993) as far as the upper body is concerned. Another variant of approach b) would also limit the reach in a similar way, but would allow the joints to break apart, or overlap when necessary. This is the last approach we discuss in our paper. When the joints come apart or an overlap occurs, this approach requires some adjustments to the limb dimensions and related deformations (stretching of the skin). Most of the joint breaking occurs due to the differences between the dimensions of the participant's body and the avatar's dimensions. In addition, the sensors are placed on the surface of the human body, not at the joints (inside the human body), and that can also create this mismatch. The rigid body parts can work with the second method presented in this paper, without joint breaking, if the avatar were sized to match the user. Achieving this match in an easy-to-apply way is a research topic.

Another dimension of the problem is the degree of constraint imposed by the sensor data which must be satisfied by the avatar. These constraints clearly play a role in Penn's approach and in our approach, as discussed in the following section.

3 Avatars and Multiple Sensors

Sensors are placed on various sites of the participant's body (Figures 1 and 2).
There are a variety of position trackers available to track the participant's actions. Meyer et al. provide an excellent survey of position trackers (Meyer et al., 1992). The magnetic position trackers most commonly used in current VR research are relatively inexpensive, and they perform well in small working volumes.

[Figure 2: Placement of eight sensors using two transmitter sets, one on each side of the body; sensors 1-4 are read by transmitter 1, and sensors 5-8 by transmitter 2.]

Their accuracy tends to diminish as the transmitter-sensor distance increases. Accuracy is also affected by ferromagnetic objects in the working volume. In comparison to other position trackers, magnetic trackers have relatively low data rates, as the filtering required for the distortions in the emitted field introduces lag. In addition, ferromagnetic objects create eddy currents that distort the emitted field, causing measurement errors. The main advantage of magnetic sensors in a distributed VE is that they allow multiple sensors to share a transmitter. In addition, multiple transmitters can share the same workspace. For example, in the 3Space FASTRAK system available from Polhemus, a transmitter is shared by four sensors, and there can be up to four such transmitters. We have sixteen sensors and four transmitters in our laboratory. Figure 2 shows the eight-sensor setup, using two transmitters with four sensors per transmitter.

The sensors are strapped to the participant's body, and the input from each sensor is taken as a constraint on the related point of the avatar's body. Inverse dynamics and inverse kinematics algorithms can use the position and orientation of the sensors to place the human form so that the sensor constraints are satisfied. The inverse dynamics formulation is supposed to be more physically correct, as it also considers the weight of the limbs and the forces applied to the skeleton (Armstrong et al., 1985). However, the large number of forces which have to be correctly estimated to provide natural-looking motion for a synthetic actor are extremely difficult and computationally expensive to estimate. The inverse kinematics formulation also solves simultaneous equations, and is iterative.

Using four sensors, Badler et al. have developed an inverse kinematics solution for driving a Jack(R) avatar for real-time interaction. Sensors are put on the back, the head, and the two hands. The pelvic sensor is necessary for positioning the body of the avatar, the head sensor is used to mimic the head movement, and the hand sensors are needed for the hand movements. As the human skeleton used for the Jack(R) figure has a large number (134) of degrees of freedom, with 69 rigid segments and 73 joints (Granieri et al., 1995), there are several redundant solutions (poses) available for satisfying a given set of constraints when the inverse kinematics algorithm is used (Badler and Phillips et al., 1993).

It is sometimes difficult to choose the right pose out of the several available at a given time. Many times, inverse kinematics algorithms select a pose which is similar to the previous pose. Therefore, once an incorrect pose (not similar to that of the participant) has been selected, it can be maintained for some time during the simulation. That is why using an inverse kinematics algorithm can sometimes lead to maintaining locked elbows in the simulation.

When we considered the complexity of modeling the human body (Clemente, 1987; Kapit and Elson, 1979) and of algorithmically generating the motion, our goal was that both real-time mapping and realistic movements be achieved by the avatar of the participant. Real-time mapping and realistic movements are two contradictory requirements. Faster mapping of actions would allow more realistic synthetic models to be used in a virtual environment. For example, Tolani developed a closed form solution for the Jack(R) figure's arms by restricting one out of seven degrees of freedom (Tolani, 1995). Recently, Waldrop developed a solution for the upper arm using three sensors (Waldrop, 1995).

4 Avatars in a Distributed Environment

A shared virtual environment is being developed at Sandia National Laboratories for situational and close-quarters training. The system is a distributed and shared virtual environment (VE) (Shawver and Stansfield, 1995). The distributed system runs on a number of SGI platforms across a local area network using multicasting. Graphical views of the virtual environment are generated by concurrent instances of our display program, called VR Station. A detailed explanation of VR Station and the distributed system can be found in (Stansfield, 1994; Stansfield and Miner et al., 1995; Stansfield and Shawver et al., 1995).

The VR Station program (Figure 3) is used to generate each participant's view of the virtual environment, as well as any additional third-party views. Each VR Station process runs on a separate SGI computer equipped with a Reality Engine. The geometric hierarchy of the virtual environment, including the human figure of the avatar(s), is a data structure that is known to all instances of the VR Station process. Of the many transforms in the hierarchy data structure which specify the location of objects in the virtual environment, a small subset are designated as animated transforms, whose values are continually updated from the network via multicasting. In particular, the animated transforms of the joint orientations defining the posture of an avatar are multicast over the network as 4x4 transforms to all the computers running VR Station processes. Since we have both inches and centimeters as units for different coordinate systems, conversion from inches to centimeters is accounted for.

The animated transforms for an avatar are generated by a separate avatar server, using data from the sensors worn by the participants. Figure 4 shows the configuration used to implement the algorithms presented in this paper. Sensor data is sent across the network to any process that needs the information. Each avatar server looks for the sensor data from the appropriate participant, computes joint transforms that correspond to the sensor data, and then sends the results to all the VR Station processes, which update their displays together. Our original avatar server was a Jack(R) server applying inverse kinematics algorithms (Badler, Hollick and Granieri, 1995).
This server used only four sensors (marked A-D in Figure 1) on each user. We use eight sensors for a full-body implementation of the avatar server. The main idea is that up to eight sensors are enough to break the human skeleton into smaller, manageable portions, yet they are only slightly more encumbering than the four-sensor solution developed in (Badler and Hollick et al., 1995). The avatar server (see Figure 4) replaces the Jack(R) server in the distributed environment. We are still able to use the Jack(R) figure's skeleton hierarchy and body parts.

[Figure 3: Interaction in the distributed environment. Sensors are tracked and sent over the network (y-up, inches); the sensor values set constraints for the Jack server, which uses them to define the posture of the avatar (y-up, cms); VR Station picks the avatar information from the network and draws the avatar for display (z-up, inches).]

[Figure 4: Interaction in the distributed environment using the avatar server. Sensors are tracked and sent over the network (y-up, inches); the avatar server uses the sensor values to define the avatar of the participant (y-up, cms); VR Station picks the avatar information from the network and draws the avatar for display (z-up, inches).]

However, now the Jack(R) server's inverse kinematics algorithm has been replaced with the new mapping algorithms developed in this paper. Using the same platform allows us to extend the Jack(R) system, and provides a fair comparison between the mapping methods presented in this paper and the four-sensor inverse kinematics algorithm developed at Penn by Badler et al. (Badler and Hollick et al., 1995). We also use the same number of joints, segments, degrees of freedom, and body parts as the Jack(R) figure.

In general, the algorithm presented in this paper does not depend on input formats or distribution issues (since it is not a distributed algorithm). It is a general-purpose algorithm, applicable to other environments as well. As will be explained later, our constant-time mapping algorithm is non-iterative, and there are no restrictions on the seven degrees of freedom for the upper arm. There are no locked elbows. Our solution is for the complete avatar. In particular, we provide a new and fast algorithm for real-time interaction. A high-level pseudo-code of the algorithm is presented below:

a) Obtain the sensor positions and orientations from the network.
b) Break the human skeleton-chain into sub-chains.
c) Solve for the joint frames in the sub-chains using the fast closed form solution developed in this paper.
d) Broadcast the new joints defining the avatar pose on the network.
e) Repeat from (a).

The details of the closed form algorithm, as well as the sub-chains, will be explained in the following sections.

5 Closed Form Solutions for an Avatar using Eight Sensors

We start with a brief explanation of the mapping from one global coordinate system to another in the distributed environment. We then present a closed form solution for the arms using two sensors for each arm. We also explain our method to map the spine, neck and head, and legs.

5.1 Implementation Terminology

A local or global right-handed coordinate system (RHCS), also called a frame, can be specified in 4 by 4 matrix form, with the rotation part of the matrix specifying the three orthogonal vectors of the local coordinate system, and the origin specified by the translation part of this 4 by 4 matrix. For example, as shown in Figure 5, Rx, Ry, and Rz are the three unit vectors at point P at (x, y, z) w.r.t. the global frame. The frame at point P can be represented by a 4 by 4 matrix as shown in Figure 5. The vectors Rx, Ry, and Rz form the 3 by 3 rotation part of the matrix, and (x, y, z) the translation part of the matrix.

Inverse(M) is used to indicate the inverse of the matrix M. Orientation(M), or Orie(M), is the matrix M with its translation part forced to zero. We also define the term Pos(M) to indicate the 3D vector (x, y, z) for the translation value of the matrix M. The term T is used for translation. Vector operations are used in implementing our mapping methods. For example, the makeRHCS routine is used to obtain a RHCS when two vectors A and B are given, such that the resulting RHCS coordinate system vectors are A, C and D, where C = A x B and D = C x A.

[Figure 5: Using a 4 by 4 matrix to define the position and orientation of a local frame, with unit vectors Rx, Ry, Rz at point P(x, y, z), w.r.t. a global frame.]

[Figure 6: Calculating the animate transform for a joint when a desired joint frame (D-frame) is known: Mi = Inverse(S-frame * M1 * ... * Mi-2 * Mi-1) * D-frame, where * denotes concatenation.]

The term scalMult(V, d) indicates a scaling operation where the vector V has been scaled by d. We also use subtract(A, B) to indicate the vector (A - B), and unitVector(A) to indicate the unit vector of the vector A.

An often-used routine in our implementation of the avatar server is the traverseTree routine. Here we traverse the hierarchy of the virtual environment (for example, the human figure representing an avatar) and, using the current values of the animate transforms, determine the global frames of the joints in the human figure of an avatar.

The 4 by 4 (homogeneous coordinate system) matrix formulation allows composite transformations (involving scaling, translation, and rotation) to be specified by concatenation of the matrices. In Figure 6, we show a series of transformations to obtain the animate transform (Mi) for the i-th joint. The sensor reading provides the global position and orientation of the i-th joint (called the D-frame in Figure 6). Starting from the root frame (called the S-frame in Figure 6), we can calculate the D-frame by concatenating the known animate transform frames from the root to the D-frame along the way. So the animate transform Mi can be calculated as shown in Figure 6. In this paper, when we know the desired global frame for a joint (D-frame), we use the above algorithm to find the animate transform of that joint. We call this algorithm findAnimateTransform.
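To make this terminology concrete, a minimal sketch of these routines follows, in Python with numpy. The routine names mirror the paper's, but the language, the exact signatures, and the axis assignment inside makeRHCS (A taken as the frame's z-axis, matching the limb convention) are our assumptions, not the original implementation:

    import numpy as np

    def unit_vector(a):
        # unitVector(A): A scaled to unit length.
        return a / np.linalg.norm(a)

    def orie(m):
        # Orie(M): M with its translation part forced to zero.
        r = m.copy()
        r[:3, 3] = 0.0
        return r

    def pos(m):
        # Pos(M): the translation (x, y, z) of the 4x4 matrix M.
        return m[:3, 3].copy()

    def trans(v):
        # T(V): pure translation by the 3-vector V.
        m = np.eye(4)
        m[:3, 3] = v
        return m

    def make_rhcs(a, b, origin=(0.0, 0.0, 0.0)):
        # makeRHCS(A, B): right-handed frame from two vectors, with
        # C = A x B and D = C x A.  Here A becomes the z-axis (the limb
        # direction), D the y-axis, and C the x-axis; this assignment is
        # an assumption based on the paper's limb convention.
        a = unit_vector(np.asarray(a, float))
        c = unit_vector(np.cross(a, np.asarray(b, float)))
        d = np.cross(c, a)
        m = np.eye(4)
        m[:3, 0], m[:3, 1], m[:3, 2] = c, d, a
        m[:3, 3] = origin
        return m

    def find_animate_transform(chain_to_parent, d_frame):
        # findAnimateTransform: with S * M1 * ... * M_{i-1} already
        # accumulated in chain_to_parent (e.g. by traverseTree), and a
        # desired global D-frame for joint i, Figure 6 gives
        #   M_i = Inverse(S * M1 * ... * M_{i-1}) * D-frame.
        return np.linalg.inv(chain_to_parent) @ d_frame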

5.2 Mapping of the Sensor Reading on the Avatar

In this section, we briefly explain the sensor coordinate system, and the coordinate systems of the avatar server and of VR Station (see also Figure 4).

The position and orientation of the FASTRAK magnetic sensors are available in the coordinate system of the physical space (the room). This physical space is defined in inches, in a right-handed coordinate system (RHCS) whose y-axis is perpendicular to the floor of the room. The orientation of each magnetic sensor is such that its z-axis lies along the length of the limb, from one joint to another. These sensors are strapped to the user's upper arm and palm. The y-axis is perpendicular to, and points away from, the center of the limb.

The avatar server also uses a RHCS with its y-axis perpendicular to the floor in the virtual world, and works in centimeters (see Figure 4).

The VR Station process displays the avatar's present posture. The default coordinate system of VR Station is a RHCS with the z-axis perpendicular to the floor, and scale in inches. The traverseTree routine computes the global frames of the animate transforms in the VR Station coordinate system (i.e. z-axis up, and in inches). Since we mostly work in the avatar-server coordinate system, we convert the traverseTree transforms from the VR Station coordinate system to the avatar-server coordinate system, and vice-versa. These transformations ensure that the values are in the proper coordinate system in which we are working.

The VR Station coordinate system defines the human skeleton by creating a tree, with the root at the lower torso (joint 4 in Figure 7). The kinematic chains for the lower and upper body emanate from the lower torso (root). The dimensions and geometry of the human-body parts are already known, as they are defined in the VR Station coordinate system. The only thing one has to determine to specify a posture is the orientation of the local coordinate system for the line segments (limbs) in the tree. This orientation is captured by the animate transform for a joint, which is a 4x4 homogeneous, orthogonal matrix with zero translation. The convention is that the z-axis of this RHCS lies along the line segment from one joint to another. The algorithms are developed in the avatar-server coordinate system. Once the animate transforms for the joints are found, they are sent over the network in the proper format. VR Station uses these values to draw the posture of the avatar at any given moment.

5.3 Eight Sensors provide six Kinematic Chains

Using eight sensors, we have six kinematic chains (as shown in Figures 7 and 8): from the lower torso to joint 16, from joint 16 to the head, the left and right arms, and, finally, from the lower torso to the left and the right legs. We developed the whole-body solution using eight sensors. Two sensors are placed on each arm, at the middle of the upper arm and on the hand; one at the head; one at the back; and one on each ankle.

5.4 The Spine and Neck

The human spine has 7 cervical, 12 thoracic, and 5 lumbar vertebrae, plus the sacrum. The flexible group of cervical vertebrae supports the skull and the neck. The twelve thoracic vertebrae are rigid in comparison and, with twenty-four ribs, support the thorax. The five lumbar vertebrae carry a large share of the body weight and are quite mobile. The sacrum transfers the body weight to the hip joints via its articulation with the pelvic girdle (Kapit and Elson, 1979; Clemente, 1987).

[Figure 7: Kinematic chains represented as a hierarchical tree rooted at the lower torso, with branches for the spine joints up to joint 16, the face and eyes, the left palm and fingers, the right arm and fingers, and the two legs (hips 13-14, knees 11-12, ankles 9-10, toes).]

[Figure 8: Kinematic chains of the neck area: joint 16, the base of the neck (joint 6), the atlanto-occipital joint (15, a bone in the skull), the head position and eyes, and the clavicles (17, 23) and shoulders (18, 24).]

[Figure 9: Placement of the sensor on the head. (a) The sensor's y-axis points away from the head and its z-axis forward; (b) a rotation Ry(-90) followed by (c) Rx(-90) brings the frame to z vertically up and x facing forward.]

[Figure 10: Back sensor placement in the transmitter frame (not to scale). O is the transmitter origin, S the back-sensor site, and P the pelvic position.]

Intricate and realistic postures of the avatar, mimicking the participant, are only possible by determining the exact simulation of the spine joints. Since real-time interaction is the driving focus of our research, similar to Badler's work in (Badler and Hollick et al., 1995), we also use the back sensor to specify the spine frame of the avatar. The sensor placed on the back gives a very good estimate of the spine. We use two sensors, one on the top of the head and the other at the back of the waist, to capture the general placement of the spine as well as the neck and the head.

In our sixty-eight-joint articulated skeletal model, identical to the Jack(R) figure, there are twelve thoracic and five lumbar vertebrae. As shown in Figure 8, the spine has a total of eighteen joints, starting from the lower torso and ending at joint 16. From joint 16, articulated chains for the arms and the head spawn. The articulated chain for the head starts at joint 16, and has two joints, one for the base of the neck (joint 6) and the atlanto-occipital joint representing a bone in the skull (joint 15). The face starts from joint 15, and ends with the position of the head sensor. The head sensor is positioned in such a way (strapped to the head-mounted display) that the y-axis of the head sensor points vertically away from the head, the z-axis faces forward, and the x-axis completes a right-handed coordinate system (see Figures 9 and 10). The back sensor is placed such that the z-axis of the back sensor is along the spine near lumbar 5, and the y-axis points away from the back.

[Figure 11: Calculating the frame for joint 16, between the occipital joint (15) and the neck (joint 5), above the lower torso at P.]

Since the z-axis of the back sensor is aligned with the spine direction at the back, it gives a good approximation of the position and orientation of the spine. We map the back-sensor value to the position of the lower torso; this allows us to orient the avatar accurately. Below, we explain our solution in detail.

5.5 Calculating the Upper Torso Frame

We copy the position and orientation of the back sensor to the LowerTorso matrix. As explained earlier, the back-sensor reading is such that the z-axis of the sensor frame lies along the spine, the y-axis is perpendicular to the back, facing backwards from the face, and the x-axis is along the body from right to left (see Figure 11). To account for the individual differences between participants, a mapping from the pelvic frame to the sensor frame is estimated. The participant's posture is calibrated by asking the participant to stand straight and face along the z-axis of the transmitter frame. All the sensor frames are also measured in the transmitter frame. The pelvic frame, the back-sensor frame, and the lower-torso frame are shown in Figure 12. The lower-torso frame is the root of the avatar system, and is shown at point P in Figures 10, 11, and 13.

During calibration, the participant is to stand in such a way that the pelvic frame is aligned with the transmitter frame.(1) As mentioned earlier, this transmitter frame is y-up (vertically perpendicular to the floor of the room). We use the following matrix to convert the back-sensor data so that we have an estimate of the pelvic frame. Let us assume that point S is at (0, 12.59, -5.93) w.r.t. point P in the transmitter frame. We are interested in finding a mapping from the sensor frame to the lower-torso frame. To measure the sensor frame w.r.t. the pelvic frame, we would first rotate so that the axes are aligned with the desired back-sensor matrix, and then translate by (0, 12.59, -5.93). Since the sensor matrix is given to us, the reverse of the above (translate by T(0, -12.59, 5.93) first, and then rotate by the inverse of the rotation part of the back-sensor reading) is done to convert the back-sensor frame to the estimate of the pelvic frame. In summary, we can estimate the pelvic frame by using the following transformation:

T(0, -12.59, 5.93) * Inverse(Orie(calibrationReading)) * sensorReading

(1) This is a modified version of Penn's approach, modified by Dan Shawver.

[Figure 12: Transforming the pelvic frame to the lower-torso frame orientation: (a) the pelvic frame at P; a rotation Ry(-90) followed by Rx(-90) yields (c) the lower-torso frame.]

Here, all the matrices are defined in the transmitter frame of reference. T is the translation matrix for translating point S to P (see Figure 10). Note that Inverse(Orie(calibrationReading)) is the inverse of only the orientation portion of the back-sensor reading during calibration. The sensorReading matrix is any general reading of the sensor during the simulation. When the sensorReading is the calibrationReading, we correctly obtain the orientation of the pelvic frame identical to that of the transmitter frame (Figure 10).

There is one more transformation which we need to perform, as the convention for the orientation of the limbs is that the z-axis always points from the proximal to the distal joint of the limb segment (e.g. z-elbow in Figure 14). In our implementation, the geometry of the segments is such that the z-axis of the lower-torso frame points upwards from the torso, the x-axis of the lower-torso frame is aligned with the z-axis of the pelvic frame (and also with the transmitter frame during calibration), and the y-axis of the lower-torso frame is aligned with the x-axis of the pelvic frame (and the transmitter frame). Therefore, to draw the lower torso properly, we need to convert the pelvic frame to the lower-torso frame. As shown in Figure 12, two rotations are used to perform this transformation. In general, the lower-torso frame can be directly calculated from the back-sensor reading during simulation as follows:

Rx(-90) * Ry(-90) * T(0, -12.59, 5.93) * Inverse(Orie(calibrationReading)) * sensorReading
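As a sketch of this calculation, again in Python with numpy and reusing the helpers sketched in Section 5.1 (the rotation helpers and the degree convention are our assumptions):

    import numpy as np

    def rot_x(deg):
        # Rotation about the x-axis, in degrees, as a 4x4 matrix.
        c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
        m = np.eye(4)
        m[1, 1], m[1, 2], m[2, 1], m[2, 2] = c, -s, s, c
        return m

    def rot_y(deg):
        # Rotation about the y-axis, in degrees, as a 4x4 matrix.
        c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
        m = np.eye(4)
        m[0, 0], m[0, 2], m[2, 0], m[2, 2] = c, s, -s, c
        return m

    def lower_torso_frame(sensor_reading, calibration_reading):
        # Pelvic-frame estimate: undo the calibration orientation and
        # the S-to-P offset (S at (0, 12.59, -5.93) w.r.t. P, hence the
        # reverse translation below), per Section 5.5.
        pelvic = (trans([0.0, -12.59, 5.93])
                  @ np.linalg.inv(orie(calibration_reading))
                  @ sensor_reading)
        # Re-orient into the lower-torso convention (z up the torso),
        # exactly as in the formula above.
        return rot_x(-90) @ rot_y(-90) @ pelvic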

5.6 Using two Sensors to Solve for the Arm

The sensors are attached to the upper arm and the palm so that the z-axis of each sensor is aligned with the limb length, and the y-axis is perpendicular to the limb. The method for finding the orientation of the local coordinate system for the joints of the arm is as follows (see Figure 13):

[Figure 13: Solution using two arm sensors. (a) The upper-arm sensor, pushed inward to S1 on the segment from shoulder P1 to elbow P2, and the palm sensor, pushed inward to S2 near wrist P3. (b) The elbow frame, with z-elbow pointing from the elbow to the wrist and the angle θ between the shoulder's y-axis and y-elbow.]

Translate the sensor frame on the upper arm (the upperArm-frame) along the y-axis of the sensor frame by a defined distance D, equal to half of the upper-arm thickness (distanceUpperArmThickness). In this way we obtain the s1-frame, inside the upper arm and approximately on the humerus bone. In other words, s1-frame = T(scalMult(y_upperArm-frame, -D)) * upperArm-frame.

Translate the s1-frame, inside the upper arm, along P1P2 towards P1 by a defined distance D1, equal to the estimated distance to the shoulder from the position of the upper-arm sensor (distanceToShoulder). This gives the position of the shoulder, and the animate transform (i.e. the orientation of the local coordinate system) for the shoulder. So, shoulder-frame = T(scalMult(z_s1-frame, -D1)) * s1-frame.

Translate the sensor frame (s1) along P1P2 towards P2 by a defined distance D2, defined as the distance of the elbow from the upper-arm sensor (distanceToElbow). This defines the position of the elbow; but we still need to find the orientation (animate transform) for the elbow, which in turn is based on the wrist position. So, Pos(elbow-frame) = Pos(T(scalMult(z_s1-frame, D2)) * s1-frame).

Translate the sensor frame on the back of the palm (the palm-frame) along the y-axis of the palm-frame by a defined distance D3 to obtain the s2-frame, approximately inside the palm. The distance D3 is equal to half of the palm thickness (distancePalmThickness). In other words, s2-frame = T(scalMult(y_palm-frame, -D3)) * palm-frame.

Use the defined distance D4, the distance from the s2-frame to the wrist (distanceToWrist), to translate the sensor frame (s2) on the palm along the negative z-axis of the s2-frame, and find the position and orientation of the wrist's local coordinate system (i.e. the animate transform for the wrist). Or, wrist-frame = T(scalMult(z_s2-frame, -D4)) * s2-frame.

Next, find the animate transform for the elbow. We know the elbow position, the wrist position, and the shoulder position. First, define the z-axis as the unit vector from the elbow to the wrist, and call it z-elbow. So we have z_elbow-frame = unitVector(subtract(wristPosition, elbowPosition)). We use the shoulder's y-axis as the second vector (y1). These two vectors are sufficient to define a right-handed coordinate system, i.e. the animate transform of the elbow, by using two cross products (the makeRHCS routine).

We observe that the movement of the joint at the elbow is such that the shoulder's y-axis (the y1 vector) and the elbow's y-axis should make at most ninety degrees in the worst case. This angle is denoted by θ in Figure 13(b).
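The chain of translations above lends itself to a compact sketch, reusing the helpers from the Section 5.1 sketch; the parameter names follow the paper's defined distances, while the function itself is our illustration, not the original code. The y-axis swap discussed in the next paragraph is noted in a comment; with the particular makeRHCS sketched earlier, the constructed elbow y-axis already stays within ninety degrees of the shoulder's y-axis.

    def arm_frames(upperarm_frame, palm_frame, D, D1, D2, D3, D4):
        # D  = half upper-arm thickness, D1 = sensor-to-shoulder,
        # D2 = sensor-to-elbow, D3 = half palm thickness,
        # D4 = s2-to-wrist (the paper's defined distances).
        # s1: push the upper-arm sensor onto the humerus.
        s1 = trans(-D * upperarm_frame[:3, 1]) @ upperarm_frame
        z_s1 = s1[:3, 2]
        # Shoulder: slide along the limb towards P1 (proximal).
        shoulder = trans(-D1 * z_s1) @ s1
        # Elbow position: slide along the limb towards P2 (distal).
        elbow_pos = pos(trans(D2 * z_s1) @ s1)
        # s2: push the palm sensor inside the palm, then back to the wrist.
        s2 = trans(-D3 * palm_frame[:3, 1]) @ palm_frame
        wrist = trans(-D4 * s2[:3, 2]) @ s2
        # Elbow frame: z from elbow to wrist, second vector y1 from the
        # shoulder; the paper swaps the two vectors before the cross
        # product whenever the two y-axes would exceed ninety degrees
        # (the musculoskeletal constraint discussed next).
        z_elbow = unit_vector(pos(wrist) - elbow_pos)
        y1 = shoulder[:3, 1]
        elbow = make_rhcs(z_elbow, y1, elbow_pos)
        return shoulder, elbow, wrist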

When the arms are comfortably at the side near the thighs in a natural standing pose, this angle θ is zero in our convention, and the two y-axes are parallel to each other. When the arm is moved, the flexion/extension about the elbow, the rotation of the lower arm about the elbow with respect to the upper arm, and the supination/pronation of the hand all change the relationship between these two y-axes (and so the angle θ). However, the angle θ between these two y-axes remains less than ninety degrees in all cases (unless the person is unusually flexible). If the angle comes out greater than ninety degrees in the above calculation, we swap the two vectors before taking the cross product. This ensures that the angle between the two y-axes is not more than ninety degrees. This musculoskeletal constraint provides the animate transform for the elbow joint (lower arm).

This completes our closed form solution for the arm calculation using two sensors. We note that this solution does not restrict any degree of freedom for the elbow or the wrist joint (as does Tolani's closed form solution (Tolani, 1995)). Once we know the desired frames for the elbow, neck, and shoulders, we can calculate the animate transforms for all three frames by calling the findAnimateTransform routine three times.

The calculated positions of the shoulder, elbow and wrist also provide the scaling information between the limb lengths of the participant and their avatar (which uses the dimensions of an average male). The scaling to adjust the length of the limbs could be performed along the z-axis of the local coordinate system for the limb, based upon these estimated values. Thus the difference between the limb lengths of individual participants could be reflected in their avatars, by comparing with the corresponding default limb lengths and scaling each limb along its z-direction appropriately.

5.7 The Neck and Occipital Frames

The head-sensor reading is used to properly orient and position the avatar's head. Because of the sensor placement, the head-sensor matrix is such that the y-axis is perpendicular to the head (pointing up from the head), and the z-axis points forward (see Figure 9). We use two rotations, one about the y-axis by -90 degrees and then one about the x-axis by -90 degrees; as a result, the z-axis points vertically upwards along the direction from the neck to the head, the x-axis faces forward, and the y-axis is such that a RHCS is created (see Figure 9(c)).

Now we use an interpolation as follows. First, traverse the tree with the known lower-torso frame as the root value, and obtain the matrix for joint 16 (Figure 8). The head-sensor orientation is also the orientation of the occipital joint; the position of the occipital joint (number 15) can be estimated by moving along the sensor's present z-axis by an empirical distance (15 cm) from the head to the occipital joint. Next we interpolate between the estimated frames for joint 16 and the occipital frame (joint 15). We have implemented the interpolation by averaging the y-vectors of these two frames to obtain the y-axis of the neck frame. Similarly, the average of the z-vectors gives the z-vector of the neck frame. Then we obtain a RHCS frame from these vectors to define the neck frame, using the makeRHCS routine described earlier. Knowing the desired frames for the neck and the occipital joints, we find the animate transforms by using the traverseTree and findAnimateTransform routines twice.
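A sketch of this interpolation follows, with the same caveats as before; the sign of the 15 cm offset and the neck origin (taken here as the midpoint of the two frames) are assumptions, since the text fixes only the averaging of the axes:

    def occipital_frame(head_sensor_frame):
        # The head-sensor orientation doubles as the occipital joint's
        # orientation; its position lies an empirical 15 cm from the
        # head sensor along the sensor's present z-axis (sign assumed).
        m = head_sensor_frame.copy()
        m[:3, 3] = pos(head_sensor_frame) - 15.0 * head_sensor_frame[:3, 2]
        return m

    def neck_frame(joint16_frame, occipital):
        # Average the y- and z-axes of joint 16 and the occipital
        # frame, then re-orthogonalize with makeRHCS.
        y_avg = unit_vector(joint16_frame[:3, 1] + occipital[:3, 1])
        z_avg = unit_vector(joint16_frame[:3, 2] + occipital[:3, 2])
        origin = 0.5 * (pos(joint16_frame) + pos(occipital))  # assumed midpoint
        return make_rhcs(z_avg, y_avg, origin)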
5.8 Solution for the Leg Frames

For the legs, sensors are placed on both ankles. The solution for the legs is similar to the two-sensor solution developed earlier for the arm. Here we use the estimated value of the pelvic frame (lower-torso frame) and the measured value of the ankle-sensor frame to estimate the knee frame.

[Figure 14: Calculating the leg frames: the knee position lies at distanceAnkleToKnee from the ankle along the negative z-axis of the ankle frame; z-knee and y-knee, and z-ankle and y-ankle, are shown between the hip and the ankle.]

The orientation of the knee frame is the same as that of the ankle frame. The position of the knee joint is estimated by using an empirically defined distance D6 from the ankle to the knee (called distanceAnkleToKnee), along the negative z-axis of the ankle-sensor reading (Figure 14). So we have Pos(knee-frame) = Pos(T(scalMult(z_ankle-frame, -D6)) * ankle-frame).

Using the already calculated lower-torso frame, we use the traverseTree routine to obtain the hip position and its y-axis (call it y-temp). Note that the hip-to-knee limb lies along the current z-hip axis, so the z-hip axis is calculated as the unit vector from the hip position to the knee position. So, z_hip-frame = unitVector(subtract(kneePosition, hipPosition)). Note that y-temp and the z_hip-frame axis may not be perpendicular. However, these two vectors are sufficient to obtain a RHCS for the hip frame. We again use the makeRHCS routine and obtain the hip frame. Note that the calculated RHCS's y-vector for the hip frame does not make more than 90 degrees with the y-temp vector. This again is a bio-mechanical constraint, in the same spirit as in the arm solution presented earlier, and it provides a natural folding of the legs (see Figures 15-19).

6 Evaluation and Performance Measurements

The avatar implementation presented in this paper is directly governed by the magnetic sensor readings. In particular, we do not consider the previous pose as a factor in our calculations. This helps us avoid the situation where an avatar may remain in a locked-elbow position once it has reached one. Because of the simple bio-mechanical constraint, there are no locked elbows in our implementation. As shown in Figures 15-19, we have been able to generate a variety of poses using the two mapping methods. The avatar mimics the actions of the participant well. All the images in the figures were grabbed from the screen interactively (with the participant wearing the sensors, and the actions of the participant and the avatar recorded at the same time). The closed form algorithm is interactive and fast, limited only by the sensor accuracy and update rates.

6.1 Implementing two Avatar Formulations and Associated Results

We have developed an implementation of each of the two avatar formulations, a) and b). In the first method, we use only the orientation of the sensors when estimating the animate transforms (the joint frames defining the pose). In this implementation of formulation a), the general movement of the sensors is extracted from the participant as pose vectors and mapped to the corresponding avatar. This allows an abstraction of the style of the participant. The sensor orientations provide general directions for the movements of the limbs. Since our skeletal model is a non-deformable rigid-body model, the extracted direction vectors map the motion of the participant to the same geometric model, i.e. a larger or smaller participant maps to the same geometric avatar. Their styles of walking, however, would differ. Since the skeletal model is non-deformable, the biggest advantage of this method is that there is no space between the joints. Secondly, this solution is somewhat more resilient, as errors in the position information from the sensor readings are not used in calculating the joint frames; only the orientation of the sensors is used for formulation a). The disadvantage of this method is that smaller and larger persons both map to the same-sized figure. The geometric limb dimensions of the avatar are based on the average American male. Since most of us differ somewhat from the average American male dimensions, the measured sensor positions on our bodies, when we are the participants, would differ from the corresponding sensor sites on the avatar. This is shown in Figures 16-17, where the sensors are also displayed.

In the second implementation (formulation b), we wanted the sensor sites on the avatar to be at the measured values of the sensors. In this case, both the sensor position and orientation are used. This is done by finding the animate transform based upon the position as well as the rotation part of the matrices, using the findAnimateTransform routine. This way, the sensor readings map exactly onto the avatar. Since the limb sizes do not change in our present implementation, and a translation part is added to the animate transform, sometimes the avatar limbs are not joined together. But the body parts attached to the sensors mimic the exact sensor positions. Although the resulting avatar has some space between limbs which are supposed to be joined, this is an important result for us: we have shown that an exact match of the measured and the estimated sensor readings is possible using our mapping scheme. The space between the joints should disappear when deformation and elongation algorithms are implemented in the future. These algorithms would fill the space between joints, wrap skin around the joints, and stretch and deform the body parts. Figure 17 shows several poses for formulation b). Figures 18-19 show formulation a) poses without displaying the sensors. The poses seem natural, and mimic the participants well.

6.2 Calibration when Multiple Transmitters are used, and Sensor Errors

We have observed that the measured position of the sensor and the placement of the limb are sometimes a bit off, due to measuring errors inherent in the non-linearity of magnetic sensors. In addition, the shape of the participant's body changes constantly as the participant moves, because human body joints are not simple joints. Instead, they are complicated joints, especially in the shoulder and the hip areas (Badler and Phillips et al., 1993).
In other words, there are anatomical differences between the user and the geometry of the avatar. The solution would be to interactively deform, and appropriately elongate, the limbs. One simple way is to scale each limb along that limb frame's z-axis. However, this would require adding scaling and un-scaling transforms just before and after the specification of the geometry of the limb.

As we have discussed earlier, when magnetic sensors are being used, the mapping is often non-linear. We use two sets of transmitters. Since the transmitters are not placed at the same point, their sensor readings are not in the same frame. We map the second set of sensors (transmitter 2) to transmitter 1.
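The paper develops this mapping matrix from readings taken at identical points in both transmitters' frames, as described below. One standard way to realize such a fit, shown here only as an illustrative assumption (the authors do not specify their method), is a least-squares rigid alignment of the paired points:

    import numpy as np

    def fit_transmitter_mapping(points_t2, points_t1):
        # points_t2, points_t1: (N, 3) arrays of the same physical
        # points measured in transmitter 2's and transmitter 1's
        # frames.  Returns a 4x4 matrix taking transmitter-2
        # coordinates into transmitter 1's frame (rigid least-squares
        # fit; an assumption, not the paper's stated procedure).
        a = np.asarray(points_t2, float)
        b = np.asarray(points_t1, float)
        ca, cb = a.mean(axis=0), b.mean(axis=0)
        h = (a - ca).T @ (b - cb)
        u, _, vt = np.linalg.svd(h)
        d = np.sign(np.linalg.det(vt.T @ u.T))
        r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
        m = np.eye(4)
        m[:3, :3] = r
        m[:3, 3] = cb - r @ ca
        return m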

[Figure 15: Avatar formulations a) and b): (a, b) sitting poses; (c, d) folded legs and joined hands.]

Transmitter 1 hangs from the ceiling, and transmitter 2 is on the table for tracking the lower part of the participant's body (Figure 2 showed the placement of the two transmitters). In our arrangement, one set of transmitters is mapped to the other by taking readings from both transmitters at identical points in 3D space, and developing a mapping matrix that maps a frame from one coordinate system to the other. This leads to an accurate mapping for our experiments.

Since two transmitters are being used, the participant is limited to the overlap of the working areas of these transmitters. Since the working area is non-linear, the sensor readings would also be non-linear when the participant is outside the common working area. This limits the mobility of the participant. Our experiments were best when the participant was elevated on the table. The sensors could also be lowered to the level of the participant. We conclude that the working volume of the magnetic sensors is indeed quite small when we have multiple transmitters.(2) Obviously, correct sensor readings are crucial to the working of our methods. Since both mapping methods use the sensor readings, our algorithms would greatly benefit from improved sensor technology providing a larger linear working area.

While running the experiments, we found that one can also change the physical appearance of the avatar by moving the position of the sensor on the upper arm, physically, along the sensor's z-axis (which is, by our design, parallel to the z-axis of the limb's local coordinate system). Any change in the values of the arm sensors has an effect on the estimated positions of the shoulder, elbow and wrist joints of the avatar's arm.

7 Conclusions and Future Research

Using eight sensors, we have implemented two avatar formulations (a and b) by using new closed form mapping methods for the avatar's whole body, including the arms, legs and head. Our solution for the arm does not restrict any degree of freedom of the human arm, as in (Tolani, 1995). The novelty of our method is that it uses musculoskeletal constraints to solve for the arms and legs with eight sensors in constant time. We believe that our work will lead to a new class of motion algorithms where only a limited number of sensors need be used for real-time representation of the human form in a virtual environment with multiple participants. We have proposed new vector-based geometric algorithms, using the natural constraints of human postures. Our constant-time mapping algorithms provide a more natural real-time interaction in comparison to the inverse kinematics algorithm used by the Jack(R) server. Finally, by breaking the human skeleton into smaller sub-chains, we have opened the possibility of finding solutions for these sub-chains in parallel in our distributed environment.

Acknowledgments

Special thanks are due to Dan Shawver for explaining the workings of the distributed environment, helping us understand the transformation of the back sensor to the pelvic frame (Figure 9), and reviewing the paper. We would also like to thank Deepak Tolani, Xinmin Zhao, Dave Rogers, Meisha Collins, Denise Carlson and James Singer for their support. This work was performed at Sandia National Laboratories and was supported by the US Department of Energy under Contract DE-AC04-94AL.

(2) Vendors of electro-magnetic trackers provide systems with greater range, and more trackers working simultaneously, than we happened to have available in our laboratory.

[Figure 16: Avatar formulation a): (a) more sitting poses (left); (b) getting up (right); (c) getting up (left); and (d) touching toes (right).]

[Figure 17: Avatar formulation b): (a, b) sitting poses; (c, d) folded legs (left) and cycling (right).]

[Figure 18: Avatar formulation a) without displaying the sensors: (a) folded legs (left); (b) arms stretched (right); (c) bending at the waist (left); and (d) touching toes (right).]

[Figure 19: Avatar formulation a) without displaying the sensors: (a, b) sitting poses; (c) getting up (left); and (d) lying comfortably (right).]

References

[1] Armstrong, W., Green, M., & Lake, R. (1985). The dynamics of articulated rigid bodies for purposes of animation. The Visual Computer, 1(4).
[2] Badler, N., Manoochehri, K., & Walters, G. (1987). Articulated Figure Positioning by Multiple Constraints. IEEE CG&A, 7(6).
[3] Badler, N., Phillips, C., & Webber, B. (1993). Simulating Humans: Computer Graphics Animation and Control. Oxford University Press.
[4] Badler, N., Hollick, M., & Granieri, J. (1993). Real-Time Control of a Virtual Human using Minimal Sensors. PRESENCE, 2(1).
[5] Bruderlin, A., & Williams, L. (1995). Motion Signal Processing. In Computer Graphics Proceedings, Annual Conference Series, ACM SIGGRAPH.
[6] Calvert, T. (1991). Composition of Realistic Animation Sequences for Multiple Human Figures. In Making Them Move: Mechanics, Control, and Animation of Articulated Figures. Morgan Kaufmann, CA.
[7] Carignan, M., Yang, Y., Magnenat-Thalmann, N., & Thalmann, D. (1992). Dressing Animated Synthetic Actors with Complex Deformable Clothes. Computer Graphics, 26(2).
[8] Chadwick, J., Haumann, D., & Parent, R. (1989). Layered construction for deformable animated characters. Computer Graphics, 23(3).
[9] Chen, D., & Zeltzer, D. (1992). Pump it Up: Computer Animation of a Biomechanically Based Model of Muscle Using the Finite Element Method. Computer Graphics, 26(2).
[10] Clemente, C. (1987). Anatomy: A Regional Atlas of the Human Body. Urban & Schwarzenberg, Baltimore-Munich.
[11] 3SPACE FASTRAK User's Manual, Revision F. Polhemus, A Kaiser Aerospace and Electronics Company, PO Box 560, Colchester, Vermont.
[12] Granieri, J., Crabtree, J., & Badler, N. (1995). Production and Playback of Human Figure Motion for 3D Virtual Environments. VRAIS 1995.
[13] Graves, G. (1993). The magic of metaballs. Computer Graphics World, May issue.
[14] Hodgins, J., Wooten, W., Brogan, D., & O'Brien, J. (1995). Animating Human Athletes. In Computer Graphics Proceedings, Annual Conference Series, ACM SIGGRAPH.
[15] Isaacs, P., & Cohen, M. (1988). Mixed Methods for Complex Kinematic Constraints in Dynamic Figure Animation. The Visual Computer, 4(6).
[16] Kapit, W., & Elson, L. (1979). The Anatomy Coloring Book. Harper and Row Publishers, New York.
[17] Komatsu, K. (1988). Human Skin Model Capable of Natural Shape Variation. The Visual Computer, 3.
[18] Krueger, M. (1991). Artificial Reality II. Addison-Wesley Publishing Company, Reading, MA.
[19] Lee, M., & Kunii, T. (1989). Animation Design: A Database Oriented Animation Design Method with a Video Image Analysis Capability. In Magnenat-Thalmann, N., & Thalmann, D. (eds), State of the Art in Computer Animation. Springer, Tokyo.
[20] Macedonia, M., Zyda, M., Pratt, D., Brutzman, D., & Barham, P. (1995). Exploiting Reality with Multicast Groups. IEEE CG&A, 15(5).
[21] Meyer, K., Applewhite, H., & Biocca, F. (1992). A Survey of Position Trackers. PRESENCE, 1(2).
[22] Muraki, S. (1991). Volumetric Shape Description of Range Data using Blobby Model. Computer Graphics, 24(4).
[23] Parke, F. (1982). Parametric Models for Facial Animation. IEEE CG&A, 2(6).
[24] Pueyo, X. (1988). Human Body Animation: A Survey. The Visual Computer, 3(5).


More information

Animation Lecture 10 Slide Fall 2003

Animation Lecture 10 Slide Fall 2003 Animation Lecture 10 Slide 1 6.837 Fall 2003 Conventional Animation Draw each frame of the animation great control tedious Reduce burden with cel animation layer keyframe inbetween cel panoramas (Disney

More information

Adding Hand Motion to the Motion Capture Based Character Animation

Adding Hand Motion to the Motion Capture Based Character Animation Adding Hand Motion to the Motion Capture Based Character Animation Ge Jin and James Hahn Computer Science Department, George Washington University, Washington DC 20052 {jinge, hahn}@gwu.edu Abstract. Most

More information

Game Programming. Bing-Yu Chen National Taiwan University

Game Programming. Bing-Yu Chen National Taiwan University Game Programming Bing-Yu Chen National Taiwan University Character Motion Hierarchical Modeling Character Animation Motion Editing 1 Hierarchical Modeling Connected primitives 2 3D Example: A robot arm

More information

Communication in Virtual Environments. Communication in Virtual Environments

Communication in Virtual Environments. Communication in Virtual Environments Communication in Virtual Environments Igor Linköping University www.bk.isy.liu.se/staff/igor Outline Goals of the workshop Virtual Environments and related topics Networked Collaborative Virtual Environments

More information

Kinematics & Motion Capture

Kinematics & Motion Capture Lecture 27: Kinematics & Motion Capture Computer Graphics and Imaging UC Berkeley CS184/284A, Spring 2017 Forward Kinematics (Slides with James O Brien) Forward Kinematics Articulated skeleton Topology

More information

This week. CENG 732 Computer Animation. Warping an Object. Warping an Object. 2D Grid Deformation. Warping an Object.

This week. CENG 732 Computer Animation. Warping an Object. Warping an Object. 2D Grid Deformation. Warping an Object. CENG 732 Computer Animation Spring 2006-2007 Week 4 Shape Deformation Animating Articulated Structures: Forward Kinematics/Inverse Kinematics This week Shape Deformation FFD: Free Form Deformation Hierarchical

More information

Multimodal Motion Capture Dataset TNT15

Multimodal Motion Capture Dataset TNT15 Multimodal Motion Capture Dataset TNT15 Timo v. Marcard, Gerard Pons-Moll, Bodo Rosenhahn January 2016 v1.2 1 Contents 1 Introduction 3 2 Technical Recording Setup 3 2.1 Video Data............................

More information

Overview. Animation is a big topic We will concentrate on character animation as is used in many games today. humans, animals, monsters, robots, etc.

Overview. Animation is a big topic We will concentrate on character animation as is used in many games today. humans, animals, monsters, robots, etc. ANIMATION Overview Animation is a big topic We will concentrate on character animation as is used in many games today humans, animals, monsters, robots, etc. Character Representation A character is represented

More information

Animation II: Soft Object Animation. Watt and Watt Ch.17

Animation II: Soft Object Animation. Watt and Watt Ch.17 Animation II: Soft Object Animation Watt and Watt Ch.17 Soft Object Animation Animation I: skeletal animation forward kinematics x=f(φ) inverse kinematics φ=f -1 (x) Curves and Surfaces I&II: parametric

More information

MUSCULOSKELETAL SIMULATION :

MUSCULOSKELETAL SIMULATION : TUTORIAL MUSCULOSKELETAL SIMULATION : FROM MOTION CAPTURE TO MUSCULAR ACTIVITY IN LOWER LIMB MODELS Nicolas Pronost and Anders Sandholm Musculoskeletal simulation? What is it? 2 Musculoskeletal simulation?

More information

7 Modelling and Animating Human Figures. Chapter 7. Modelling and Animating Human Figures. Department of Computer Science and Engineering 7-1

7 Modelling and Animating Human Figures. Chapter 7. Modelling and Animating Human Figures. Department of Computer Science and Engineering 7-1 Modelling and Animating Human Figures 7-1 Introduction Modeling and animating an articulated figure is one of the most formidable tasks that an animator can be faced with. It is especially challenging

More information

Computer Animation and Visualisation. Lecture 3. Motion capture and physically-based animation of characters

Computer Animation and Visualisation. Lecture 3. Motion capture and physically-based animation of characters Computer Animation and Visualisation Lecture 3. Motion capture and physically-based animation of characters Character Animation There are three methods Create them manually Use real human / animal motions

More information

Muscle Activity: From D-Flow to RAGE

Muscle Activity: From D-Flow to RAGE Muscle Activity: From D-Flow to RAGE Small project 2013 (Game and Media Technology) Name: Jorim Geers Student number: 3472345 Supervisor: Nicolas Pronost Date: January 22, 2014 Table of contents 1. Introduction...

More information

Interactive Computer Graphics

Interactive Computer Graphics Interactive Computer Graphics Lecture 18 Kinematics and Animation Interactive Graphics Lecture 18: Slide 1 Animation of 3D models In the early days physical models were altered frame by frame to create

More information

animation computer graphics animation 2009 fabio pellacini 1 animation shape specification as a function of time

animation computer graphics animation 2009 fabio pellacini 1 animation shape specification as a function of time animation computer graphics animation 2009 fabio pellacini 1 animation shape specification as a function of time computer graphics animation 2009 fabio pellacini 2 animation representation many ways to

More information

COMPUTER ANIMATION 3 KEYFRAME ANIMATION, RIGGING, SKINNING AND CHARACTER ANIMATION. Rémi Ronfard, Animation, M2R MOSIG

COMPUTER ANIMATION 3 KEYFRAME ANIMATION, RIGGING, SKINNING AND CHARACTER ANIMATION. Rémi Ronfard, Animation, M2R MOSIG COMPUTER ANIMATION 3 KEYFRAME ANIMATION, RIGGING, SKINNING AND CHARACTER ANIMATION Rémi Ronfard, Animation, M2R MOSIG 2 Outline Principles of animation Keyframe interpolation Rigging, skinning and walking

More information

Human Posture Analysis

Human Posture Analysis Human Posture Analysis Overview Conventions What's New? Getting Started Creating a Manikin User Tasks Using the Posture Editor Selecting or Editing the DOF (Degree of Freedom) Displaying and Editing Angular

More information

2D/3D Geometric Transformations and Scene Graphs

2D/3D Geometric Transformations and Scene Graphs 2D/3D Geometric Transformations and Scene Graphs Week 4 Acknowledgement: The course slides are adapted from the slides prepared by Steve Marschner of Cornell University 1 A little quick math background

More information

CS 231. Inverse Kinematics Intro to Motion Capture. 3D characters. Representation. 1) Skeleton Origin (root) Joint centers/ bones lengths

CS 231. Inverse Kinematics Intro to Motion Capture. 3D characters. Representation. 1) Skeleton Origin (root) Joint centers/ bones lengths CS Inverse Kinematics Intro to Motion Capture Representation D characters ) Skeleton Origin (root) Joint centers/ bones lengths ) Keyframes Pos/Rot Root (x) Joint Angles (q) Kinematics study of static

More information

About this document. Introduction. Where does Life Forms fit? Prev Menu Next Back p. 2

About this document. Introduction. Where does Life Forms fit? Prev Menu Next Back p. 2 Prev Menu Next Back p. 2 About this document This document explains how to use Life Forms Studio with LightWave 5.5-6.5. It also contains short examples of how to use LightWave and Life Forms together.

More information

CSE452 Computer Graphics

CSE452 Computer Graphics CSE452 Computer Graphics Lecture 19: From Morphing To Animation Capturing and Animating Skin Deformation in Human Motion, Park and Hodgins, SIGGRAPH 2006 CSE452 Lecture 19: From Morphing to Animation 1

More information

Homework 2 Questions? Animation, Motion Capture, & Inverse Kinematics. Velocity Interpolation. Handing Free Surface with MAC

Homework 2 Questions? Animation, Motion Capture, & Inverse Kinematics. Velocity Interpolation. Handing Free Surface with MAC Homework 2 Questions? Animation, Motion Capture, & Inverse Kinematics Velocity Interpolation Original image from Foster & Metaxas, 1996 In 2D: For each axis, find the 4 closest face velocity samples: Self-intersecting

More information

reference motions [], [5] or derivation of a motion from a reference motion by adding emotions or behaviours to keyframes [9],[8]. However, whatever t

reference motions [], [5] or derivation of a motion from a reference motion by adding emotions or behaviours to keyframes [9],[8]. However, whatever t Keyframe interpolation with self-collision avoidance Jean-Christophe Nebel University of Glasgow Computer Science Department Glasgow, G2 8QQ, UK Abstract. 3D-keyframe animation is a popular method for

More information

Chapter 9 Animation System

Chapter 9 Animation System Chapter 9 Animation System 9.1 Types of Character Animation Cel Animation Cel animation is a specific type of traditional animation. A cel is a transparent sheet of plastic on which images can be painted

More information

A tool for constructing 3D Environments with Virtual Agents

A tool for constructing 3D Environments with Virtual Agents A tool for constructing 3D Environments with Virtual Agents Spyros Vosinakis, Themis Panayiotopoulos Knowledge Engineering Laboratory, Department of Informatics, University of Piraeus, 80 Karaoli & Dimitriou

More information

3D Motion Retrieval for Martial Arts

3D Motion Retrieval for Martial Arts Tamsui Oxford Journal of Mathematical Sciences 20(2) (2004) 327-337 Aletheia University 3D Motion Retrieval for Martial Arts Department of Computer and Information Sciences, Aletheia University Tamsui,

More information

CS-184: Computer Graphics. Today. Forward kinematics Inverse kinematics. Wednesday, November 12, Pin joints Ball joints Prismatic joints

CS-184: Computer Graphics. Today. Forward kinematics Inverse kinematics. Wednesday, November 12, Pin joints Ball joints Prismatic joints CS-184: Computer Graphics Lecture #18: Forward and Prof. James O Brien University of California, Berkeley V2008-F-18-1.0 1 Today Forward kinematics Inverse kinematics Pin joints Ball joints Prismatic joints

More information

Advanced Graphics and Animation

Advanced Graphics and Animation Advanced Graphics and Animation Character Marco Gillies and Dan Jones Goldsmiths Aims and objectives By the end of the lecture you will be able to describe How 3D characters are animated Skeletal animation

More information

Human Posture Analysis

Human Posture Analysis Human Posture Analysis Page 1 Preface Using This Guide Where to Find More Information Conventions What's New? Getting Started Creating a Manikin User Tasks Using the Posture Editor Segments Degree of Freedom

More information

animation computer graphics animation 2009 fabio pellacini 1

animation computer graphics animation 2009 fabio pellacini 1 animation computer graphics animation 2009 fabio pellacini 1 animation shape specification as a function of time computer graphics animation 2009 fabio pellacini 2 animation representation many ways to

More information

CS-184: Computer Graphics

CS-184: Computer Graphics CS-184: Computer Graphics Lecture #19: Motion Capture!!! Prof. James O Brien! University of California, Berkeley!! V2015-S-18-1.0 Today 1 18-MoCap.key - April 8, 2015 Motion Capture 2 2 18-MoCap.key -

More information

CS354 Computer Graphics Character Animation and Skinning

CS354 Computer Graphics Character Animation and Skinning Slide Credit: Don Fussell CS354 Computer Graphics Character Animation and Skinning Qixing Huang April 9th 2018 Instance Transformation Start with a prototype object (a symbol) Each appearance of the object

More information

CMSC 425: Lecture 10 Basics of Skeletal Animation and Kinematics

CMSC 425: Lecture 10 Basics of Skeletal Animation and Kinematics : Lecture Basics of Skeletal Animation and Kinematics Reading: Chapt of Gregor, Game Engine Architecture. The material on kinematics is a simplification of similar concepts developed in the field of robotics,

More information

Human body animation. Computer Animation. Human Body Animation. Skeletal Animation

Human body animation. Computer Animation. Human Body Animation. Skeletal Animation Computer Animation Aitor Rovira March 2010 Human body animation Based on slides by Marco Gillies Human Body Animation Skeletal Animation Skeletal Animation (FK, IK) Motion Capture Motion Editing (retargeting,

More information

Using an Intermediate Skeleton and Inverse Kinematics for Motion Retargeting

Using an Intermediate Skeleton and Inverse Kinematics for Motion Retargeting EUROGRAPHICS 2000 / M. Gross and F.R.A. Hopgood (Guest Editors) Volume 19 (2000), Number 3 Using an Intermediate Skeleton and Inverse Kinematics for Motion Retargeting Jean-Sébastien Monzani, Paolo Baerlocher,

More information

Animations. Hakan Bilen University of Edinburgh. Computer Graphics Fall Some slides are courtesy of Steve Marschner and Kavita Bala

Animations. Hakan Bilen University of Edinburgh. Computer Graphics Fall Some slides are courtesy of Steve Marschner and Kavita Bala Animations Hakan Bilen University of Edinburgh Computer Graphics Fall 2017 Some slides are courtesy of Steve Marschner and Kavita Bala Animation Artistic process What are animators trying to do? What tools

More information

Animation. CS 465 Lecture 22

Animation. CS 465 Lecture 22 Animation CS 465 Lecture 22 Animation Industry production process leading up to animation What animation is How animation works (very generally) Artistic process of animation Further topics in how it works

More information

CS 231. Inverse Kinematics Intro to Motion Capture

CS 231. Inverse Kinematics Intro to Motion Capture CS 231 Inverse Kinematics Intro to Motion Capture Representation 1) Skeleton Origin (root) Joint centers/ bones lengths 2) Keyframes Pos/Rot Root (x) Joint Angles (q) 3D characters Kinematics study of

More information

Animation. CS 4620 Lecture 33. Cornell CS4620 Fall Kavita Bala

Animation. CS 4620 Lecture 33. Cornell CS4620 Fall Kavita Bala Animation CS 4620 Lecture 33 Cornell CS4620 Fall 2015 1 Announcements Grading A5 (and A6) on Monday after TG 4621: one-on-one sessions with TA this Friday w/ prior instructor Steve Marschner 2 Quaternions

More information

Reading. Topics in Articulated Animation. Character Representation. Animation. q i. t 1 t 2. Articulated models: Character Models are rich, complex

Reading. Topics in Articulated Animation. Character Representation. Animation. q i. t 1 t 2. Articulated models: Character Models are rich, complex Shoemake, Quaternions Tutorial Reading Topics in Articulated Animation 2 Articulated models: rigid parts connected by joints Animation They can be animated by specifying the joint angles (or other display

More information

6.837 Computer Graphics Hierarchical Modeling Wojciech Matusik, MIT EECS Some slides from BarbCutler & Jaakko Lehtinen

6.837 Computer Graphics Hierarchical Modeling Wojciech Matusik, MIT EECS Some slides from BarbCutler & Jaakko Lehtinen 6.837 Computer Graphics Hierarchical Modeling Wojciech Matusik, MIT EECS Some slides from BarbCutler & Jaakko Lehtinen Image courtesy of BrokenSphere on Wikimedia Commons. License: CC-BY-SA. This content

More information

ME5286 Robotics Spring 2015 Quiz 1

ME5286 Robotics Spring 2015 Quiz 1 Page 1 of 7 ME5286 Robotics Spring 2015 Quiz 1 Total Points: 30 You are responsible for following these instructions. Please take a minute and read them completely. 1. Put your name on this page, any other

More information

INPUT PARAMETERS FOR MODELS I

INPUT PARAMETERS FOR MODELS I 9A-1 INPUT PARAMETERS FOR MODELS I Lecture Overview Equations of motion Estimation of muscle forces Required model parameters Body segment inertial parameters Muscle moment arms and length Osteometric

More information

Markerless human motion capture through visual hull and articulated ICP

Markerless human motion capture through visual hull and articulated ICP Markerless human motion capture through visual hull and articulated ICP Lars Mündermann lmuender@stanford.edu Stefano Corazza Stanford, CA 93405 stefanoc@stanford.edu Thomas. P. Andriacchi Bone and Joint

More information

Thiruvarangan Ramaraj CS525 Graphics & Scientific Visualization Spring 2007, Presentation I, February 28 th 2007, 14:10 15:00. Topic (Research Paper):

Thiruvarangan Ramaraj CS525 Graphics & Scientific Visualization Spring 2007, Presentation I, February 28 th 2007, 14:10 15:00. Topic (Research Paper): Thiruvarangan Ramaraj CS525 Graphics & Scientific Visualization Spring 2007, Presentation I, February 28 th 2007, 14:10 15:00 Topic (Research Paper): Jinxian Chai and Jessica K. Hodgins, Performance Animation

More information

Triangulation: A new algorithm for Inverse Kinematics

Triangulation: A new algorithm for Inverse Kinematics Triangulation: A new algorithm for Inverse Kinematics R. Müller-Cajar 1, R. Mukundan 1, 1 University of Canterbury, Dept. Computer Science & Software Engineering. Email: rdc32@student.canterbury.ac.nz

More information

Last Time? Inverse Kinematics. Today. Keyframing. Physically-Based Animation. Procedural Animation

Last Time? Inverse Kinematics. Today. Keyframing. Physically-Based Animation. Procedural Animation Last Time? Inverse Kinematics Navier-Stokes Equations Conservation of Momentum & Mass Incompressible Flow Today How do we animate? Keyframing Procedural Animation Physically-Based Animation Forward and

More information

Motion Editing with Data Glove

Motion Editing with Data Glove Motion Editing with Data Glove Wai-Chun Lam City University of Hong Kong 83 Tat Chee Ave Kowloon, Hong Kong email:jerrylam@cityu.edu.hk Feng Zou City University of Hong Kong 83 Tat Chee Ave Kowloon, Hong

More information

Planning in Mobile Robotics

Planning in Mobile Robotics Planning in Mobile Robotics Part I. Miroslav Kulich Intelligent and Mobile Robotics Group Gerstner Laboratory for Intelligent Decision Making and Control Czech Technical University in Prague Tuesday 26/07/2011

More information

Automatic Generation of Animatable 3D Personalized Model Based on Multi-view Images

Automatic Generation of Animatable 3D Personalized Model Based on Multi-view Images Automatic Generation of Animatable 3D Personalized Model Based on Multi-view Images Seong-Jae Lim, Ho-Won Kim, Jin Sung Choi CG Team, Contents Division ETRI Daejeon, South Korea sjlim@etri.re.kr Bon-Ki

More information

MOTION capture is a technique and a process that

MOTION capture is a technique and a process that JOURNAL OF L A TEX CLASS FILES, VOL. 6, NO. 1, JANUARY 2008 1 Automatic estimation of skeletal motion from optical motion capture data xxx, Member, IEEE, Abstract Utilization of motion capture techniques

More information

Last Time? Animation, Motion Capture, & Inverse Kinematics. Today. Keyframing. Physically-Based Animation. Procedural Animation

Last Time? Animation, Motion Capture, & Inverse Kinematics. Today. Keyframing. Physically-Based Animation. Procedural Animation Last Time? Animation, Motion Capture, & Inverse Kinematics Navier-Stokes Equations Conservation of Momentum & Mass Incompressible Flow Today How do we animate? Keyframing Procedural Animation Physically-Based

More information

3D Character animation principles

3D Character animation principles References: http://download.toonboom.com/files/templates/studio/animation_charts_pack2_studio.pdf (Breakdown poses) http://www.siggraph.org/education/materials/hypergraph/animation/character_animati on/principles/follow_through.htm

More information

MCE/EEC 647/747: Robot Dynamics and Control. Lecture 3: Forward and Inverse Kinematics

MCE/EEC 647/747: Robot Dynamics and Control. Lecture 3: Forward and Inverse Kinematics MCE/EEC 647/747: Robot Dynamics and Control Lecture 3: Forward and Inverse Kinematics Denavit-Hartenberg Convention Reading: SHV Chapter 3 Mechanical Engineering Hanz Richter, PhD MCE503 p.1/12 Aims of

More information

Announcements: Quiz. Animation, Motion Capture, & Inverse Kinematics. Last Time? Today: How do we Animate? Keyframing. Procedural Animation

Announcements: Quiz. Animation, Motion Capture, & Inverse Kinematics. Last Time? Today: How do we Animate? Keyframing. Procedural Animation Announcements: Quiz Animation, Motion Capture, & Inverse Kinematics On Friday (3/1), in class One 8.5x11 sheet of notes allowed Sample quiz (from a previous year) on website Focus on reading comprehension

More information

Last Time? Animation, Motion Capture, & Inverse Kinematics. Today. Keyframing. Physically-Based Animation. Procedural Animation

Last Time? Animation, Motion Capture, & Inverse Kinematics. Today. Keyframing. Physically-Based Animation. Procedural Animation Last Time? Animation, Motion Capture, & Inverse Kinematics Navier-Stokes Equations Conservation of Momentum & Mass Incompressible Flow Today How do we animate? Keyframing Procedural Animation Physically-Based

More information

Keyframing an IK Skeleton Maya 2012

Keyframing an IK Skeleton Maya 2012 2002-2012 Michael O'Rourke Keyframing an IK Skeleton Maya 2012 (This tutorial assumes you have done the Creating an Inverse Kinematic Skeleton tutorial in this set) Concepts Once you have built an Inverse

More information

Industrial Robots : Manipulators, Kinematics, Dynamics

Industrial Robots : Manipulators, Kinematics, Dynamics Industrial Robots : Manipulators, Kinematics, Dynamics z z y x z y x z y y x x In Industrial terms Robot Manipulators The study of robot manipulators involves dealing with the positions and orientations

More information

An object in 3D space

An object in 3D space An object in 3D space An object's viewpoint Every Alice object has a viewpoint. The viewpoint of an object is determined by: The position of the object in 3D space. The orientation of the object relative

More information

CS-184: Computer Graphics. Today

CS-184: Computer Graphics. Today CS-184: Computer Graphics Lecture #20: Motion Capture Prof. James O Brien University of California, Berkeley V2005-F20-1.0 Today Motion Capture 2 Motion Capture Record motion from physical objects Use

More information

Character Animation. Presented by: Pam Chow

Character Animation. Presented by: Pam Chow Character Animation Presented by: Pam Chow Overview Animation is a big topic We will concentrate on character animation as is used in many games today humans, animals, monsters, robots, etc. PLAZMO AND

More information

Math background. 2D Geometric Transformations. Implicit representations. Explicit representations. Read: CS 4620 Lecture 6

Math background. 2D Geometric Transformations. Implicit representations. Explicit representations. Read: CS 4620 Lecture 6 Math background 2D Geometric Transformations CS 4620 Lecture 6 Read: Chapter 2: Miscellaneous Math Chapter 5: Linear Algebra Notation for sets, functions, mappings Linear transformations Matrices Matrix-vector

More information

Character Animation 1

Character Animation 1 Character Animation 1 Overview Animation is a big topic We will concentrate on character animation as is used in many games today humans, animals, monsters, robots, etc. Character Representation A character

More information

3D on the WEB and Virtual Humans

3D on the WEB and Virtual Humans 3D on the WEB and Virtual Humans Christian Babski, Daniel Thalmann Computer Graphics Laboratory, Swiss Federal Institute of Technology CH1015 Lausanne, Switzerland {babski,boulic,thalmann}@lig.di.epfl.ch

More information

CS 231. Deformation simulation (and faces)

CS 231. Deformation simulation (and faces) CS 231 Deformation simulation (and faces) Deformation BODY Simulation Discretization Spring-mass models difficult to model continuum properties Simple & fast to implement and understand Finite Element

More information

2 ATTILA FAZEKAS The tracking model of the robot car The schematic picture of the robot car can be seen on Fig.1. Figure 1. The main controlling task

2 ATTILA FAZEKAS The tracking model of the robot car The schematic picture of the robot car can be seen on Fig.1. Figure 1. The main controlling task NEW OPTICAL TRACKING METHODS FOR ROBOT CARS Attila Fazekas Debrecen Abstract. In this paper new methods are proposed for intelligent optical tracking of robot cars the important tools of CIM (Computer

More information

Human Body Analysis with Biomechanics Criteria

Human Body Analysis with Biomechanics Criteria Human Body Analysis with Biomechanics Criteria J. M. Buades, F. J. Perales, and M. Gonzalez Computer Graphics & Vision Group Universitat de les Illes Balears (UIB) C/Valldemossa Km. 7.5, 07122 - Palma

More information

Animation. CS 4620 Lecture 32. Cornell CS4620 Fall Kavita Bala

Animation. CS 4620 Lecture 32. Cornell CS4620 Fall Kavita Bala Animation CS 4620 Lecture 32 Cornell CS4620 Fall 2015 1 What is animation? Modeling = specifying shape using all the tools we ve seen: hierarchies, meshes, curved surfaces Animation = specifying shape

More information

Design and Optimization of the Thigh for an Exoskeleton based on Parallel Mechanism

Design and Optimization of the Thigh for an Exoskeleton based on Parallel Mechanism Design and Optimization of the Thigh for an Exoskeleton based on Parallel Mechanism Konstantin Kondak, Bhaskar Dasgupta, Günter Hommel Technische Universität Berlin, Institut für Technische Informatik

More information

White Paper. OLGA Explained. Lasse Roren. Author:

White Paper. OLGA Explained. Lasse Roren. Author: White Paper OLGA Explained Author: Lasse Roren Revision: 05/001 - August 2005 Introduction OLGA (Optimized Lower-limb Gait Analysis) was introduced in 2003 as a plug-in which works with the Vicon Workstation

More information

Animator Friendly Rigging Part 2b

Animator Friendly Rigging Part 2b Animator Friendly Rigging Part 2b Creating animation rigs which solve problems, are fun to use, and don t cause nervous breakdowns. - 1- CONTENTS Review The Requirements... 5 Torso Animation Rig Requirements...

More information

Chapter 5. Transforming Shapes

Chapter 5. Transforming Shapes Chapter 5 Transforming Shapes It is difficult to walk through daily life without being able to see geometric transformations in your surroundings. Notice how the leaves of plants, for example, are almost

More information

Lab # 3 - Angular Kinematics

Lab # 3 - Angular Kinematics Purpose: Lab # 3 - Angular Kinematics The objective of this lab is to understand the relationship between segment angles and joint angles. Upon completion of this lab you will: Understand and know how

More information

CS 231. Deformation simulation (and faces)

CS 231. Deformation simulation (and faces) CS 231 Deformation simulation (and faces) 1 Cloth Simulation deformable surface model Represent cloth model as a triangular or rectangular grid Points of finite mass as vertices Forces or energies of points

More information

Inverse Kinematics Programming Assignment

Inverse Kinematics Programming Assignment Inverse Kinematics Programming Assignment CS 448D: Character Animation Due: Wednesday, April 29 th 11:59PM 1 Logistics In this programming assignment, you will implement a simple inverse kinematics solver

More information

Kinematics. Kinematics analyzes the geometry of a manipulator, robot or machine motion. The essential concept is a position.

Kinematics. Kinematics analyzes the geometry of a manipulator, robot or machine motion. The essential concept is a position. Kinematics Kinematics analyzes the geometry of a manipulator, robot or machine motion. The essential concept is a position. 1/31 Statics deals with the forces and moments which are aplied on the mechanism

More information